Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1 vote • 1 answer • 4749 views
Group by latest date with conditions
I need your help!
Let's say I have this table:
| Instance | Date     | MetricID | Value |
|----------|----------|----------|-------|
| Marc     | 09/14/21 | 1        | 5     |
| Marc     | 09/14/21 | 2        | 2     |
| Marc     | 09/14/21 | 3        | 1     |
| John     | 09/14/21 | 1        | 10    |
| John     | 09/14/21 | 2        | 1     |
| John     | 09/14/21 | 3        | 1     |
| Marc     | 09/15/21 | 1        | 15    |
| Marc     | 09/15/21 | 2        | 0     |
| Marc     | 09/15/21 | 3        | 1     |
| John     | 09/15/21 | 1        | 10    |
| John     | 09/15/21 | 2        | 1     |
| John     | 09/15/21 | 3        | 0     |
And I want this:
| Instance | LatestDateMetric1 | LatestDateMetric2 | LatestDateMetric3 |
|----------|-------------------|-------------------|-------------------|
| Marc     | 09/15/21          | 09/14/21          | 09/15/21          |
| John     | 09/15/21          | 09/15/21          | 09/14/21          |
I tried this code. It looks a bit like what I want, except that it takes the value even if it is null, and the result is one row per metric rather than one column per metric.
SELECT "Instance", "MetricID", MAX("Date") as "LatestDate"
FROM "API_Metric2"
GROUP BY "Instance", "MetricID"
This is the result I got:
| Instance | MetricID | LatestDate |
|----------|----------|------------|
| Marc     | 1        | 09/15/21   |
| Marc     | 2        | 09/15/21   |
| Marc     | 3        | 09/15/21   |
| John     | 1        | 09/15/21   |
| John     | 2        | 09/15/21   |
| John     | 3        | 09/15/21   |
And I also tried this:
SELECT "Instance",
CASE WHEN "MetricID"=1 AND "Value" NOT NULL THEN MAX("Date") ELSE 0 END) AS "LatestDateMetric1",
CASE WHEN "MetricID"=2 AND "Value" NOT NULL THEN MAX("Date") ELSE 0 END) AS "LatestDateMetric2",
CASE WHEN "MetricID"=3 AND "Value" NOT NULL THEN MAX("Date") ELSE 0 END) AS "LatestDateMetric3"
FROM "StackExemple"
GROUP BY "Instance", "Date", "MetricID"
But I get this error message:
> Parse error at line 2, column 66. Encountered: "Value"
***Edit***: I also got this code, which seems to be working, but it's not taking the null values into account: it only displays 09/15/21 as the LatestDate for all metrics.
SELECT "InstanceName",
MAX(CASE WHEN "MetricID" = 4 THEN "Date" END) as "LatestProjectCreated",
MAX(CASE WHEN "MetricID" = 5 THEN "Date" END) as "LatestActionCreated",
MAX(CASE WHEN "MetricID" = 8 THEN "Date" END) as "LatestUserCreated"
FROM "API_InstanceMetric"
GROUP BY "InstanceName";
***Edit2***: The issue persists even after adding `"Value" IS NOT NULL`, as below:
SELECT "Instance",
MAX(CASE WHEN "MetricID" = 1 AND "Value" IS NOT NULL THEN "Date" END) as "LatestProjectCreated",
MAX(CASE WHEN "MetricID" = 2 AND "Value" IS NOT NULL THEN "Date" END) as "LatestActionCreated",
MAX(CASE WHEN "MetricID" = 3 AND "Value" IS NOT NULL THEN "Date" END) as "LatestUserCreated" FROM "StackExemple" GROUP BY "Instance";
no name sry
(11 rep)
Sep 15, 2021, 10:05 AM
• Last activity: Aug 6, 2025, 04:09 PM
1 vote • 1 answer • 469 views
Moving to innodb_file_per_table?
I have just noticed (unfortunately while trying to restore from backup), that on my production server there is a 35GB ibdata1 file!
As I understand I can reduce its size by moving to innodb_file_per_table.
However following the answers I have seen here (link below), you have to dump and drop all databases before you can start the conversion.
How long would this conversion take?
Is it possible to stagger the conversion and do it one database at a time?
Thanks.
https://dba.stackexchange.com/questions/8982/what-is-the-best-way-to-reduce-the-size-of-ibdata-in-mysql
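For reference, a sketch of what the staggered, per-table variant could look like (database and table names are placeholders). Rebuilding a table with `innodb_file_per_table` enabled moves its data into its own .ibd file, but the shared ibdata1 file itself never shrinks without the full dump/drop/restore described in the linked answer:

```sql
SET GLOBAL innodb_file_per_table = ON;
-- rebuild one table at a time; each rebuilt table lands in its own .ibd file
ALTER TABLE mydb.mytable ENGINE=InnoDB;
```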
pwnyrainbow
(19 rep)
Nov 25, 2015, 12:37 PM
• Last activity: Aug 6, 2025, 02:00 PM
0 votes • 1 answer • 776 views
ERROR 1071 (42000): Specified key was too long; max key length is 1000 bytes
An online optimization tool offered to add indexes. One of the indexes was created successfully, but I can't create the second one.
Do you think the indexes are going to reduce query run time?
Any ideas how I can get past the error?
The SQL query takes 10.1 secs and has over 380 executions in the slow log.
SELECT
l.ID,
post_title,
post_content,
post_name,
post_parent,
post_author,
post_status,
post_modified_gmt,
post_date,
post_date_gmt
FROM
(
SELECT
wp_posts.ID
FROM
wp_posts
WHERE
wp_posts.post_status IN ('publish')
AND wp_posts.post_type = 'post'
AND wp_posts.post_password = ''
AND wp_posts.post_date != '0000-00-00 00:00:00'
ORDER BY
wp_posts.post_modified ASC
LIMIT
100 OFFSET 214000
) o
JOIN wp_posts l ON l.ID = o.ID
Indexes that I need to create:
MariaDB [database]> ALTER TABLE wp_posts
    ADD INDEX wp_posts_idx_post_type_post_passw_post_statu
    (post_type, post_password, post_status);
> ERROR 1071 (42000): Specified key was too long; max key length is 1000 bytes
MariaDB [database]> ALTER TABLE wp_posts
    ADD INDEX wp_posts_idx_post_modified (post_modified);
Query OK, 453289 rows affected (10.839 sec)
Records: 453289 Duplicates: 0 Warnings: 0
---
CREATE TABLE `wp_posts` (
  `ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `post_author` bigint(20) unsigned NOT NULL DEFAULT 0,
  `post_date` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `post_date_gmt` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `post_content` longtext COLLATE utf8mb4_unicode_ci NOT NULL,
  `post_title` text COLLATE utf8mb4_unicode_ci NOT NULL,
  `post_excerpt` text COLLATE utf8mb4_unicode_ci NOT NULL,
  `post_status` varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'publish',
  `comment_status` varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'open',
  `ping_status` varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'open',
  `post_password` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `post_name` varchar(200) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `to_ping` text COLLATE utf8mb4_unicode_ci NOT NULL,
  `pinged` text COLLATE utf8mb4_unicode_ci NOT NULL,
  `post_modified` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `post_modified_gmt` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `post_content_filtered` longtext COLLATE utf8mb4_unicode_ci NOT NULL,
  `post_parent` bigint(20) unsigned NOT NULL DEFAULT 0,
  `guid` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `menu_order` int(11) NOT NULL DEFAULT 0,
  `post_type` varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'post',
  `post_mime_type` varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `comment_count` bigint(20) NOT NULL DEFAULT 0,
  PRIMARY KEY (`ID`),
  KEY `post_name` (`post_name`(191)),
  KEY `type_status_date` (`post_type`,`post_status`,`post_date`,`ID`),
  KEY `post_parent` (`post_parent`),
  KEY `post_author` (`post_author`),
  KEY `wp_posts_idx_post_modified` (`post_modified`)
) ENGINE=MyISAM AUTO_INCREMENT=463265 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
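For context on the error: with utf8mb4, MariaDB reserves up to 4 bytes per character, so the proposed index needs 4 × (20 + 255 + 20) = 1180 bytes, over the 1000-byte MyISAM key cap. A sketch of a prefix-index workaround (the 191-character prefix mirrors the existing `post_name(191)` key and brings the total to 4 × (20 + 191 + 20) = 924 bytes):

```sql
ALTER TABLE wp_posts
  ADD INDEX wp_posts_idx_post_type_post_passw_post_statu
  (post_type, post_password(191), post_status);
```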
execash
(1 rep)
May 8, 2020, 06:44 AM
• Last activity: Aug 6, 2025, 01:04 PM
1 vote • 1 answer • 1420 views
MySQLdump Exclude master-data
Is it possible to exclude the master data from a mysql dump? From the manual it seems like the only options are commented out or present. I don't need the data at all because it is going to a static system. Commented out will work, but I'm wondering if `0` or some other value would make it not present?
>Use this option to dump a master replication server to produce a dump file that can be used to set up another server as a slave of the master. It causes the dump output to include a CHANGE MASTER TO statement that indicates the binary log coordinates (file name and position) of the dumped server. These are the master server coordinates from which the slave should start replicating after you load the dump file into the slave.
>If the option value is 2, the CHANGE MASTER TO statement is written as an SQL comment, and thus is informative only; it has no effect when the dump file is reloaded. If the option value is 1, the statement is not written as a comment and takes effect when the dump file is reloaded. If no option value is specified, the default value is 1.
-https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html#option_mysqldump_master-data
My plan was to use:
mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=0 ...
but from the above entry that would write the `CHANGE MASTER TO` statement as an uncommented command.
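If the goal is simply to have no `CHANGE MASTER TO` at all, one option (a sketch; note that `0` is not a documented value for the option) is to leave `--master-data` off entirely, since mysqldump only emits the statement when the option is given:

```
mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob ...
```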
user3783243
(157 rep)
Feb 28, 2020, 04:59 PM
• Last activity: Aug 6, 2025, 08:08 AM
1 vote • 1 answer • 1044 views
How to install keyring_aws plugin in MySQL?
I am looking for a way to install the keyring_aws plugin in MySQL Community Edition.
Is it possible?
I went through a couple of docs:
> https://dev.mysql.com/doc/refman/5.7/en/keyring-aws-plugin.html
> https://dev.mysql.com/doc/refman/5.7/en/keyring-installation.html
I also tried to find the keyring_aws.so file, but couldn't find it anywhere.
I checked in the MySQL plugin dir as well.
root@server:~# ls -l /usr/lib/mysql/plugin/
total 644
-rw-r--r-- 1 root root 21224 Jan 22 17:26 adt_null.so
-rw-r--r-- 1 root root 6288 Jan 22 17:26 auth_socket.so
-rw-r--r-- 1 root root 44144 Jan 22 17:26 connection_control.so
-rw-r--r-- 1 root root 108696 Jan 22 17:26 innodb_engine.so
-rw-r--r-- 1 root root 88608 Jan 22 17:26 keyring_file.so
-rw-r--r-- 1 root root 154592 Jan 22 17:26 libmemcached.so
-rw-r--r-- 1 root root 9848 Jan 22 17:26 locking_service.so
-rw-r--r-- 1 root root 10840 Jan 22 17:26 mypluglib.so
-rw-r--r-- 1 root root 6288 Jan 22 17:26 mysql_no_login.so
-rw-r--r-- 1 root root 56064 Jan 22 17:26 rewriter.so
-rw-r--r-- 1 root root 56936 Jan 22 17:26 semisync_master.so
-rw-r--r-- 1 root root 14768 Jan 22 17:26 semisync_slave.so
-rw-r--r-- 1 root root 27568 Jan 22 17:26 validate_password.so
-rw-r--r-- 1 root root 31296 Jan 22 17:26 version_token.so
Can we build this .so file from scratch? If yes, where can I find the source?
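For reference, per the keyring installation doc linked above, a keyring plugin that is present in the plugin dir is loaded early in startup, along the lines of the sketch below. This presupposes a keyring_aws.so on disk, which ships with MySQL Enterprise Edition rather than with the Community builds:

```ini
[mysqld]
early-plugin-load=keyring_aws.so
```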
karthikeayan
(193 rep)
Feb 28, 2019, 12:33 PM
• Last activity: Aug 6, 2025, 02:04 AM
0 votes • 1 answer • 138 views
How to configure mysql group replication with different ports?
I have a k8s cluster and each `mysql` instance is well connected together! But I have another `mysql` server outside the cluster! Each `mysql` inside the cluster has the default `mysql` port, which is `3306`, and an external port, which is something random! So when I start `mysql` group replication with the instances that are inside the cluster, everything works fine!
The thing is, the `mysql` instance that is outside of the cluster is trying to connect to the default port `3306` with the `repl` user, but it should be connecting to the randomly generated port, and I don't know how to make it connect to the port I want.
**How can I make the outside instance use that randomly generated port to connect to the other instances inside the cluster for MySQL group replication?**
Here is my error log:
error connecting to master 'repl@db1-headless:3306'
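One documented knob worth checking (a sketch; the hostname and port below are hypothetical): group members advertise the address that other members and recovering replicas connect to via `report_host`/`report_port`, so each in-cluster member can be made to advertise its externally reachable endpoint instead of the in-pod 3306:

```ini
[mysqld]
report_host = db1.example.com   # hypothetical externally reachable hostname
report_port = 32061             # hypothetical randomly assigned external port
```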
Hasan Parasteh
(103 rep)
Aug 13, 2022, 02:31 PM
• Last activity: Aug 6, 2025, 01:03 AM
0 votes • 1 answer • 528 views
Insert into table select - Replication lag - Percona Server 5.6
I have two MySQL instances (Percona server 5.6.31) in Master-Slave replication setup.
I have set the below configuration:
1. ROW based replication is set.
2. Transaction Isolation is set to read-committed.
Today, there was an insert running on my Master, in this format:
INSERT INTO table1 SELECT * FROM table2
Table 2 has 200 million rows.
Though only 5000 records were inserted, the operation lasted for 30 minutes, and I observed replication lag during the insert.
I have LOAD DATA INFILE disabled due to security concerns, so I can't use that for the insert either.
I came across an article from Percona which says this can be resolved with ROW-based replication in versions above 5.1.
1. In what way I can make my slave to be in sync with Master in such conditions?
2. Why does the slave lag here?
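For what it's worth, a common mitigation sketch (assuming `table2` has an integer primary key `id`): copy in bounded chunks so each replicated transaction stays small and the single-threaded applier on the slave never receives one 30-minute unit of work:

```sql
INSERT INTO table1 SELECT * FROM table2 WHERE id >= 1      AND id < 100001;
INSERT INTO table1 SELECT * FROM table2 WHERE id >= 100001 AND id < 200001;
-- ...and so on, in a loop
```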
tesla747
(1910 rep)
Dec 28, 2016, 04:08 PM
• Last activity: Aug 6, 2025, 12:02 AM
0 votes • 1 answer • 1374 views
mysql (mariadb) won't start after power outage (MacOS)
/var/log/system.log says (over and over)
Oct 13 19:34:01 Data-Server com.apple.xpc.launchd[1] (com.mariadb.server[2128]): Service exited with abnormal code: 1
Oct 13 19:34:01 Data-Server com.apple.xpc.launchd[1] (com.mariadb.server): Service only ran for 0 seconds. Pushing respawn out by 10 seconds.
/usr/local/var/mysql/Data-Server.local.err says (once recently, repeated a number of times well before the crash)
2020-10-13 2:44:25 20019181 [Warning] Aborted connection 20019181 to db: 'EcoReality' user: 'root' host: '10.1.2.2' (Got timeout reading communication packets)
The first thing I did was to shut down the launchctl entry, to keep it from constantly restarting.
# launchctl unload /Library/LaunchDaemons/com.mariadb.server.plist
Then I tried invoking mysqld manually:
# sudo /usr/local/bin/mysqld -u mysql
2020-10-13 20:46:09 0 [Note] /usr/local/bin/mysqld (mysqld 10.4.6-MariaDB) starting as process 2364 ...
2020-10-13 20:46:09 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-10-13 20:46:09 0 [Note] InnoDB: Uses event mutexes
2020-10-13 20:46:09 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2020-10-13 20:46:09 0 [Note] InnoDB: Number of pools: 1
2020-10-13 20:46:09 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-10-13 20:46:09 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2020-10-13 20:46:09 0 [Note] InnoDB: Completed initialization of buffer pool
2020-10-13 20:46:09 0 [ERROR] InnoDB: Invalid log block checksum. block: 81635496 checkpoint no: 2609153 expected: 296846624 found: 3735928559
2020-10-13 20:46:09 0 [ERROR] InnoDB: Missing MLOG_CHECKPOINT at 41797373564 between the checkpoint 41797373564 and the end 41797373440.
2020-10-13 20:46:09 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2020-10-13 20:46:09 0 [Note] InnoDB: Starting shutdown...
2020-10-13 20:46:09 0 [ERROR] Plugin 'InnoDB' init function returned error.
2020-10-13 20:46:09 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2020-10-13 20:46:09 0 [Note] Plugin 'FEEDBACK' is disabled.
2020-10-13 20:46:09 0 [Note] CONNECT: Version 1.06.0009 January 27, 2019
2020-10-13 20:46:09 0 [ERROR] Unknown/unsupported storage engine: InnoDB
2020-10-13 20:46:09 0 [ERROR] Aborting
So now I'm a bit stumped at the lack of diagnostic messages. Is there any way to coax more info out of mysqld when it goes down? Or should I just start incrementing `innodb_force_recovery` until something interesting happens?
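The `innodb_force_recovery` route is the documented next step when the redo log is corrupt after a power loss; a sketch, starting at the lowest value (values above 3 can permanently change data, so raise it only as far as needed to get a dump out):

```ini
[mysqld]
innodb_force_recovery = 1
```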
Jan Steinman
(191 rep)
Oct 14, 2020, 04:01 AM
• Last activity: Aug 5, 2025, 07:01 PM
0 votes • 0 answers • 15 views
AWS Aurora MySQL table archive running slow for one table
I'm working on archiving a bunch of tables in an environment where archiving was never done, with some data going back 10 years. I've written a script to perform the work, which loops through the primary key (an autoincrement `bigint`) *n* rows at a time, calling a procedure to archive the data to a separate table and then deleting that same data from the main table. I'm doing it in small batches to prevent any long-term locking of the main tables. It also sleeps between loop iterations. Batch size and sleep time are configurable via a config file. On my test system, for this table, I'm using a batch size of 1000 and a sleep time of 0. Instance class is r7g.4xl.
Most tables archive at several thousand rows per second, which is acceptable. But I have one table whose archiving is going very slowly, averaging under 550 rows/sec. There is no other activity in the database (there are other archives running against other DBs in the cluster at the same time, but killing them didn't improve the performance of this one). Here's the table schema (the schema for the archive table is identical):
CREATE TABLE `inbox_item` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  `user_id` bigint NOT NULL,
  `template_id` bigint NOT NULL,
  `url` varchar(4000) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  `created_at` datetime NOT NULL,
  `hash` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  `parameters` varchar(4000) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `hash_uidx` (`hash`),
  KEY `template_id_idx` (`template_id`),
  KEY `user_id_created_at_idx` (`user_id`,`created_at`)
) ENGINE=InnoDB AUTO_INCREMENT=442872663 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
Note that while there are two large `varchar`s, total actual data width is under 300 bytes. Here's the procedure that's being called:
CREATE DEFINER=`root`@`%` PROCEDURE `archive_inbox_item_proc`(IN pkmin bigint, IN pkmax bigint, IN querymax bigint)
begin
    declare exit handler for sqlexception
    begin
        get diagnostics condition 1
            @err = MYSQL_ERRNO, @msg = MESSAGE_TEXT;
        select -1;
        select concat('Error ', cast(@err as char), ': ', @msg) 'Error';
        rollback;
    end;

    start transaction;
    insert ignore into `inbox`.`inbox_item_archive`
        select arctable.* from `inbox`.`inbox_item` as arctable
        where created_at = pkmin and arctable.`id` = pkmin
          and arctable.`id` < querymax and arctable.`id` <= pkmax;
    select row_count();
    commit;
end

`pkmin` is always the actual minimum pkey value. There are no foreign keys or triggers referencing the table. Here's the table status:
Name: inbox_item
Engine: InnoDB
Version: 10
Row_format: Dynamic
Rows: 299879061
Avg_row_length: 243
Data_length: 72988737536
Max_data_length: 0
Index_length: 126937300992
Data_free: 45770342400
Auto_increment: 442872663
Create_time: 2025-03-28 06:15:36
Update_time: 2025-08-05 18:04:55
Check_time: NULL
Collation: utf8mb4_unicode_ci
Checksum: NULL
Create_options:
Comment:
Any ideas on what's causing this to run so slowly relative to other tables in other databases?
Swechsler
(153 rep)
Aug 5, 2025, 06:05 PM
0 votes • 1 answer • 1705 views
MySQL cursor always exits out of loop
The cursor query and the `select value` query return rows if I run them in `mysql`, but inside a cursor it always exits out of the loop.
Anything wrong here?
I've added "BEFORE LOOP", "EXIT" and "IN LOOP" selects so it prints where it is, but it always starts with `BEFORE LOOP` and then ends with `EXIT`.
DELIMITER //
CREATE PROCEDURE getTotal()
BEGIN
DECLARE HOSTID INTEGER;
DECLARE cITEMID INT;
declare finished bool default false;
DECLARE Total INT;
declare cur1 cursor for SELECT itemid FROM items WHERE hostid = 10579;
declare continue handler for not found set finished = true;
open cur1;
loop_1: loop
fetch cur1 into cITEMID;
SELECT "BEFORE LOOP";
if finished then
SELECT "EXIT";
leave loop_1;
end if;
SELECT "IN LOOP";
-- Test query
SELECT value from history_uint WHERE itemid = cITEMID ORDER BY itemid DESC LIMIT 1;
-- Final select query will look like this.
-- SET @Total := @Total + (SELECT value from history_uint WHERE itemid = cITEMID ORDER BY itemid DESC LIMIT 1);
-- SELECT @Total;
end loop;
close cur1;
END //
DELIMITER ;
Queries:
SELECT itemid FROM items WHERE hostid = 10579;
| itemid |
| 12345 |
| 12346 |
| 12347 |
SELECT value from history_uint WHERE itemid = 12345 ORDER BY itemid DESC LIMIT 1;
| value |
| 1 |
SELECT * from history_uint;
| itemid | value | clock (unixtimestamp) |
| 12345 | 13 | 4364564654654 |
| 12346 | 1 | 4364564654657 |
| 12347 | 16 | 4364564654654 |
| 12345 | 13 | 4364564654756 |
| 12346 | 2 | 4364564654753 |
| 12347 | 15 | 4364564654756 |
Note: The clock column value is just made up.
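One thing worth checking (an observation about MySQL's name resolution, not something stated in the post): inside a stored procedure, a declared local variable shadows a column of the same name, so `DECLARE HOSTID INTEGER` (NULL until assigned) may turn the cursor's `WHERE hostid = 10579` into a comparison against NULL, yielding an empty cursor whose first fetch immediately fires the NOT FOUND handler. That would match the BEFORE LOOP / EXIT output. A sketch of the rename:

```sql
DECLARE vHostId INTEGER;  -- renamed so it no longer shadows items.hostid
DECLARE cur1 CURSOR FOR SELECT itemid FROM items WHERE hostid = 10579;
```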
R0bert2
(121 rep)
Apr 1, 2020, 07:06 PM
• Last activity: Aug 5, 2025, 06:00 PM
0 votes • 2 answers • 140 views
Asking Suggestions regarding mysql optimization
I'm running a dedicated server with around 20-25 sites, almost all of them running WordPress installations, over a cPanel setup.
For a while it's mostly MySQL eating most of the CPU and hitting high load times:
mysql 0 61.69(cpu) 6.92(ram) /usr/sbin/mysqld
**Server config is**
Uptime 70 days
Operating System CentOS Linux 7 (Core) x64 File
Handles 14560 of 6511967
Processes 342
CPU Model AMD Ryzen 5 3600 6-Core Processor
Ram 64GB
I am trying to improve this and came across MySQLTuner; here is what it has to say after running MySQL with the performance schema on for 2 days or more.
This isn't exactly my forte, so the my.cnf is just a blend of what I tried with an older MySQLTuner suggestion, but I've heard the tool's output still requires a human touch.
Would appreciate some help in optimizing the settings.
>> MySQLTuner 1.7.19 - Major Hayden
>> Bug reports, feature requests, and downloads at http://mysqltuner.com/
>> Run with '--help' for additional options and output filtering
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 10.3.27-MariaDB-log
[OK] Operating on 64-bit architecture
-------- Log file Recommendations ------------------------------------------------------------------
[OK] Log file /var/lib/mysql/server.name.here.err exists
[--] Log file: /var/lib/mysql/server.name.here.err(4M)
[OK] Log file /var/lib/mysql/server.name.here.err is readable.
[OK] Log file /var/lib/mysql/server.name.here.err is not empty
[OK] Log file /var/lib/mysql/server.name.here.err is smaller than 32 Mb
[!!] /var/lib/mysql/server.name.here.err contains 31430 warning(s).
[!!] /var/lib/mysql/server.name.here.err contains 23132 error(s).
[--] 60 start(s) detected in /var/lib/mysql/server.name.here.err
[--] 1) 2020-12-07 7:35:16 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 2) 2020-12-01 14:35:35 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 3) 2020-11-30 16:10:14 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 4) 2020-11-30 16:07:53 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 5) 2020-11-01 1:57:12 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 6) 2020-10-10 19:28:45 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 7) 2020-10-10 19:28:32 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 8) 2020-09-30 3:36:14 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 9) 2020-09-28 17:58:16 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 10) 2020-09-25 18:38:33 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 51 shutdown(s) detected in /var/lib/mysql/server.name.here.err
[--] 1) 2020-12-07 7:35:07 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 2) 2020-12-01 14:35:27 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 3) 2020-11-30 16:09:53 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 4) 2020-11-30 16:07:33 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 5) 2020-11-01 1:57:09 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 6) 2020-10-10 19:28:39 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 7) 2020-10-10 19:28:26 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 8) 2020-09-30 3:34:34 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 9) 2020-09-28 17:56:38 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 10) 2020-09-25 18:36:55 0 [Note] /usr/sbin/mysqld: Shutdown complete
-------- Storage Engine Statistics -----------------------------------------------------------------
[--] Status: +Aria +CSV +InnoDB +MEMORY +MRG_MyISAM +MyISAM +PERFORMANCE_SCHEMA +SEQUENCE
[--] Data in MyISAM tables: 2.1G (Tables: 1387)
[--] Data in InnoDB tables: 3.2G (Tables: 2207)
[--] Data in MEMORY tables: 586.4K (Tables: 3)
[OK] Total fragmented tables: 0
-------- Analysis Performance Metrics --------------------------------------------------------------
[--] innodb_stats_on_metadata: OFF
[OK] No stat updates during querying INFORMATION_SCHEMA.
-------- Security Recommendations ------------------------------------------------------------------
[OK] There are no anonymous accounts for any database users
[OK] All database users have passwords assigned
[!!] There is no basic password file list!
-------- CVE Security Recommendations --------------------------------------------------------------
[--] Skipped due to --cvefile option undefined
-------- Performance Metrics -----------------------------------------------------------------------
[--] Up for: 2d 5h 57m 24s (167M q [862.613 qps], 2M conn, TX: 21024G, RX: 379G)
[--] Reads / Writes: 98% / 2%
[--] Binary logging is disabled
[--] Physical Memory : 62.8G
[--] Max MySQL memory : 43.5G
[--] Other process memory: 0B
[--] Total buffers: 5.0G global + 260.7M per thread (151 max threads)
[--] P_S Max memory usage: 104M
[--] Galera GCache Max memory usage: 0B
[OK] Maximum reached memory usage: 26.5G (42.17% of installed RAM)
[OK] Maximum possible memory usage: 43.5G (69.35% of installed RAM)
[OK] Overall possible memory usage with other process is compatible with memory available
[OK] Slow queries: 0% (324K/167M)
[OK] Highest usage of available connections: 55% (84/151)
[OK] Aborted connections: 0.00% (55/2878495)
[!!] name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
[!!] Query cache may be disabled by default due to mutex contention.
[OK] Query cache efficiency: 40.4% (102M cached / 254M selects)
[!!] Query cache prunes per day: 3479297
[OK] Sorts requiring temporary tables: 0% (11K temp sorts / 6M sorts)
[!!] Joins performed without indexes: 12813
[!!] Temporary tables created on disk: 66% (2M on disk / 3M total)
[OK] Thread cache hit rate: 98% (40K created / 2M connections)
[OK] Table cache hit rate: 95% (4K open / 4K opened)
[OK] table_definition_cache(2097152) is upper than number of tables(3862)
[OK] Open file limit used: 7% (2K/40K)
[OK] Table locks acquired immediately: 99% (13M immediate / 13M locks)
-------- Performance schema ------------------------------------------------------------------------
[--] Memory used by P_S: 104.0M
[--] Sys schema is installed.
-------- ThreadPool Metrics ------------------------------------------------------------------------
[--] ThreadPool stat is enabled.
[--] Thread Pool Size: 12 thread(s).
[--] Using default value is good enough for your version (10.3.27-MariaDB-log)
-------- MyISAM Metrics ----------------------------------------------------------------------------
[!!] Key buffer used: 24.9% (182M used / 734M cache)
[OK] Key buffer size / total MyISAM indexes: 700.0M/460.2M
[OK] Read Key buffer hit rate: 100.0% (1B cached / 41K reads)
[!!] Write Key buffer hit rate: 69.5% (804K cached / 558K writes)
-------- InnoDB Metrics ----------------------------------------------------------------------------
[--] InnoDB is enabled.
[--] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[OK] InnoDB buffer pool / data size: 4.0G/3.2G
[!!] Ratio InnoDB log file size / InnoDB Buffer pool size (0.78125 %): 16.0M * 2/4.0G should be equal to 25%
[!!] InnoDB buffer pool instances: 8
[--] Number of InnoDB Buffer Pool Chunk : 32 for 8 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 100.00% (102924116296 hits/ 102924220151 total)
[!!] InnoDB Write Log efficiency: 73.75% (4679039 hits/ 6344450 total)
[OK] InnoDB log waits: 0.00% (0 waits / 1665411 writes)
-------- AriaDB Metrics ----------------------------------------------------------------------------
[--] AriaDB is enabled.
[OK] Aria pagecache size / total Aria indexes: 128.0M/1B
[OK] Aria pagecache hit rate: 99.1% (277M cached / 2M reads)
-------- TokuDB Metrics ----------------------------------------------------------------------------
[--] TokuDB is disabled.
-------- XtraDB Metrics ----------------------------------------------------------------------------
[--] XtraDB is disabled.
-------- Galera Metrics ----------------------------------------------------------------------------
[--] Galera is disabled.
-------- Replication Metrics -----------------------------------------------------------------------
[--] Galera Synchronous replication: NO
[--] No replication slave(s) for this server.
[--] Binlog format: MIXED
[--] XA support enabled: ON
[--] Semi synchronous replication Master: OFF
[--] Semi synchronous replication Slave: OFF
[--] This is a standalone server
-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
Control warning line(s) into /var/lib/mysql/server.name.here.err file
Control error line(s) into /var/lib/mysql/server.name.here.err file
Configure your accounts with ip or subnets only, then update your configuration with skip-name-resolve=1
Increasing the query_cache size over 128M may reduce performance
We will suggest raising the 'join_buffer_size' until JOINs not using indexes are found.
See https://dev.mysql.com/doc/internals/en/join-buffer-size.html
(specially the conclusions at the bottom of the page).
When making adjustments, make tmp_table_size/max_heap_table_size equal
Reduce your SELECT DISTINCT queries which have no LIMIT clause
Before changing innodb_log_file_size and/or innodb_log_files_in_group read this: https://bit.ly/2TcGgtU
Variables to adjust:
query_cache_size (=0)
query_cache_type (=0)
query_cache_size (> 128M) [see warning above]
join_buffer_size (> 2.0M, or always use indexes with JOINs)
tmp_table_size (> 32M)
max_heap_table_size (> 32M)
innodb_log_file_size should be (=512M) if possible, so InnoDB total log files size equals to 25% of buffer pool size.
innodb_buffer_pool_instances(=4)
**And here is what my.cnf currently has**
**Note:** This is now edited based on suggestions by Wilson and Rick. I will save and restart MySQL and get back to you with updates.
[client-server]
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
[mysqld]
# Logging and performance measurement here
log-error=/var/lib/mysql/ryzen.dogestream.com.err
# for enhanced slow query log
log_slow_verbosity=query_plan,explain
performance-schema=1
max_allowed_packet=268435456
max_heap_table=32M
tmp_table_size=32M
open_files_limit=40000
# Buffer sizes
join_buffer_size=2M
key_buffer_size=700M
sort_buffer_size=2M
# InnoDB stuff goes here
innodb_file_per_table=1
innodb_buffer_pool_size=15G
innodb_log_file_size=16M
local-infile=0
# from 1024 to conserve 90% of CPU cycles used for function
innodb_lru_scan_depth=100
# should always match table_open_cache
innodb_open_files=9000
# Query stuff goes here
# from 128M to conserve RAM for more useful purposes
query_cache_size=0
# from 1 for OFF to avoid 3 million+ query cache prunes per day
query_cache_type=0
# from 2M to conserve RAM
query_cache_limit=0
slow_query_log=1
slow_query_log_file=mysql-slow.log
long_query_time=0.1
# It says cache it is here
table_definition_cache=-1
# from 3000 to reduce tables opened_tables count
table_open_cache=9000
# from 16 to accomodate your 84 concurrent users
thread_cache_size=256
Sawada Tsunayoshi
(1 rep)
Dec 9, 2020, 12:52 PM
• Last activity: Aug 5, 2025, 03:06 PM
0 votes • 2 answers • 139 views
Unable to Add New Database in CloudPanel VPS - Access denied for user 'root'@'localhost'
**Problem**
I'm facing an issue with my VPS running CloudPanel where I cannot add new databases due to MySQL root access being denied.
**Environment**
- CloudPanel on VPS
- MySQL Server
- Hosting Provider: Hostinger
**Error Message**
Access denied for user 'root'@'localhost'
**Steps I've Tried**
**1. Attempted direct MySQL login:**
mysql -u root -p
Result: Access denied error, tried multiple password combinations including:
- Root user password
- Hostinger password
- Default passwords like (12345, admin, root, etc..)
- No password
**2. Attempted MySQL safe mode:**
sudo systemctl stop mysql
sudo mysqld_safe --skip-grant-tables &
Result: Error message:
MySQL cannot start in safe mode because the required directory for the UNIX socket file does not exist.
**3. Attempted to kill running processes:**
sudo kill -9
Result: No process found with specified PID
**4. Created missing directory and set permissions:**
sudo mkdir -p /var/run/mysqld
sudo chown -R mysql:mysql /var/run/mysqld
**5. Restarted MySQL in safe mode and verified process:**
sudo mysqld_safe --skip-grant-tables &
ps aux | grep mysql
Result: Process found running
**6. Attempted passwordless root login:**
mysql -u root
Result: Still encountering access denied error
**Additional Context**
- Hostinger support was unable to resolve the issue
- CloudPanel tutorials show database creation through UI without command-line intervention
- Suspect issue might be related to CloudPanel's default MySQL user restrictions
*I have reached out to Hostinger support, but they were unable to provide a solution beyond what I have already attempted. Additionally, all the YouTube tutorials on setting up and creating a database in CloudPanel show a straightforward process with no access issues, as everything can be done directly from the UI without requiring any command-line input.
I suspect it might be related to CloudPanel’s default MySQL user restrictions or authentication settings, but I’m not sure how to proceed. Has anyone encountered a similar issue or knows how to regain root access?*
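For completeness, the documented reset sequence once mysqld is running with `--skip-grant-tables` (step 5 above); the password here is a placeholder:

```sql
FLUSH PRIVILEGES;  -- reload the grant tables so account management works again
ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewStrongPassword!';
```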
Joe
(1 rep)
Feb 2, 2025, 10:31 AM
• Last activity: Aug 5, 2025, 02:10 PM
0 votes • 1 answer • 1541 views
MySQL FULLTEXT Match exponentially slower with search phrase
I'm trying to track down some performance issues in my FULLTEXT MATCH() AGAINST() query, and it seems to be related (at least in part) to queries requiring an exact phrase.
My table stores system process information, has roughly 2M rows, a `keywords` TEXT column, and a FULLTEXT index on said column. The keywords column averages ~600 characters, with several "words" in that column containing one or many special characters.
The query I'm trying to run is an exact match on `"net.exe"` (the rest of the query text was lost in formatting):
select id from processes where id "
>
> A phrase that is enclosed within double quote (") characters matches
> only rows that contain the phrase literally, as it was typed. The
> full-text engine splits the phrase into words and performs a search in
> the FULLTEXT index for the words. Nonword characters need not be
> matched exactly: Phrase searching requires only that matches contain
> exactly the same words as the phrase and in the same order. For
> example, "test phrase" matches "test, phrase".
...and indeed, running queries with `... AGAINST('"net exe"' ...)` takes just as long, so it seems to be slow when searching for exact phrases in general.
My latest theory is that because my table has process info (e.g. system paths and cmdline arguments, which have many special characters), the normal FULLTEXT index isn't useful for my query and MySQL is effectively re-indexing the whole table when I search for phrases like `"net.exe"`. Some supporting evidence for this is that the original creation of the FULLTEXT index took roughly 30 minutes. However, I find it hard to believe that would be the full explanation. Regardless, I ran `explain` on my query (which itself took 30 minutes to resolve), and got the following:
(The `explain` output was largely lost in formatting; the notable part was `Ft_hints: sorted`, which seems to be due only to the lack of quotes. If I run `explain` when querying for `"net"` it goes back to `Ft_hints: no_ranking`.)
Lastly, I tried running `CHECK TABLE` and even making a fresh temp table with only the id and keywords columns, but the above numbers were consistent, so I don't feel this is specific to my table's condition.
Eric
(103 rep)
Oct 4, 2022, 11:51 PM
• Last activity: Aug 5, 2025, 11:03 AM
0 votes • 1 answer • 559 views
MySQL query - Passing variable into subquery's derived table
Is there a way to pass a variable into a subquery, and use that variable in a derived table in the subquery?
I am trying to identify "active" organizations, and there are multiple actions that constitute "activity". One is creating an invoice, represented here (others not shown for the sake of readability). However, because there are more types of actions, I cannot take the simpler approach of just querying the `Invoice` table, as in the `thisWorks` column.
Therefore, I am trying to create a derived table where multiple sources can be queried, while also filtering on the `@start` and `@end` timestamps, as seen in the `thisDoesntWork` column.
Here is a simplified version of what the query looks like:
select
@row:=@row+1 as row,
@sixPrior:=date_sub(date_sub(date_sub(curdate(), interval dayofweek(curdate())-2 day), interval @row-1 week), interval 6 month) as sixMonthsPrior,
@start:=date_sub(date_sub(curdate(), interval dayofweek(curdate())-2 day), interval @row-1 week) as start,
@end:=date_sub(date_add(curdate(), interval dayofweek(curdate())+5 day), interval @row-1 week) as end,
@sixPrior as sixCheck,
@start as startCheck,
@end as endCheck,
( select count(distinct(organizationId))
from Invoice
where createdTime between @start and @end
) as thisWorks,
(
select count(distinct(organizationId))
from (
select organizationId
from Invoice
where createdTime between @start and @end
) d
) as thisDoesntWork
from Organization o
join (select @row:=0) r
limit 10;
In the example above, `thisDoesntWork` has the same value for every row (the first value). I assume it's got something to do with `@start` and `@end` never getting updated values.
Is there any way to filter a derived table with values that change each row, from an outer table?
I'm also open to alternative approaches, if derived table restrictions will never allow it to work.
I'm also using MySQL 5.7, so `LATERAL JOIN` is not a possibility.
Jared Anderton
(101 rep)
Dec 1, 2020, 06:05 PM
• Last activity: Aug 5, 2025, 10:03 AM
0 votes • 2 answers • 4242 views
Access denied when connecting to mongosqld with MySQL
I'm trying to connect to my MongoDB using MySQL through mongosqld. I have mongosqld running with a config file that looks like:
security:
  enabled: true
mongodb:
  net:
    uri: test-db
    auth:
      username: usertest
      password: pass
schema:
  path: schema.drdl
I have it hosted on a [mongo ODBC manager](https://github.com/mongodb/mongo-odbc-driver/releases/) with `SERVER=127.0.0.1`, `PORT=3071`, `DATABASE=test_db`, `UID=usertest?source=admin`, `PWD=pass`. I am able to connect to and query this MongoDB through Excel using [Mongo's tutorial for that](https://docs.mongodb.com/bi-connector/current/connect/excel/), but I am not able to do the same with MySQL using [Mongo's tutorial](https://docs.mongodb.com/bi-connector/current/connect/mysql/). When I try to connect from the terminal with
mysql -u 'usertest?source=admin' --default-auth=mongosql_auth -p
I get `ERROR 1045 (28000): Access denied for user 'usertest'` and `handshake error: unable to saslStart conversation 0: Authentication failed.` from the mongosqld side. I am doing this on macOS. What could be causing this problem only when trying to connect from MySQL?
TheStrangeQuark
(101 rep)
Oct 8, 2018, 04:09 PM
• Last activity: Aug 5, 2025, 04:04 AM
0 votes • 1 answer • 1425 views
How to update mysql 5.7 to 8.0 Amazon linux 1, ec2
I'm using MySQL 8.0.14 for my project and I want to use it on my AWS EC2 server.
I upgraded from 5.5 to 5.7 by following this link: https://stackoverflow.com/questions/37027744/upgrade-mysql-to-5-6-on-ec2
But I can't find any information on upgrading to MySQL 8.0.
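A sketch of the MySQL Yum-repo upgrade path, assuming the 5.7 install came from the MySQL community Yum repo as in the linked answer (Amazon Linux 1 is EL6-compatible; the exact release-RPM URL below is an assumption, so check the MySQL Yum repository download page for the current one). Take a full backup first, since 8.0 upgrades are one-way:

```
sudo yum install https://dev.mysql.com/get/mysql80-community-release-el6-3.noarch.rpm
sudo yum-config-manager --disable mysql57-community
sudo yum-config-manager --enable mysql80-community
sudo yum update mysql-community-server
sudo mysql_upgrade -u root -p   # still needed for 8.0.14 (pre-8.0.16)
```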
An Dương
Jul 4, 2020, 10:52 AM
• Last activity: Aug 5, 2025, 03:08 AM
1 vote • 1 answer • 819 views
MySQLdump leads to exceeding max connections. Skip-Quick as a solution?
Every night I do a full mysqldump of a 17G DB in MySQL 5.7.32. This worked for years; now I am doing some heavy insert load during the night, which caused connections to rise to max_connections at exactly the backup time, leading to connection errors.
As the server has enough RAM, 64G (30G free), I increased max_connections from 150 to 300 as a first reaction.
However, looking at the dump command I found the option `--quick` (also enabled by default), which tells me that it is exporting row by row:
--single-transaction --routines --quick --compact
I am thinking of changing this to `--skip-quick`, but I hesitate to do so since I would need to verify the restore again, which is very time consuming.
Looking at the connections over time, I also noticed some interruptions around that time period. So maybe connections stack up because there is a block during the mysqldump?
The MySQL error log shows a large number of the following error, although continuously throughout the day rather than only at those time points:
Aborted connection 63182018 to db: 'mydb' user: 'test' host: 'myhost' (Got an error reading communication packets)
How would you approach this problem?
merlin
(323 rep)
Dec 27, 2020, 08:08 AM
• Last activity: Aug 5, 2025, 02:05 AM
0 votes • 1 answer • 375 views
How to pass, specific day month and year in the below query?
I have registrations recorded by date in a table and I want to show a whole month. I found an easy solution with the query below, but I do not know how to pass a specific day, month and year into it.
SELECT AAA.date_field,
IFNULL(BBB.val,0) val
FROM
(
SELECT date_field
FROM
(
SELECT MAKEDATE(YEAR('2014-02-06'),1)
+ INTERVAL (MONTH('2014-02-06')-1) MONTH
+ INTERVAL daynum DAY date_field
FROM
(
SELECT t*10+u daynum FROM
(SELECT 0 t UNION SELECT 1 UNION SELECT 2 UNION SELECT 3) A,
(SELECT 0 u UNION SELECT 1 UNION SELECT 2 UNION SELECT 3
UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7
UNION SELECT 8 UNION SELECT 9) B ORDER BY daynum
) AA
) AA WHERE MONTH(date_field) = MONTH('2014-02-06')
) AAA LEFT JOIN (SELECT date_field,val FROM school) BBB USING (date_field)
Can you please help me with how to pass the day, month and year into the above query?
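A minimal sketch of the idea, assuming a user variable is acceptable: bind the date once, then replace every `'2014-02-06'` literal in the query with `@d`:

```sql
SET @d = '2014-02-06';
-- e.g. the first-of-month expression from the query becomes:
SELECT MAKEDATE(YEAR(@d), 1) + INTERVAL (MONTH(@d) - 1) MONTH AS date_field;
```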
Prashant Barve
(101 rep)
Aug 6, 2016, 11:02 AM
• Last activity: Aug 4, 2025, 03:05 PM
0 votes • 2 answers • 534 views
MySQL subquery with WHERE filters not working as expected
I have a question/tagging system like StackExchange. I want to display the tags associated with each question and also show the total number of times the tag is used, unless the question has been taken offline or is suspended, then it is not included in the total number count.
Tables are:
QUESTIONS - includes fields: suspended and offline
TAGS - includes fields: tag_id and tag_name
TAGS_X - includes fields: tag_id, question_id
The query below almost works, but the subquery seems to return the total count of times the tag is used; the filtering on the suspended and offline fields is not functioning as I intended (it does not seem to filter on those conditions).
SELECT tags_x.tag_id, tags.tag_name, tags_x.question_id, questions.suspended, questions.offline,
(select count(tags_x.tag_id) from tags_x WHERE tags_x.tag_id=tags.tag_id AND questions.suspended = 0 AND questions.offline = 0) num
from tags_x
LEFT JOIN tags ON tags.tag_id = tags_x.tag_id
LEFT JOIN questions ON questions.question_id = tags_x.question_id
WHERE questions.suspended = 0 AND questions.offline = 0
Below shows a typical result. Tag 'a' actually shows up in 21 rows; one row is filtered out where the offline value is 1. The subquery count returns 22, but I really want it to show the filtered result of 21. It seems like the WHERE (offline, suspended) filters in the subquery are incorrectly applied.
Can you help me out to determine the correct way to do this?
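A sketch of one way to make the count honor the filters: give the counting subquery its own join to `questions`, so the suspended/offline conditions apply to the rows actually being counted rather than to the outer query's single row:

```sql
SELECT t.tag_id, t.tag_name,
       (SELECT COUNT(*)
          FROM tags_x x
          JOIN questions q ON q.question_id = x.question_id
         WHERE x.tag_id = t.tag_id
           AND q.suspended = 0
           AND q.offline = 0) AS num
  FROM tags t;
```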
HDer
(101 rep)
Jul 24, 2020, 03:08 AM
• Last activity: Aug 4, 2025, 12:02 PM
0 votes • 2 answers • 155 views
mysql query performance... got to be a better way
I have a query right now that, for lack of a better word, sucks. I'm looking for some help with respect to other ways to approach writing this query. Here are the details.
I've got a temp table built up with the following columns:
- `AssignmentSubType`
- `AssignmentValue`
- `OwnerUsergroupID`
- `AssociatedObjectID`
The basic gist of what I want to do is to find `AssociatedObjectID`s that have the same `AssignmentSubType`, `AssignmentValue`, and `OwnerUsergroupID`. This is because there is a ton of SQL down the road that really doesn't need to be run if I've got "duplicates" (same `AssignmentSubType`, `AssignmentValue`, and `OwnerUsergroupID`). Once I've got my list of duplicates, I run the next set of SQL on one of them, and when I've got the info back I join the rest of the duplicates back in for the final result set.
Example table data:
AssignmentSubType | AssignmentValue | OwnerUsergroupID | AssociatedObjectID
retailer          | Dicks           | 1                | 5
retailer          | Dicks           | 1                | 7
retailer          | Dicks           | 1                | 9
In this example I just want to do the calculations on associatedobjectid = 5, since 7 and 9 will be exactly the same, and then join 7 and 9 back in at the end.
I've got the following query that works but is very inefficient:
SELECT firstObject, f2.AssociatedObjectID
FROM (
SELECT firstObject, dupAss AS dups
FROM (
SELECT min(AssociatedObjectID) AS firstObject, group_concat(concat('-',AssociatedObjectID,'-')) AS dupAss
FROM (
SELECT * FROM tableFromAbove
) innerRes GROUP BY AssignmentSubType, AssignmentValue, OwnerUsergroupID
) outR
) outR2
LEFT JOIN tableFromAbove f2
ON outR2.dups LIKE concat('%-',f2.AssociatedObjectID,'-%') ORDER BY firstObject
This query will give me back a result set like the following:
firstObject | AssociatedObjectID
5           | 7
5           | 9
Like I said earlier in this post, I then use this result set to join back to the results from the rest of the queries for associatedobjectid = 5. Any thoughts on how I could restructure this so that it is more efficient?
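For comparison, a sketch that joins back on the three grouping columns directly, instead of string-matching a `group_concat` list with LIKE (same table and column names as above; a composite index on the three columns would help the join):

```sql
SELECT g.firstObject, t.AssociatedObjectID
  FROM tableFromAbove t
  JOIN (SELECT AssignmentSubType, AssignmentValue, OwnerUsergroupID,
               MIN(AssociatedObjectID) AS firstObject
          FROM tableFromAbove
         GROUP BY AssignmentSubType, AssignmentValue, OwnerUsergroupID) g
    ON g.AssignmentSubType = t.AssignmentSubType
   AND g.AssignmentValue   = t.AssignmentValue
   AND g.OwnerUsergroupID  = t.OwnerUsergroupID
 WHERE t.AssociatedObjectID <> g.firstObject
 ORDER BY g.firstObject;
```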
Josh
(159 rep)
Apr 19, 2018, 02:57 PM
• Last activity: Aug 4, 2025, 08:04 AM