
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
138 views
How to configure mysql group replication with different ports?
I have a k8s cluster in which all the `mysql` instances are connected to each other, plus another `mysql` server outside the cluster. Each `mysql` instance inside the cluster listens on the default port `3306` and is also exposed on an external, randomly assigned port. When I start `mysql` group replication with only the in-cluster instances, everything works fine. The problem is that the instance outside the cluster tries to connect on the default port `3306` with the `repl` user, when it should be connecting to the randomly generated external port, and I can't find where to specify that. **How can I make the outside instance use the randomly generated port when connecting to the in-cluster instances for group replication?** Here is my error log:
error connecting to master 'repl@db1-headless:3306'
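Not part of the original post, but for context: the address other members use to reach a group member (including for distributed recovery) is whatever that member advertises via report_host and report_port, which default to the local hostname and 3306. A minimal my.cnf sketch for each in-cluster member, assuming the externally reachable values are known (all values are placeholders):

```
[mysqld]
# Advertise the address/port that servers OUTSIDE the cluster can reach,
# not the in-cluster defaults.
report_host = db1.example.com
report_port = 31306
```

With this set, the outside instance should pick the advertised port up from the group metadata instead of assuming 3306.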
Hasan Parasteh (103 rep)
Aug 13, 2022, 02:31 PM • Last activity: Aug 6, 2025, 01:03 AM
4 votes
1 answer
153 views
Postgresql: Why is a join or subquery so much slower than a literal in a where clause?
I'm using PostgreSQL 14.17. My database schema has two tables:
Table "public.log_records"
    Column    |           Type           | Collation | Nullable |           Default            | Storage  | Compression | Stats target | Description
--------------+--------------------------+-----------+----------+------------------------------+----------+-------------+--------------+-------------
 id           | bigint                   |           | not null | generated always as identity | plain    |             |              |
 log_id       | bigint                   |           |          |                              | plain    |             |              |
 payload      | jsonb                    |           | not null |                              | extended |             |              |
 created_at   | timestamp with time zone |           |          | CURRENT_TIMESTAMP            | plain    |             |              |
Indexes:
    "log_records_pkey" PRIMARY KEY, btree (id)
    "idx_log_records_log_created" btree (log_id, created_at)
    "idx_log_records_log_id" btree (log_id)
Foreign-key constraints:
    "log_records_log_id_fkey" FOREIGN KEY (log_id) REFERENCES logs(id)
Access method: heap

                                                          Table "public.logs"
      Column      |           Type           | Collation | Nullable |           Default            | Storage  | Compression | Stats target | Description
------------------+--------------------------+-----------+----------+------------------------------+----------+-------------+--------------+-------------
 id               | bigint                   |           | not null | generated always as identity | plain    |             |              |
 name             | character varying(255)   |           | not null |                              | extended |             |              |
 retention_period | integer                  |           | not null |                              | plain    |             |              |
 created_at       | timestamp with time zone |           |          | CURRENT_TIMESTAMP            | plain    |             |              |
Indexes:
    "logs_pkey" PRIMARY KEY, btree (id)
    "idx_logs_id_name" btree (id, name)
    "logs_name_key" UNIQUE CONSTRAINT, btree (name)
Referenced by:
    TABLE "log_records" CONSTRAINT "log_records_log_id_fkey" FOREIGN KEY (log_id) REFERENCES logs(id)
Access method: heap
There are a small number of records in logs and not all of the log_ids are used in the log_records table:
=# select count(*) from logs;
 count
-------
    13
(1 row)

=# select count(*) from log_records;
  count
----------
 14552534
(1 row)

=# select count(distinct log_id) from log_records;
 count
-------
     8
(1 row)
I want to query the log_records table for log records belonging to a named log. However, if I pass the name of a log that has no records in the log_records table, the query is very slow:
# explain analyze 
SELECT e.id, e.payload, e.created_at, e.headers 
FROM log_records e JOIN logs l ON l.id = e.log_id 
WHERE l.name = 'Log1' 
ORDER BY e.id 
LIMIT 100;
                                                                             QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.43..244.61 rows=100 width=395) (actual time=7371.696..7371.698 rows=0 loops=1)
   ->  Nested Loop  (cost=0.43..2729140.04 rows=1117680 width=395) (actual time=7371.696..7371.697 rows=0 loops=1)
         Join Filter: (e.log_id = l.id)
         Rows Removed by Join Filter: 14552570
         ->  Index Scan using log_records_pkey on log_records e  (cost=0.43..2511191.29 rows=14529839 width=403) (actual time=0.012..4955.006 rows=14552570 loops=1)
         ->  Materialize  (cost=0.00..1.17 rows=1 width=8) (actual time=0.000..0.000 rows=1 loops=14552570)
               ->  Seq Scan on logs l  (cost=0.00..1.16 rows=1 width=8) (actual time=0.005..0.005 rows=1 loops=1)
                     Filter: ((name)::text = 'Log1'::text)
                     Rows Removed by Filter: 12
 Planning Time: 0.233 ms
 Execution Time: 7371.717 ms
(11 rows)
For a scalar subquery I get a different plan and better, but still poor, performance:
# explain analyze 
SELECT e.id, e.payload, e.created_at, e.headers
FROM log_records e 
WHERE e.log_id = 
  (SELECT id FROM logs WHERE name = 'Log1') 
ORDER BY e.id 
LIMIT 100;
                                                                        QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1.60..141.86 rows=100 width=395) (actual time=3686.420..3686.421 rows=0 loops=1)
   InitPlan 1 (returns $0)
     ->  Seq Scan on logs  (cost=0.00..1.16 rows=1 width=8) (actual time=0.007..0.008 rows=1 loops=1)
           Filter: ((name)::text = 'Log1'::text)
           Rows Removed by Filter: 12
   ->  Index Scan using log_records_pkey on log_records e  (cost=0.43..2547511.66 rows=1816230 width=395) (actual time=3686.419..3686.419 rows=0 loops=1)
         Filter: (log_id = $0)
         Rows Removed by Filter: 14552594
 Planning Time: 0.318 ms
 Execution Time: 3686.487 ms
(10 rows)
However, if I hardcode the log_id as a literal, I get good performance:
# explain analyze 
SELECT e.id, e.payload, e.created_at, e.headers 
FROM log_records e 
WHERE e.log_id = 13 
ORDER BY e.id 
LIMIT 100;
                                                                     QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=4.08..4.09 rows=1 width=395) (actual time=0.015..0.015 rows=0 loops=1)
   ->  Sort  (cost=4.08..4.09 rows=1 width=395) (actual time=0.014..0.015 rows=0 loops=1)
         Sort Key: id
         Sort Method: quicksort  Memory: 25kB
         ->  Index Scan using idx_log_records_log_id on log_records e  (cost=0.43..4.07 rows=1 width=395) (actual time=0.011..0.011 rows=0 loops=1)
               Index Cond: (log_id = 13)
 Planning Time: 0.098 ms
 Execution Time: 0.028 ms
(8 rows)
Why can't I get this kind of performance from the join or the subquery? And why is the subquery faster than the join, when most advice says not to use subqueries?
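A sketch of a commonly suggested fix, not part of the original post: give the planner an index that matches both the equality filter and the requested order, so the LIMIT can stop after the first 100 matching rows instead of walking the primary key and filtering:

```sql
-- Composite index: equality on log_id, then id in index order.
-- "WHERE log_id = ? ORDER BY id LIMIT 100" can then be a single index scan
-- that stops early, fast for both empty and heavily populated logs.
CREATE INDEX idx_log_records_log_id_id ON log_records (log_id, id);
```

The existing idx_log_records_log_id index can't by itself return rows in id order, which is why the planner falls back to the pkey-ordered scan with a filter.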
willn-cpx (43 rep)
Aug 1, 2025, 10:24 AM • Last activity: Aug 6, 2025, 12:50 AM
0 votes
1 answer
528 views
Insert into table select - Replication lag - Percona Server 5.6
I have two MySQL instances (Percona Server 5.6.31) in a master-slave replication setup, with the following configuration:

1. ROW-based replication.
2. Transaction isolation set to read-committed.

Today there was an insert running on my master, of the form

INSERT INTO table1 SELECT * FROM table2

Table2 has 200 million rows. Although only 5,000 records were inserted, the operation lasted 30 minutes, and I observed replication lag during it. I have LOAD DATA INFILE disabled due to security concerns, so I can't use that instead. I came across an article from Percona which says this is resolved in versions above 5.1 when ROW-based replication is used.

1. How can I keep my slave in sync with the master under such conditions?
2. Why does the slave lag here?
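A sketch of the usual mitigation, not from the original post: break the copy into small primary-key ranges so each transaction, and therefore each row-based binlog event group the slave must replay, stays small. Table names are the question's; the id column is a hypothetical primary key:

```sql
-- Copy in chunks and commit each one; repeat with advancing ranges.
-- Small transactions become small binlog event groups, which the
-- single-threaded 5.6 slave applier can keep up with.
INSERT INTO table1
SELECT * FROM table2
WHERE id >= 1 AND id < 10001;
```

Each chunk also holds its read view and locks only briefly, avoiding the long-running snapshot the one-shot INSERT ... SELECT held for 30 minutes.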
tesla747 (1910 rep)
Dec 28, 2016, 04:08 PM • Last activity: Aug 6, 2025, 12:02 AM
1 votes
1 answer
18476 views
Log-in as service account in SQL Server Management Studio
The stored procedure in production (which uses a CTE) is executed using a service account. However, the stored procedure returns an empty result, and when I tried to check this in SSMS, it turned out my domain account has no execute permission in production. I tried logging into SQL Server Management Studio with the service account but ran into some problems. The first was resolved by following this link: https://dba.stackexchange.com/questions/173785/the-certificate-chain-was-issued-by-an-authority-that-is-not-trusted However, once past that, the login fails as if no user name was provided: [screenshot] The question is: can I log in with a service account in SQL Server Management Studio?
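Not from the original post, but one way to test the service account's view of the data without logging in as it, assuming you can be granted IMPERSONATE on that login (all names are placeholders):

```sql
-- Run the procedure as the service account from your own SSMS session.
EXECUTE AS LOGIN = N'DOMAIN\svc_account';   -- hypothetical login
EXEC dbo.usp_TheProcedure;                  -- hypothetical procedure
REVERT;                                     -- switch back to your own login
```

This exercises the same permissions and row visibility the service account gets, which is usually what is needed to debug an empty result.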
Patrick (111 rep)
Oct 21, 2018, 07:27 AM • Last activity: Aug 5, 2025, 11:07 PM
1 votes
1 answer
1399 views
Redshift - is it possible to run query that select from external table - only if table exist
What I am trying to do is something like this:

select case
         when (/* sql statement that checks whether the table exists */)
         then (select count(*) from my_external_table)
         else 0
       end

I know that theoretically I could use the SVV_ALL views, but they sometimes cause the query to be slow. As an example of what I am looking for, this Postgres query would be great for me, but Redshift does not support the to_regclass function:

select case
         when exists (select * from (select to_regclass('my_table') as rg) where rg IS NOT NULL)
         then (select count(*) from my_table)
         else 0
       end
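Not from the original post: external tables are listed in the svv_external_tables catalog view, so the existence test itself is cheap to write (the schema name below is a placeholder). The catch is that Redshift resolves every table name in a query at parse time, so the combined CASE form still errors out when my_external_table is absent, whether or not that branch would run; the check only helps as a separate step:

```sql
-- Returns 1 if the external table exists, 0 otherwise.
SELECT COUNT(*) AS table_exists
FROM svv_external_tables
WHERE schemaname = 'my_schema'
  AND tablename  = 'my_external_table';
```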
Eyal leshem (145 rep)
Apr 27, 2022, 08:57 AM • Last activity: Aug 5, 2025, 10:04 PM
0 votes
1 answer
139 views
How do I add 2 missing nodes to MariaDB Galera Cluster using IST and not SST
I inherited an issue where a MariaDB Galera Cluster has only one out of 3 nodes working.

Node1: up and running in production, accepting reads and writes.
Node2: fresh install; it just needs to sync with node 1.
Node3: fresh install; it just needs to sync with node 1.

If I run service mysql start on Node2/Node3, it changes the state of wsrep_local_state_comment from Synced to Donor/Desynced and an SST is triggered, which means a lot of downtime. I have 2TB of databases and it's going to take about 5-7 hours for a newly added node to sync, and I can't afford the downtime (as an idea, during the day I generate 1GB/minute of binlogs and at night about 4GB/hour). I take a DB backup every night with Percona XtraBackup and I know how to restore it on a fresh server. I have the gcache set at 32GB. Is there a way of restoring the other 2 nodes without triggering an SST? I was thinking of restoring the backup of Node1 to Node2; maybe that would trigger an IST.
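Not from the original post, a sketch of the standard approach: restore the nightly XtraBackup copy onto the new node, then hand-write a grastate.dat from the coordinates the backup recorded, so the joiner requests an IST rather than an SST. This only works while the donor's gcache still holds every writeset since the backup, so with a 32GB gcache the restore-plus-join has to fit inside that window:

```
# /var/lib/mysql/grastate.dat on the restored node (values are placeholders;
# take uuid and seqno from the xtrabackup_galera_info file in the backup).
version: 2.1
uuid:    <cluster-state-uuid-from-backup>
seqno:   <seqno-from-backup>
safe_to_bootstrap: 0
```

On startup the node presents these coordinates and, if they fall within the donor's gcache, Galera ships only the missing writesets.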
netrangermike (1 rep)
Feb 21, 2023, 12:16 AM • Last activity: Aug 5, 2025, 09:06 PM
1 votes
1 answer
1770 views
Multiple rows vs Single row with JSON
What is the optimal way to do this?

- On the TV show page I need to get the progress of all episodes in that show.
- On the homepage I need to get continue_watching.
- On an episode I need to get that episode's progress.
- Some TV shows have more than 100 episodes.

Ex:
USER_ID | SHOW_ID | LAST_SEEN_ID | LAST_SEEN_TIME
10      | 5       | 12           | ---

USER_ID | EPISODE_ID | PROGRESS
10      | 10         | 15
10      | 11         | 20
10      | 12         | 95
OR
USER_ID | SHOW_ID | PROGRESS                     | LAST_SEEN_ID | LAST_SEEN_TIME
10      | 5       | {"10":15, "11":20, "12": 95} | 12           | ---
In PostgreSQL I can get the individual progress with: SELECT progress->'10' as progress... I think the first method is best, but it creates a lot of rows; could that make the DB slower?
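Not from the original post, a sketch of how cheap the per-show lookup stays under the first (row-per-episode) design; table and column names are hypothetical stand-ins for the example:

```sql
-- All episode progress for user 10 in show 5. With an index on
-- (user_id, episode_id) this reads only the ~100 relevant rows,
-- no matter how large the progress table grows.
SELECT ep.episode_id, ep.progress
FROM episode_progress AS ep
JOIN episodes AS e ON e.id = ep.episode_id
WHERE ep.user_id = 10
  AND e.show_id = 5;
```

Row count by itself rarely makes a well-indexed table slow; the access pattern is what matters.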
Angel Vega (11 rep)
Apr 1, 2020, 11:10 PM • Last activity: Aug 5, 2025, 08:04 PM
0 votes
1 answer
1374 views
mysql (mariadb) won't start after power outage (MacOS)
/var/log/system.log says (over and over)
Oct 13 19:34:01 Data-Server com.apple.xpc.launchd[1] (com.mariadb.server[2128]): Service exited with abnormal code: 1
Oct 13 19:34:01 Data-Server com.apple.xpc.launchd[1] (com.mariadb.server): Service only ran for 0 seconds. Pushing respawn out by 10 seconds.
/usr/local/var/mysql/Data-Server.local.err says (once recently, repeated a number of times well before the crash)
2020-10-13  2:44:25 20019181 [Warning] Aborted connection 20019181 to db: 'EcoReality' user: 'root' host: '10.1.2.2' (Got timeout reading communication packets)
First thing I did was to unload the launchctl entry, to keep it from constantly restarting:

# launchctl unload /Library/LaunchDaemons/com.mariadb.server.plist

Then I tried invoking mysqld manually:
# sudo  /usr/local/bin/mysqld -u mysql
2020-10-13 20:46:09 0 [Note] /usr/local/bin/mysqld (mysqld 10.4.6-MariaDB) starting as process 2364 ...
2020-10-13 20:46:09 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-10-13 20:46:09 0 [Note] InnoDB: Uses event mutexes
2020-10-13 20:46:09 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2020-10-13 20:46:09 0 [Note] InnoDB: Number of pools: 1
2020-10-13 20:46:09 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-10-13 20:46:09 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2020-10-13 20:46:09 0 [Note] InnoDB: Completed initialization of buffer pool
2020-10-13 20:46:09 0 [ERROR] InnoDB: Invalid log block checksum. block: 81635496 checkpoint no: 2609153 expected: 296846624 found: 3735928559
2020-10-13 20:46:09 0 [ERROR] InnoDB: Missing MLOG_CHECKPOINT at 41797373564 between the checkpoint 41797373564 and the end 41797373440.
2020-10-13 20:46:09 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2020-10-13 20:46:09 0 [Note] InnoDB: Starting shutdown...
2020-10-13 20:46:09 0 [ERROR] Plugin 'InnoDB' init function returned error.
2020-10-13 20:46:09 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2020-10-13 20:46:09 0 [Note] Plugin 'FEEDBACK' is disabled.
2020-10-13 20:46:09 0 [Note] CONNECT: Version 1.06.0009 January 27, 2019
2020-10-13 20:46:09 0 [ERROR] Unknown/unsupported storage engine: InnoDB
2020-10-13 20:46:09 0 [ERROR] Aborting
So now I'm a bit stumped at the lack of diagnostic messages. Is there any way to coax more info out of mysqld when it goes down? Or should I just start incrementing innodb_force_recovery until something interesting happens?
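Not from the original post, but the standard shape of that experiment: raise innodb_force_recovery one level at a time and take a logical dump as soon as the server stays up, rather than running production on it:

```
# my.cnf -- temporary recovery setting (a sketch; remove after dumping).
# Start at 1 and increase only as far as needed; levels 4 and above can
# permanently corrupt data, so treat the instance as read-only for a dump.
[mysqld]
innodb_force_recovery = 1
```

The "Missing MLOG_CHECKPOINT" error typically points at a redo log damaged by the outage, so dump-and-reinitialize is the usual endgame.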
Jan Steinman (191 rep)
Oct 14, 2020, 04:01 AM • Last activity: Aug 5, 2025, 07:01 PM
0 votes
0 answers
16 views
AWS Aurora MySQL table archive running slow for one table
I'm working on archiving a bunch of tables in an environment where archiving was never done, with some data going back 10 years. I've written a script to perform the work, which loops through the primary key (an autoincrement bigint) *n* rows at a time, calling a procedure that archives the data to a separate table and then deletes that same data from the main table. I'm doing it in small batches to prevent any long-term locking of the main tables, and the script sleeps between loop iterations. Batch size and sleep time are configurable via a config file. On my test system, for this table, I'm using a batch size of 1000 and a sleep time of 0. The instance class is r7g.4xl. Most tables archive at several thousand rows per second, which is acceptable. But one table's archiving is going very slowly, averaging under 550 rows/sec. There is no other activity in the database (there are other archives running against other DBs in the cluster at the same time, but killing them didn't improve the performance of this one). Here's the table schema (the schema for the archive table is identical):
CREATE TABLE inbox_item (
  id bigint NOT NULL AUTO_INCREMENT,
  user_id bigint NOT NULL,
  template_id bigint NOT NULL,
  url varchar(4000) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  created_at datetime NOT NULL,
  hash varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  parameters varchar(4000) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY hash_uidx (hash),
  KEY template_id_idx (template_id),
  KEY user_id_created_at_idx (user_id,created_at)
) ENGINE=InnoDB AUTO_INCREMENT=442872663 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
Note that while there are two large varchars, total actual data width is under 300 bytes. Here's the procedure that's being called:
CREATE DEFINER=`root`@`%` PROCEDURE archive_inbox_item_proc(IN pkmin bigint, IN pkmax bigint, IN querymax bigint)
begin
        declare exit handler for sqlexception
        begin
                get diagnostics condition 1
                    @err = MYSQL_ERRNO, @msg = MESSAGE_TEXT;
                select -1;
                select concat('Error ', cast(@err as char), ': ',@msg) 'Error';
                rollback;
        end;

        start transaction;
        insert ignore into inbox.inbox_item_archive
            select arctable.*
            from inbox.inbox_item as arctable
            where created_at = pkmin
              and arctable.id = pkmin
              and arctable.id < querymax
              and arctable.id <= pkmax;
        select row_count();
        commit;
end
pkmin is always the actual minimum pkey value. There are no foreign keys or triggers referencing the table. Here's the table status:
Name: inbox_item
         Engine: InnoDB
        Version: 10
     Row_format: Dynamic
           Rows: 299879061
 Avg_row_length: 243
    Data_length: 72988737536
Max_data_length: 0
   Index_length: 126937300992
      Data_free: 45770342400
 Auto_increment: 442872663
    Create_time: 2025-03-28 06:15:36
    Update_time: 2025-08-05 18:04:55
     Check_time: NULL
      Collation: utf8mb4_unicode_ci
       Checksum: NULL
 Create_options:
        Comment:
Any ideas on what's causing this table to archive so slowly relative to the other tables in other databases?
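Not from the original post: with Index_length nearly double Data_length, secondary-index maintenance on DELETE is a plausible suspect, in particular the unique hash index, whose entries are effectively random and so touch cold pages on every batch. A quick way to see how much each index weighs (a sketch; assumes the standard InnoDB stats tables are readable):

```sql
-- Per-index size as InnoDB records it; large random-ordered secondary
-- indexes make every DELETE dirty scattered pages.
SELECT index_name, stat_value * @@innodb_page_size AS bytes
FROM mysql.innodb_index_stats
WHERE database_name = 'inbox'
  AND table_name    = 'inbox_item'
  AND stat_name     = 'size';
```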
Swechsler (153 rep)
Aug 5, 2025, 06:05 PM
0 votes
1 answer
1705 views
MySQL cursor always exits out of loop
The cursor query and the select value query return rows if I run them in mysql, but inside a cursor it always exits the loop. Anything wrong here? I've added "BEFORE LOOP", "EXIT" and "IN LOOP" markers so it prints where it is, but it always prints BEFORE LOOP and then ends with EXIT.

DELIMITER //
CREATE PROCEDURE getTotal()
BEGIN
    DECLARE HOSTID INTEGER;
    DECLARE cITEMID INT;
    DECLARE finished BOOL DEFAULT FALSE;
    DECLARE Total INT;
    DECLARE cur1 CURSOR FOR SELECT itemid FROM items WHERE hostid = 10579;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET finished = TRUE;

    OPEN cur1;
    loop_1: LOOP
        FETCH cur1 INTO cITEMID;
        SELECT "BEFORE LOOP";
        IF finished THEN
            SELECT "EXIT";
            LEAVE loop_1;
        END IF;
        SELECT "IN LOOP";
        -- Test query
        SELECT value FROM history_uint WHERE itemid = cITEMID ORDER BY itemid DESC LIMIT 1;
        -- The final select query will look like this:
        -- SET @Total := @Total + (SELECT value FROM history_uint WHERE itemid = cITEMID ORDER BY itemid DESC LIMIT 1);
        -- SELECT @Total;
    END LOOP;
    CLOSE cur1;
END //
DELIMITER ;

Queries:

SELECT itemid FROM items WHERE hostid = 10579;
| itemid |
| 12345  |
| 12346  |
| 12347  |

SELECT value FROM history_uint WHERE itemid = 12345 ORDER BY itemid DESC LIMIT 1;
| value |
| 1     |

SELECT * FROM history_uint;
| itemid | value | clock (unixtimestamp) |
| 12345  | 13    | 4364564654654         |
| 12346  | 1     | 4364564654657         |
| 12347  | 16    | 4364564654654         |
| 12345  | 13    | 4364564654756         |
| 12346  | 2     | 4364564654753         |
| 12347  | 15    | 4364564654756         |

Note: The clock column values are just made up.
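Not from the original post, two common culprits worth ruling out. First, the NOT FOUND handler as declared fires for any SELECT ... INTO in the procedure that returns no row, not just the FETCH; scoping the handler to a block around the fetch removes that failure mode. Second, an unqualified table name resolves against the procedure's default database, which may differ from the session the queries were tested in. A sketch (the zabbix schema name is hypothetical, and ordering by clock rather than the constant itemid is an assumption about intent):

```sql
-- Handlers are block-scoped: declared here, this one reacts only to the FETCH.
loop_1: LOOP
  BEGIN
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET finished = TRUE;
    FETCH cur1 INTO cITEMID;
  END;
  IF finished THEN
    LEAVE loop_1;
  END IF;
  SELECT value
  FROM zabbix.history_uint          -- schema-qualified to avoid surprises
  WHERE itemid = cITEMID
  ORDER BY clock DESC
  LIMIT 1;
END LOOP;
```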
R0bert2 (121 rep)
Apr 1, 2020, 07:06 PM • Last activity: Aug 5, 2025, 06:00 PM
9 votes
1 answer
826 views
Cursoring over sys.databases skips databases
I know this question has been asked several times and I know the solution, but I am trying to understand the underlying cause of the problem. I have the following code to perform database backups:

DECLARE @Filename VARCHAR(256)
DECLARE @FileDate VARCHAR(15)
DECLARE @Path VARCHAR(50)
DECLARE @Name VARCHAR(50)

-- specify database backup directory
SET @Path = '\MyPath'

-- specify filename date
SELECT @FileDate = CONVERT(VARCHAR(20), GETDATE(), 112) + '_' + REPLACE(CONVERT(VARCHAR(20), GETDATE(), 108), ':', '')

DECLARE db_cursor CURSOR FOR
SELECT [name]
FROM master.sys.databases
WHERE [name] NOT IN ('master', 'msdb', 'model', 'tempdb')
  AND [state_desc] = 'ONLINE'

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @Name

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Filename = @Path + @Name + '_Full_Backup_' + @FileDate + '.bak'
    BACKUP DATABASE @Name TO DISK = @Filename WITH CHECKSUM, COMPRESSION
    FETCH NEXT FROM db_cursor INTO @Name
END

CLOSE db_cursor
DEALLOCATE db_cursor

Sometimes only some databases are backed up, suggesting that the cursor is not iterating over all of the rows returned, or that the query itself is not returning the names of all the databases it should. I am trying to understand *why* this happens. I know that the fix is to use a STATIC cursor, which suggests that the results of the underlying query

SELECT [name]
FROM master.sys.databases
WHERE [name] NOT IN ('master', 'msdb', 'model', 'tempdb')
  AND [state_desc] = 'ONLINE'

are changing, but I can't see what would be changing (no database names would change, and the error log doesn't suggest that the database state has changed).
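For reference, not from the original post: the fix alluded to is a single keyword. A STATIC cursor copies its result set into tempdb when it is opened, so nothing that happens to sys.databases afterwards, including transient state changes while backups run, can alter the iteration:

```sql
-- STATIC: materialize the database list once, at OPEN time.
DECLARE db_cursor CURSOR STATIC FOR
SELECT [name]
FROM master.sys.databases
WHERE [name] NOT IN ('master', 'msdb', 'model', 'tempdb')
  AND [state_desc] = 'ONLINE'
```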
SE1986 (2182 rep)
Mar 9, 2020, 05:06 PM • Last activity: Aug 5, 2025, 05:12 PM
0 votes
1 answer
2038 views
PostgreSQL get json key name after comparing values of keys
I have table which has a column of type json with name "demographic". I want to compare keys(predefined by me) by their values and the result of that should be the name of the key and not the value which later i will use to save to another column's rows. The json columns that i have and want to comp...
I have a table with a column of type json named "demographic". I want to compare keys (predefined by me) by their values, and the result should be the name of the key, not the value; later I will save that into another column's rows. The json whose values I want to compare looks like this:
{
   "he":{
      "he_vakiy":29384,
      "he_naiset":14803,
      "he_miehet":14581,
      "he_kika":1485,
      "he_0_2":850,
      "he_3_6"
   }
}
I want to compare e.g. "he_naiset" and "he_miehet", and whichever has the highest value should yield the name of that key. I've tried something like this:

select greatest(demographics->'he'->>'he_miehet', demographics->'he'->>'he_naiset') as greatest from demographics;

which compares the values and returns the highest value, but it doesn't tell me the name of the key. I also tried something like this:

with cte as (
  select *
       , genre.key as genre
       , row_number() over (partition by id order by value desc) as ord
  from base.maakunta
  cross join lateral json_each_text(base.maakunta.demographic->'he') genre
)
select genre, value, ord
from cte
where ord = 1;

But this one compares all the keys inside the JSON.
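Not from the original post, a minimal sketch for the two-key case, assuming the values are numeric (the ->> operator returns text, so the comparison needs a cast):

```sql
-- Emit the key name rather than the value; ties go to 'he_naiset' here.
SELECT CASE
         WHEN (demographic->'he'->>'he_naiset')::int >=
              (demographic->'he'->>'he_miehet')::int
         THEN 'he_naiset'
         ELSE 'he_miehet'
       END AS larger_key
FROM base.maakunta;
```

The json_each_text approach generalizes this to all keys; restricting it to a chosen set only needs a WHERE genre.key IN (...) filter before the row_number.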
eko (101 rep)
Mar 20, 2022, 02:48 PM • Last activity: Aug 5, 2025, 05:03 PM
2 votes
1 answer
1861 views
MariaDB Inner join is slow
Here is my SQL (with EXPLAIN):

EXPLAIN SELECT * FROM GBI gbi INNER JOIN VF_KRED kred ON gbi.vendor = kred.vendor;

[screenshot of the EXPLAIN output] The select SQL takes ~10 seconds to produce the outcome. Below are the indexes on the tables: VF_KRED table: [screenshot] GBI table: [screenshot] Any clues?
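Since the screenshots did not survive, a generic first check rather than a diagnosis: make sure the probed table's join column is indexed, and that both vendor columns have identical character sets and collations, since a mismatch silently disables index use on the join. Sketch:

```sql
-- Add the join index if it is missing (hypothetical index name).
ALTER TABLE VF_KRED ADD INDEX idx_vf_kred_vendor (vendor);

-- Compare the two column definitions, including charset/collation.
SHOW CREATE TABLE GBI;
SHOW CREATE TABLE VF_KRED;
```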
Akshay Lokur (121 rep)
Jun 25, 2019, 04:06 PM • Last activity: Aug 5, 2025, 04:05 PM
1 votes
0 answers
11 views
Usage of **point** data type
Running a MySQL (8.x) database on Windows IIS on AWS EC2. The application is for the aviation industry, so I used the **point** data type for location fields, assuming I would store latitude/longitude in them. I don't anticipate doing geometry calculations; all they need to know is the distance between two points, and they will probably trust some official source for that. I am unaware of how this data type actually works. What is one advantage of retaining **point**?
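Not from the original post: one concrete advantage of keeping POINT is that MySQL 8 can compute great-circle distance directly on stored points. A sketch with a hypothetical airports table:

```sql
-- Distance in metres between two stored POINT values
-- (with SRID 4326 the coordinates are geographic lat/long).
SELECT ST_Distance_Sphere(a.location, b.location) AS metres
FROM airports AS a
JOIN airports AS b ON b.code = 'LHR'
WHERE a.code = 'JFK';
```

Storing two DOUBLE columns works too, but then the haversine arithmetic lands in application SQL rather than one builtin call.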
Rohit Gupta (2126 rep)
Aug 5, 2025, 03:20 PM
0 votes
2 answers
140 views
Asking Suggestions regarding mysql optimization
I'm running a dedicated server with around 20-25 sites, almost all of them WordPress installations, over a cPanel setup. For a while now it has mostly been mysql eating the CPU and driving load times up:
mysql	0	  61.69(cpu)  6.92(ram)	/usr/sbin/mysqld
**Server config is**
Uptime             70 days 
Operating System   CentOS Linux 7 (Core) x64 File
Handles            14560 of 6511967 
Processes          342  
CPU Model          AMD Ryzen 5 3600 6-Core Processor
Ram                64GB
I am trying to improve this and came across mysqltuner; here is what it has to say after running MySQL with the performance schema on for 2 days or more. This isn't exactly my forte, so the my.cnf is just a blend of what I tried with an older mysqltuner suggestion, but I've heard the tool still requires a human touch. I would appreciate some help optimizing the settings.
>>  MySQLTuner 1.7.19 - Major Hayden 
 >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/ 
 >>  Run with '--help' for additional options and output filtering

[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 10.3.27-MariaDB-log
[OK] Operating on 64-bit architecture

-------- Log file Recommendations ------------------------------------------------------------------
[OK] Log file /var/lib/mysql/server.name.here.err exists
[--] Log file: /var/lib/mysql/server.name.here.err(4M)
[OK] Log file /var/lib/mysql/server.name.here.err is readable.
[OK] Log file /var/lib/mysql/server.name.here.err is not empty
[OK] Log file /var/lib/mysql/server.name.here.err is smaller than 32 Mb
[!!] /var/lib/mysql/server.name.here.err contains 31430 warning(s).
[!!] /var/lib/mysql/server.name.here.err contains 23132 error(s).
[--] 60 start(s) detected in /var/lib/mysql/server.name.here.err
[--] 1) 2020-12-07  7:35:16 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 2) 2020-12-01 14:35:35 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 3) 2020-11-30 16:10:14 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 4) 2020-11-30 16:07:53 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 5) 2020-11-01  1:57:12 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 6) 2020-10-10 19:28:45 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 7) 2020-10-10 19:28:32 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 8) 2020-09-30  3:36:14 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 9) 2020-09-28 17:58:16 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 10) 2020-09-25 18:38:33 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 51 shutdown(s) detected in /var/lib/mysql/server.name.here.err
[--] 1) 2020-12-07  7:35:07 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 2) 2020-12-01 14:35:27 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 3) 2020-11-30 16:09:53 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 4) 2020-11-30 16:07:33 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 5) 2020-11-01  1:57:09 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 6) 2020-10-10 19:28:39 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 7) 2020-10-10 19:28:26 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 8) 2020-09-30  3:34:34 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 9) 2020-09-28 17:56:38 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 10) 2020-09-25 18:36:55 0 [Note] /usr/sbin/mysqld: Shutdown complete

-------- Storage Engine Statistics -----------------------------------------------------------------
[--] Status: +Aria +CSV +InnoDB +MEMORY +MRG_MyISAM +MyISAM +PERFORMANCE_SCHEMA +SEQUENCE
[--] Data in MyISAM tables: 2.1G (Tables: 1387)
[--] Data in InnoDB tables: 3.2G (Tables: 2207)
[--] Data in MEMORY tables: 586.4K (Tables: 3)
[OK] Total fragmented tables: 0

-------- Analysis Performance Metrics --------------------------------------------------------------
[--] innodb_stats_on_metadata: OFF
[OK] No stat updates during querying INFORMATION_SCHEMA.

-------- Security Recommendations ------------------------------------------------------------------
[OK] There are no anonymous accounts for any database users
[OK] All database users have passwords assigned
[!!] There is no basic password file list!

-------- CVE Security Recommendations --------------------------------------------------------------
[--] Skipped due to --cvefile option undefined

-------- Performance Metrics -----------------------------------------------------------------------
[--] Up for: 2d 5h 57m 24s (167M q [862.613 qps], 2M conn, TX: 21024G, RX: 379G)
[--] Reads / Writes: 98% / 2%
[--] Binary logging is disabled
[--] Physical Memory     : 62.8G
[--] Max MySQL memory    : 43.5G
[--] Other process memory: 0B
[--] Total buffers: 5.0G global + 260.7M per thread (151 max threads)
[--] P_S Max memory usage: 104M
[--] Galera GCache Max memory usage: 0B
[OK] Maximum reached memory usage: 26.5G (42.17% of installed RAM)
[OK] Maximum possible memory usage: 43.5G (69.35% of installed RAM)
[OK] Overall possible memory usage with other process is compatible with memory available
[OK] Slow queries: 0% (324K/167M)
[OK] Highest usage of available connections: 55% (84/151)
[OK] Aborted connections: 0.00%  (55/2878495)
[!!] name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
[!!] Query cache may be disabled by default due to mutex contention.
[OK] Query cache efficiency: 40.4% (102M cached / 254M selects)
[!!] Query cache prunes per day: 3479297
[OK] Sorts requiring temporary tables: 0% (11K temp sorts / 6M sorts)
[!!] Joins performed without indexes: 12813
[!!] Temporary tables created on disk: 66% (2M on disk / 3M total)
[OK] Thread cache hit rate: 98% (40K created / 2M connections)
[OK] Table cache hit rate: 95% (4K open / 4K opened)
[OK] table_definition_cache(2097152) is upper than number of tables(3862)
[OK] Open file limit used: 7% (2K/40K)
[OK] Table locks acquired immediately: 99% (13M immediate / 13M locks)

-------- Performance schema ------------------------------------------------------------------------
[--] Memory used by P_S: 104.0M
[--] Sys schema is installed.

-------- ThreadPool Metrics ------------------------------------------------------------------------
[--] ThreadPool stat is enabled.
[--] Thread Pool Size: 12 thread(s).
[--] Using default value is good enough for your version (10.3.27-MariaDB-log)

-------- MyISAM Metrics ----------------------------------------------------------------------------
[!!] Key buffer used: 24.9% (182M used / 734M cache)
[OK] Key buffer size / total MyISAM indexes: 700.0M/460.2M
[OK] Read Key buffer hit rate: 100.0% (1B cached / 41K reads)
[!!] Write Key buffer hit rate: 69.5% (804K cached / 558K writes)

-------- InnoDB Metrics ----------------------------------------------------------------------------
[--] InnoDB is enabled.
[--] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[OK] InnoDB buffer pool / data size: 4.0G/3.2G
[!!] Ratio InnoDB log file size / InnoDB Buffer pool size (0.78125 %): 16.0M * 2/4.0G should be equal to 25%
[!!] InnoDB buffer pool instances: 8
[--] Number of InnoDB Buffer Pool Chunk : 32 for 8 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 100.00% (102924116296 hits/ 102924220151 total)
[!!] InnoDB Write Log efficiency: 73.75% (4679039 hits/ 6344450 total)
[OK] InnoDB log waits: 0.00% (0 waits / 1665411 writes)

-------- AriaDB Metrics ----------------------------------------------------------------------------
[--] AriaDB is enabled.
[OK] Aria pagecache size / total Aria indexes: 128.0M/1B
[OK] Aria pagecache hit rate: 99.1% (277M cached / 2M reads)

-------- TokuDB Metrics ----------------------------------------------------------------------------
[--] TokuDB is disabled.

-------- XtraDB Metrics ----------------------------------------------------------------------------
[--] XtraDB is disabled.

-------- Galera Metrics ----------------------------------------------------------------------------
[--] Galera is disabled.

-------- Replication Metrics -----------------------------------------------------------------------
[--] Galera Synchronous replication: NO
[--] No replication slave(s) for this server.
[--] Binlog format: MIXED
[--] XA support enabled: ON
[--] Semi synchronous replication Master: OFF
[--] Semi synchronous replication Slave: OFF
[--] This is a standalone server

-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
    Control warning line(s) into /var/lib/mysql/server.name.here.err file
    Control error line(s) into /var/lib/mysql/server.name.here.err file
    Configure your accounts with ip or subnets only, then update your configuration with skip-name-resolve=1
    Increasing the query_cache size over 128M may reduce performance
    We will suggest raising the 'join_buffer_size' until JOINs not using indexes are found.
             See https://dev.mysql.com/doc/internals/en/join-buffer-size.html 
             (specially the conclusions at the bottom of the page).
    When making adjustments, make tmp_table_size/max_heap_table_size equal
    Reduce your SELECT DISTINCT queries which have no LIMIT clause
    Before changing innodb_log_file_size and/or innodb_log_files_in_group read this: https://bit.ly/2TcGgtU 
Variables to adjust:
    query_cache_size (=0)
    query_cache_type (=0)
    query_cache_size (> 128M) [see warning above]
    join_buffer_size (> 2.0M, or always use indexes with JOINs)
    tmp_table_size (> 32M)
    max_heap_table_size (> 32M)
    innodb_log_file_size should be (=512M) if possible, so InnoDB total log files size equals to 25% of buffer pool size.
    innodb_buffer_pool_instances(=4)
**And here is what my.cnf currently has.** **Note:** this is now edited based on suggestions by Wilson and Rick; I will save, restart mysql, and get back to you with updates.
[client-server]

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d

[mysqld]
# Logging and performance measurement here
log-error=/var/lib/mysql/ryzen.dogestream.com.err
# for enhanced slow query log
log_slow_verbosity=query_plan,explain
performance-schema=1


max_allowed_packet=268435456
max_heap_table_size=32M
tmp_table_size=32M
open_files_limit=40000


# Buffer sizes 
join_buffer_size=2M
key_buffer_size=700M
sort_buffer_size=2M


# InnoDB stuff goes here
innodb_file_per_table=1
innodb_buffer_pool_size=15G
innodb_log_file_size=16M
local-infile=0
# from 1024 to conserve 90% of CPU cycles used for function
innodb_lru_scan_depth=100
# should always match table_open_cache
innodb_open_files=9000


# Query stuff goes here
# from 128M to conserve RAM for more useful purposes
query_cache_size=0
# from 1 for OFF to avoid 3 million+ query cache prunes per day
query_cache_type=0
# from 2M to conserve RAM
query_cache_limit=0
slow_query_log=1
slow_query_log_file=mysql-slow.log
long_query_time=0.1


# It says cache it is here
table_definition_cache=-1
# from 3000 to reduce tables opened_tables count
table_open_cache=9000
# from 16 to accomodate your 84 concurrent users
thread_cache_size=256
Sawada Tsunayoshi (1 rep)
Dec 9, 2020, 12:52 PM • Last activity: Aug 5, 2025, 03:06 PM
0 votes
2 answers
139 views
Unable to Add New Database in CloudPanel VPS - Access denied for user 'root'@'localhost'
**Problem** I'm facing an issue with my VPS running CloudPanel where I cannot add new databases because MySQL root access is denied.

**Environment**
- CloudPanel on VPS
- MySQL Server
- Hosting Provider: Hostinger

**Error Message** Access denied for user 'root'@'localhost'

**Steps I've Tried**

**1. Attempted direct MySQL login:**
mysql -u root -p
Result: Access denied error; tried multiple password combinations, including:
- Root user password
- Hostinger password
- Default passwords (12345, admin, root, etc.)
- No password

**2. Attempted MySQL safe mode:**
sudo systemctl stop mysql
sudo mysqld_safe --skip-grant-tables &
Result: Error message: MySQL cannot start in safe mode because the required directory for the UNIX socket file does not exist.

**3. Attempted to kill running processes:**
sudo kill -9 <PID>
Result: No process found with the specified PID

**4. Created the missing directory and set permissions:**
sudo mkdir -p /var/run/mysqld
sudo chown -R mysql:mysql /var/run/mysqld

**5. Restarted MySQL in safe mode and verified the process:**
sudo mysqld_safe --skip-grant-tables &
ps aux | grep mysql
Result: Process found running

**6. Attempted passwordless root login:**
mysql -u root
Result: Still encountering the access denied error

**Additional Context**
- Hostinger support was unable to resolve the issue
- CloudPanel tutorials show database creation through the UI without command-line intervention
- I suspect the issue might be related to CloudPanel's default MySQL user restrictions

*I have reached out to Hostinger support, but they were unable to provide a solution beyond what I have already attempted. Additionally, all the YouTube tutorials on setting up and creating a database in CloudPanel show a straightforward process with no access issues, as everything can be done directly from the UI without requiring any command-line input. I suspect it might be related to CloudPanel's default MySQL user restrictions or authentication settings, but I'm not sure how to proceed. Has anyone encountered a similar issue or knows how to regain root access?*
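Not from the original post, the documented recovery sequence once a connection under --skip-grant-tables does succeed (step 6): the grant tables are not loaded in that mode, so account-management statements fail until you reload them. The password below is a placeholder:

```sql
-- Reload the grant tables so ALTER USER is allowed, then reset root.
FLUSH PRIVILEGES;
ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewStrongPassword!';
```

If even mysql -u root is denied while mysqld_safe --skip-grant-tables is running, it is worth confirming the client is actually reaching that instance (socket path/port) and not another mysqld.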
Joe (1 rep)
Feb 2, 2025, 10:31 AM • Last activity: Aug 5, 2025, 02:10 PM
0 votes
1 answer
67 views
Cannot import a database dump on Postgres 13.14+ while it loads fine in Postgres 13.13
I'm experiencing a problem loading a PostgreSQL backup file (SQL format). The file defines a function after another function in which it is used. PostgreSQL 13.13 can load such a backup file, while PostgreSQL 13.14 fails:
ERROR:  function public.label_id_constant() does not exist
LINE 1:  SELECT public.uuid_increment($1, public.label_id_constant()...
                                          ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
QUERY:   SELECT public.uuid_increment($1, public.label_id_constant()) 
CONTEXT:  SQL function "label_id" during inlining
I've double-checked that SET check_function_bodies = false; is present in the dump file. I also looked for a way to disable the inlining during the dump load, but with no success. I've distilled the dump file into a minimal reproducible example, attached as a script below. If anybody has experienced anything similar, please help.
#!/bin/env bash

DUMP_FILE=$(mktemp)
trap "rm -f $DUMP_FILE" EXIT

cat - > "$DUMP_FILE" <<'EOF'
-- (minimal reproducible SQL dump elided)
EOF

echo "Testing with postgres 13.13" >&2

docker run -d \
       --name postgres-13.13 \
       -e POSTGRES_HOST_AUTH_METHOD=trust \
       -p 5432:5432 \
       postgres:13.13


echo "Waiting for postgres to start" >&2
while ! docker exec postgres-13.13 pg_isready -h localhost -U postgres -q; do
    sleep 1
done

cat "$DUMP_FILE" | psql -h localhost -U postgres -v ON_ERROR_STOP=1 --port 5432 -e -1 && echo "******** Success ********" || echo "******** Failure ********"


docker stop postgres-13.13
docker rm postgres-13.13

echo "Testing with postgres 13.14" >&2

docker run -d \
       --name postgres-13.14 \
       -e POSTGRES_HOST_AUTH_METHOD=trust \
       -p 5432:5432 \
       postgres:13.14

echo "Waiting for postgres to start" >&2
while ! docker exec postgres-13.14 pg_isready -h localhost -U postgres -q; do
    sleep 1
done

cat "$DUMP_FILE" | psql -h localhost -U postgres -v ON_ERROR_STOP=1 --port 5432 -e -1 && echo "******** Success ********" || echo "******** Failure ********"

docker stop postgres-13.14
docker rm postgres-13.14
--------

UPD: What I've already tried: setting SET jit = off; doesn't fix the problem.

UPD2:
1. I tried exporting our database using pg_dump instead of the CloudSQL export API. It gave me the same error.
2. I tried to export the database, load it into 13.13, then export it from 13.13 and load it into 13.14, but the error was the same again.

---

UPD: I successfully migrated the DB with the following script: https://paste.ubuntu.com/p/kgGGQzNcgp/ After migrating to PostgreSQL 17.5, the issue persists: if I dump the DB with pg_dump, I cannot load it, failing with the same error.
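Not from the original post and untested against this dump: since the failure happens while label_id is being inlined before its dependency exists, one workaround shape is to pre-create a stub of the late-defined function before replaying the dump. The return type and body below are pure placeholders, and the dump's own CREATE FUNCTION will then collide with the stub, so it would have to be dropped again just before that point; this is only a sketch of the idea:

```sql
-- Hypothetical stub; match the real function's return type.
CREATE FUNCTION public.label_id_constant() RETURNS uuid
LANGUAGE sql IMMUTABLE
AS $$ SELECT '00000000-0000-0000-0000-000000000000'::uuid $$;
```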
Renat (101 rep)
Jan 21, 2025, 08:12 AM • Last activity: Aug 5, 2025, 01:46 PM
1 votes
2 answers
145 views
How to concatenate multiple JSON arrays into a single array in MariaDB?
I'm working with a MariaDB database where I have a table with two columns: id and values, where values contains JSON arrays. Here is a simplified example of the data:

| id | values |
|----|-------------|
| 1 | "[1, 2, 3]" |
| 1 | "[5]" |
| 2 | "[4]" |

I want to group the rows by id and concatenate the JSON arrays in the values column into a single array, like this:

| id | values |
|----|---------------|
| 1 | "[1, 2, 3, 5]"|
| 2 | "[4]" |

I tried using JSON_ARRAYAGG, but it creates an array of arrays, and I can't figure out how to merge the arrays into one. How can I achieve this in MariaDB?
SELECT id, JSON_ARRAYAGG(`values`)
  FROM REC
GROUP BY id
Bonus question: How to output only unique values?
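Not from the original post, a sketch of one approach (needs MariaDB 10.6+ for JSON_TABLE, and assumes integer elements); the DISTINCT also answers the bonus question:

```sql
-- Explode every array into one row per element, then aggregate per id.
SELECT r.id,
       JSON_ARRAYAGG(DISTINCT jt.v) AS merged_values
FROM REC AS r,
     JSON_TABLE(r.`values`, '$[*]' COLUMNS (v INT PATH '$')) AS jt
GROUP BY r.id;
```

MariaDB's JSON_ARRAYAGG accepts DISTINCT (unlike MySQL's), which is what deduplicates the merged elements.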
Emax (111 rep)
Dec 16, 2024, 01:48 PM • Last activity: Aug 5, 2025, 01:04 PM
21 votes
4 answers
5278 views
SQL Server cardinality hint
Is there a way to 'inject' a cardinality estimate into the SQL Server optimizer (any version)? I.e., something similar to Oracle's cardinality hint. My motivation is driven by the article How Good Are Query Optimizers, Really? [1], where they test the influence of the cardinality estimator on the selection of a bad plan. It would therefore be sufficient if I could force SQL Server to 'estimate' the cardinalities precisely for complex queries.

---

[1] Leis, Viktor, et al. "How good are query optimizers, really?" Proceedings of the VLDB Endowment 9.3 (2015): 204-215.
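Not an official hint, but a well-known workaround sketch (not from the original post): wrap the input in TOP (@n) and tell the optimizer a different value of @n with OPTIMIZE FOR; the row goal then behaves like a cardinality hint for that branch of the plan. The table and predicate below are hypothetical:

```sql
-- At run time @n is effectively infinite, so all rows flow through,
-- but the plan is costed as if the derived table produced 1000 rows.
DECLARE @n BIGINT = 9223372036854775807;

SELECT t.*
FROM (SELECT TOP (@n) *
      FROM dbo.SomeTable
      WHERE SomeCol = 42) AS t
OPTION (OPTIMIZE FOR (@n = 1000));
```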
Radim Bača (233 rep)
Mar 31, 2017, 07:17 AM • Last activity: Aug 5, 2025, 12:16 PM
1 votes
1 answer
1537 views
MySQL Workbench Administrator - GUI "Access denied for user root " gotcha
Here is a small newbie problem I found and resolved; the behavior of the GUI is a little confusing, so I figured I would share. Running some Java examples, I have MySQL set up on Windows. After making a little progress, I ran into a problem with a sample database's JDBC connection. I wanted to look at that connection's admin user info in the Workbench to see what was going on. So I clicked "Users and Privileges", and the GUI popped up a message: "Access denied for user 'root'@'localhost' (using password: YES)". What? I know the root password; I set it myself. There was no opportunity to enter it in the GUI just then, only a rude popup. So I went hunting for a tip on how to update a config file for the GUI to read the root password.
charles ross (163 rep)
May 29, 2018, 03:38 PM • Last activity: Aug 5, 2025, 12:04 PM