Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
3223
views
Getting by list of ids: `unnest() JOIN` vs `= ANY()`
Here I have an array of 42 ids from a table of 800,000 rows, and I want to look up the rows with those ids.
I know of two ways to do this (besides creating a lot of parameters -- that one is harder to write -- assume it's off the table):
# unnest() JOIN
EXPLAIN ANALYZE
SELECT *
FROM unnest('{234095,2366560,234391,234480,239511,237504,239629,247756,239142,232372,2366563,238511,242756,243396,235301,240990,233823,234537,246556,240453,245295,238909,236157,233724,236514,245384,249595,239953,233381,232180,232242,232990,237354,2366572,232066,242180,239958,239036,2366578,2166609,2197037,236542}'::int[]) AS d(id)
JOIN phi.patient AS p ON d.id = p.id;
Nested Loop (cost=0.43..345.25 rows=100 width=133) (actual time=0.049..0.326 rows=42 loops=1)
-> Function Scan on unnest d (cost=0.00..1.00 rows=100 width=4) (actual time=0.015..0.025 rows=42 loops=1)
-> Index Scan using patient_pkey on patient p (cost=0.42..3.44 rows=1 width=129) (actual time=0.006..0.006 rows=1 loops=42)
Index Cond: (id = d.id)
Planning Time: 0.200 ms
Execution Time: 0.374 ms
(6 rows)
# = ANY()
EXPLAIN ANALYZE
SELECT *
FROM phi.patient
WHERE id = ANY('{234095,2366560,234391,234480,239511,237504,239629,247756,239142,232372,2366563,238511,242756,243396,235301,240990,233823,234537,246556,240453,245295,238909,236157,233724,236514,245384,249595,239953,233381,232180,232242,232990,237354,2366572,232066,242180,239958,239036,2366578,2166609,2197037,236542}');
Index Scan using patient_pkey on patient (cost=0.42..119.51 rows=42 width=129) (actual time=0.049..0.258 rows=42 loops=1)
Index Cond: (id = ANY ('{234095,2366560,234391,234480,239511,237504,239629,247756,239142,232372,2366563,238511,242756,243396,235301,240990,233823,234537,246556,240453,245295,238909,236157,233724,236514,245384,249595,239953,233381,232180,232242,232990,237354,2366572,232066,242180,239958,239036,2366578,2166609,2197037,236542}'::bigint[]))
Planning Time: 0.162 ms
Execution Time: 0.295 ms
(4 rows)
---
In this simple test, and in testing with ~700 ids, I see no discernible difference between these approaches.
But they produce different query plans. And in the past, I remember encountering performance differences (though unfortunately I can't seem to reproduce them now).
Are these two approaches materially different? Is there a reason to prefer one over the other?
One thing that occurred to me is that unnest can work for a composite key lookup (searching by two indexed fields, not just one), as sketched below.
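For illustration, a minimal sketch of that composite-key variant. It assumes a hypothetical two-part key (site_id, id) on phi.patient -- site_id is not in the original post -- and uses the multi-array form of unnest() (PostgreSQL 9.4+), which pairs the arrays element-wise:

SELECT p.*
FROM unnest('{1,1,2}'::int[], '{234095,234391,234480}'::int[]) AS d(site_id, id)
JOIN phi.patient AS p
  ON p.site_id = d.site_id
 AND p.id = d.id;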
Paul Draper
(800 rep)
Feb 28, 2020, 05:01 PM
• Last activity: Aug 6, 2025, 03:10 AM
2
votes
1
answers
1861
views
MariaDB Inner join is slow
Here is my SQL (with explain):
EXPLAIN SELECT *
FROM GBI gbi
INNER JOIN VF_KRED kred ON gbi.vendor = kred.vendor;
Giving below the outcome:
[screenshot: EXPLAIN output]
Now, the `select` SQL takes ~10 seconds to produce the outcome.
Below are the indexes on the tables:
- VF_KRED table:
[screenshot: indexes on VF_KRED]
Akshay Lokur
(121 rep)
Jun 25, 2019, 04:06 PM
• Last activity: Aug 5, 2025, 04:05 PM
0
votes
2
answers
155
views
mysql query performance... got to be a better way
I have a query right now that, for lack of a better word, sucks. I'm looking for some help with respect to other ways to approach writing this query. Here are the details.
I've got a temp table built up with the following columns:
- `AssignmentSubType`
- `AssignmentValue`
- `OwnerUsergroupID`
- `AssociatedObjectID`
The basic gist of what I want to do is to find `AssociatedObjectID`s that have the same `AssignmentSubType`, `AssignmentValue`, and `OwnerUsergroupID`. This is because there is a ton of SQL down the road that really doesn't need to be run if I've got "duplicates" (same `AssignmentSubType`, `AssignmentValue`, and `OwnerUsergroupID`). Once I've got my list of duplicates I run the next set of SQL on one of the duplicates; when I've got back info from that, I join back the rest of the duplicates for the final result set.
Example table data:
AssignmentSubType | AssignmentValue | OwnerUsergroupID | AssociatedObjectID
retailer | Dicks | 1 | 5
retailer | Dicks | 1 | 7
retailer | Dicks | 1 | 9
In this example I just want to do the calculations on AssociatedObjectID = 5, since 7 and 9 will be exactly the same, and then join back in 7 and 9 at the 'end'.
I've got the following query that works but is very inefficient
SELECT firstObject, f2.AssociatedObjectID
FROM (
SELECT firstObject, dupAss AS dups
FROM (
SELECT min(AssociatedObjectID) AS firstObject, group_concat(concat('-',AssociatedObjectID,'-')) AS dupAss
FROM (
SELECT * FROM tableFromAbove
) innerRes GROUP BY AssignmentSubType, AssignmentValue, OwnerUsergroupID
) outR
) outR2
LEFT JOIN tableFromAbove f2
ON outR2.dups LIKE concat('%-',f2.AssociatedObjectID,'-%') ORDER BY firstObject
this query will give me back a result set like the following
firstObject | AssociatedObjectID
5 | 7
5 | 9
Like I said earlier in this post, I then use this result set to join back to the results from the rest of the queries for AssociatedObjectID = 5. Any thoughts on how I could restructure this so that it is more efficient?
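One possible restructuring, sketched under the assumption that tableFromAbove is exactly as described in the post: compute the first object per duplicate group once, then join back on the three grouping columns instead of string-matching a GROUP_CONCAT list (which can never use an index):

SELECT g.firstObject, t.AssociatedObjectID
FROM (
    SELECT AssignmentSubType, AssignmentValue, OwnerUsergroupID,
           MIN(AssociatedObjectID) AS firstObject
    FROM tableFromAbove
    GROUP BY AssignmentSubType, AssignmentValue, OwnerUsergroupID
) AS g
JOIN tableFromAbove AS t
  ON  t.AssignmentSubType  = g.AssignmentSubType
  AND t.AssignmentValue    = g.AssignmentValue
  AND t.OwnerUsergroupID   = g.OwnerUsergroupID
  AND t.AssociatedObjectID <> g.firstObject   -- keep only the non-first duplicates
ORDER BY g.firstObject;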
Josh
(159 rep)
Apr 19, 2018, 02:57 PM
• Last activity: Aug 4, 2025, 08:04 AM
0
votes
2
answers
146
views
Moving tables to different database within same sql server
There is a SQL Server with around 100 databases on it. I have to query a few tables from one of the databases. When I query, it's very slow, and I think CPU utilization is very high at that time. I have also noticed queries against other tables from other services, which are affecting the overall performance of querying the database.
I am thinking of moving these tables to a different database within the same SQL Server. Do you think it will solve this issue, or will it not improve the performance of querying my tables? (I only care about my tables.) Will it have no impact because the new database will also be on the same SQL Server? Please answer these questions in detail.
Vivek Nuna
(101 rep)
Jun 21, 2023, 07:00 AM
• Last activity: Aug 3, 2025, 07:06 AM
3
votes
1
answers
461
views
How to deal with seldom used big database and postgresql?
I loaded into PostgreSQL (9.3) OpenStreetMap data for the whole of Europe (400 GB). Then I installed a geocoding API (Nominatim) that queries the database with GIS queries.
My problem is :
> This database is queried a few times a day, but because Postgres loads
> the data into its buffers on demand, my first query on a particular GPS
> point is always slow, and I do only one query per GPS point, so it's
> always slow (like 60 sec slow against 100 ms when the data is buffered).
What could I do here?
- I'm using an A6 Azure instance (4 cores, 28 GB memory, HDD).
- Server is Ubuntu 14.04.4 LTS
- Swapping is enabled
- There are indexes on the geometry columns queried.
- shared_buffers is 10 GB
- work_mem is 256 MB
- maintenance_work_mem is 4 GB
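A minimal mitigation sketch: explicitly warming the cache after a restart or between uses. This assumes the pg_prewarm extension, which ships with PostgreSQL 9.4+ (so it would require upgrading the 9.3 instance described above), and the relation names are Nominatim-style guesses, not taken from the post:

CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('placex');               -- main Nominatim search table (assumed)
SELECT pg_prewarm('idx_placex_geometry');  -- its geometry index (assumed name)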
remi bourgarel
(259 rep)
Oct 4, 2016, 12:31 PM
• Last activity: Aug 2, 2025, 12:08 PM
0
votes
1
answers
150
views
DB2 Workfile scan
We just migrated from LUW to z/OS, and we're finding that on z/OS the access-path explains show a new node named WFSCAN (workfile scan), which appears whenever there is a join, a GROUP BY, or any kind of sort.
The question is: is there any performance issue with WFSCAN? Should I be worried about the size of the pool it uses, or any other costs, in a production environment?
What should I do for better performance of workfile scans?
Hana Bzh
(101 rep)
Oct 17, 2018, 09:35 AM
• Last activity: Aug 2, 2025, 07:09 AM
1
votes
2
answers
142
views
Theoretical writes/sec limit
I have SolarWinds with an alert set to fire whenever we have more than 300 writes per second. I know 300 writes per second is fine, and the writes per second often go over 6,500. How can I work out the theoretical limit of writes per second, so I can alter the alert and make it actionable?
If "the number of writes should not exceed 85% of the disk capacity", how can I work out the number of writes that are possible with my current disk configuration?
jack
(419 rep)
Feb 3, 2020, 03:08 PM
• Last activity: Aug 2, 2025, 05:04 AM
2
votes
1
answers
924
views
Delete using join or using sub-query?
I need to delete rows from a table based on what is present in a temporary table.
For me, both of these statements work :
DELETE FROM main_table WHERE `id` IN (SELECT `deletable_id` FROM temporary_table);
and
DELETE main_table from main_table join temporary_table on main_table.id = temporary_table.deletable_id;
Which of the two is advisable, given that main_table will have about a billion rows and the temporary table only a few thousand?
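A hedged sketch of a third option worth weighing at that scale: chunk the delete so each transaction stays small (the LIMIT form is MySQL's single-table DELETE syntax; the batch size of 10000 is an arbitrary assumption). Repeat the statement until ROW_COUNT() returns 0:

DELETE FROM main_table
WHERE id IN (SELECT deletable_id FROM temporary_table)
LIMIT 10000;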
gaganbm
(141 rep)
Jun 12, 2015, 11:51 AM
• Last activity: Aug 1, 2025, 05:03 PM
0
votes
1
answers
145
views
Index for constantly updated rows
I have a cron job on my server that runs every minute; it looks for all registered people whose last connection was not updated in the last 10 minutes and changes the value of a column in the same table.
However, to update this record indicating that there was a connection, every 5 minutes the app triggers a request that performs an update.
My index would consist only of the status and the date of the last connection, without a primary key.
CREATE TABLE userwork (
user_id SERIAL PRIMARY KEY,
name VARCHAR(250),
owner_id INTEGER REFERENCES owner(owner_id),
lastconnection timestamptz,
status CHAR(1)
);
CREATE INDEX idx_user_lastconnection ON userwork (status, lastconnection);
My question is: can an index on rows that are constantly updated cause high stress on the server, or is it better to create it so that the UPDATE doesn't have to scan and filter rows instead of using an index?
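One variant worth considering, as a sketch only: a partial index covering just the rows the cron actually scans. The 'A' literal here is a stand-in for whatever status value marks an online user; it is not taken from the post:

CREATE INDEX idx_userwork_online_lastconnection
    ON userwork (lastconnection)
    WHERE status = 'A';  -- 'A' assumed to mean "online"

Rows leave the index as soon as their status changes, so it stays small, and updates to rows outside the predicate don't add entries to it.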
Tom
(438 rep)
Feb 5, 2024, 05:08 PM
• Last activity: Jul 31, 2025, 11:05 AM
3
votes
2
answers
410
views
Tracking stored procedure performance metrics
I’m trying to track impacts of stored procedure executions - CPU time, execution duration, logical/physical reads, memory grants, etc. - in order to visualize them. In particular, I'm looking for results of a stored procedure as a whole, not of any individual statements within.
I’ve explored several options - wrappers, DMVs, `sp_WhoIsActive`, custom logging procedures, etc. - but I’m still not fully convinced by the approaches I’ve used so far. How can I reliably capture stored procedure performance metrics over time?
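One baseline worth sketching: sys.dm_exec_procedure_stats aggregates CPU, duration, and I/O at the procedure level rather than per statement. The counters are cumulative and disappear when the plan leaves cache, so the usual pattern is to snapshot them on a schedule and diff successive snapshots:

SELECT OBJECT_NAME(object_id, database_id) AS proc_name,
       execution_count,
       total_worker_time,     -- CPU time, microseconds
       total_elapsed_time,    -- duration, microseconds
       total_logical_reads,
       total_physical_reads
FROM sys.dm_exec_procedure_stats
WHERE database_id = DB_ID();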
Kaaniche Emna
(31 rep)
Jul 24, 2025, 02:51 PM
• Last activity: Jul 29, 2025, 05:33 PM
-1
votes
1
answers
145
views
Same server, same query, different response time
I have a development server that has some problems accessing data; users report that it's sometimes far too slow.
The setup is:
* virtual server;
* 4 virtual CPUs;
* 8 GB of virtual memory;
* 80 GB of virtual HD (the real disk is an SSD); I still have 36 GB available;
* OS Debian 9;
* MySQL 5.6.47.
To rule out problems with the network and the web app, I simply run my queries directly on the host where MySQL is installed.
I enabled the slow-query log and found the slowest query. This query starts from a certain table, reported below:
CREATE TABLE MALICIOUS_TABLE (
  column_1 int(11) NOT NULL AUTO_INCREMENT,
  column_2 varchar(8) NOT NULL,
  column_3 datetime NOT NULL,
  column_4 int(11) NOT NULL,
  column_5 int(11) DEFAULT NULL,
  column_6 int(11) DEFAULT NULL,
  column_7 int(11) DEFAULT NULL,
  column_8 tinyint(1) DEFAULT NULL,
  column_9 datetime DEFAULT NULL,
  column_10 int(11) DEFAULT NULL,
  column_11 varchar(2048) DEFAULT 'column_11',
  column_12 tinyint(1) DEFAULT NULL,
  column_13 datetime DEFAULT NULL,
  column_14 tinyint(1) DEFAULT NULL,
  PRIMARY KEY (column_1),
  KEY fk_ual_aut_idx (column_2),
  KEY fk_aul_c_idx (column_4),
  KEY kf_ual_po_idx (column_5),
  KEY fk_ual_ute_idx (column_10),
  KEY column_1 (column_1),
  KEY column_2 (column_2),
  CONSTRAINT fk_aul_c FOREIGN KEY (column_4) REFERENCES t_table2 (column_4) ON DELETE NO ACTION ON UPDATE NO ACTION,
  CONSTRAINT fk_ual_aut FOREIGN KEY (column_2) REFERENCES t_tabl3 (column_2) ON DELETE NO ACTION ON UPDATE NO ACTION,
  CONSTRAINT fk_ual_po FOREIGN KEY (column_5) REFERENCES t_table4 (column_5) ON DELETE NO ACTION ON UPDATE NO ACTION,
  CONSTRAINT fk_ual_ute FOREIGN KEY (column_10) REFERENCES t_table5 (column_10) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=2357917 DEFAULT CHARSET=latin1
The table has a not-so-small number of records:
select count(*) from MALICIOUS_TABLE;
+----------+
| count(*) |
+----------+
| 2308414 |
+----------+
1 row in set (2,67 sec)
If I run the slowest query repeatedly (always from the MySQL command line on the server) about every 10 seconds, I get different response times. This is the production server, so users keep inserting data:
SELECT count(*) FROM MALICIOUS_TABLE
WHERE column_4 = 1 AND (column_8 is null) AND column_3 > CURDATE() - INTERVAL 30 DAY;
+----------+
| count(*) |
+----------+
| 666411 |
+----------+
1 row in set (4,39 sec)
SELECT count(*) FROM MALICIOUS_TABLE
WHERE column_4 = 1 AND (column_8 is null) AND column_3 > CURDATE() - INTERVAL 30 DAY;
+----------+
| count(*) |
+----------+
| 666477 |
+----------+
1 row in set (4,94 sec)
SELECT count(*) FROM MALICIOUS_TABLE
WHERE column_4 = 1 AND (column_8 is null) AND column_3 > CURDATE() - INTERVAL 30 DAY;
+----------+
| count(*) |
+----------+
| 666752 |
+----------+
1 row in set (17,02 sec)
The last attempt shows a huge variation in response time. At first I thought the indexes might be the problem, so I dropped and recreated them, yet I still get the huge variation in response time.
The server's RAM is fine; there are still about 2 GB free.
MySQL's query cache is active, and maybe the second attempt retrieved the result from the cache while the last one didn't.
Any suggestion of what I can check to understand the problem? The machine, the DB (right now I'm trying to modify the query cache settings), or the table itself?
Thank you in advance.
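A sketch of one thing to check (an inference from the schema above, not a verdict): the count filters on column_4 equality, column_8 IS NULL, and a column_3 range, but only single-column indexes exist, so every run scans a large slice of the table and is at the mercy of the buffer pool. A composite index matching the filter shape would shrink that:

ALTER TABLE MALICIOUS_TABLE
    ADD INDEX idx_c4_c8_c3 (column_4, column_8, column_3);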
Elleby
(19 rep)
May 6, 2020, 11:07 AM
• Last activity: Jul 29, 2025, 08:02 AM
0
votes
0
answers
64
views
Do you know about a triple store that performs well under frequent CRUD operations?
I'm working with RDF data in real-world applications that require frequent create, update, and delete operations — classic CRUD behavior. However, I’ve noticed that many triple stores seem heavily optimized for read-heavy, static workloads, and performance degrades significantly when performing a high volume of writes or updates.
---
I’m using a triple store (an RDF graph database) because my data is highly connected and changes often. RDF (Resource Description Framework) is a standard model for representing data as triples, simple statements in the form subject – predicate – object (for example, :Alice :worksFor :AcmeCorp). It handles flexible structures and complex relationships way better than a traditional DB. Plus, it supports reasoning, something that’s not really feasible with a regular RDBMS. In terms of volume, we’re expecting anywhere between 5,000 and 7,000 transactions per minute, including a mix of reads, inserts, updates, and deletes.
Fox1994
(1 rep)
Jul 24, 2025, 08:59 AM
• Last activity: Jul 29, 2025, 06:38 AM
3
votes
2
answers
168
views
Optmizing ORDER BY
I'm new to MySQL and have written the following query (see below).
**PROBLEM:**
The query returns accurate data but takes about 40 seconds to execute.
However, when I remove the `ORDER BY` part, it takes 0.06 sec.
I have indexes on each of the `ORDER BY` columns and am unsure what else to do.
If anybody can rewrite it or recommend changes, it'd be greatly appreciated.
**CODE:**
SELECT DISTINCT username
FROM users
WHERE in_progress = 0
  AND scraped_from IN
      (SELECT DISTINCT username
       FROM source_accounts
       WHERE group_users = 'New Users'
         AND (`type` = 'users' OR `type` = 'both')
         AND `use` = '1')
  AND username NOT IN
      (SELECT user_tofollow
       FROM follow_history
       WHERE owner_account = 'admin')
ORDER BY real_user DESC,
         IF(last_used IS NULL, '0', '1'),
         IF(last_update > 3, '1', '0'),
         DATE(last_used),
         IF(user_ratio > 100, '1', '0')
LIMIT 1000;
**EXPLAIN:**
1 PRIMARY scraped_users ref idx1,scraped_from,in_progress,username in_progress 1 const 687025 Using where; Using temporary; Using filesort
1 PRIMARY eq_ref 767 userstack.users.scraped_from 1 Distinct
2 MATERIALIZED source_accounts ref username,group_users,type,use group_users 767 const 48 Using index condition; Using where; Distinct
3 DEPENDENT SUBQUERY follow_history index_subquery user_tofollow user_tofollow 767 func 1 Using where
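A hedged rewrite sketch (same tables and columns as the post, mirroring its DISTINCT-plus-ORDER-BY pattern): turning the IN into a JOIN and the NOT IN into a LEFT JOIN anti-join often sidesteps the DEPENDENT SUBQUERY plan on MySQL of this era. Note the JOIN can produce duplicate usernames where the IN could not, which the DISTINCT absorbs:

SELECT DISTINCT u.username
FROM users AS u
JOIN source_accounts AS s
    ON s.username = u.scraped_from
   AND s.group_users = 'New Users'
   AND s.`type` IN ('users', 'both')
   AND s.`use` = '1'
LEFT JOIN follow_history AS f
    ON f.user_tofollow = u.username
   AND f.owner_account = 'admin'
WHERE u.in_progress = 0
  AND f.user_tofollow IS NULL
ORDER BY u.real_user DESC,
         IF(u.last_used IS NULL, '0', '1'),
         IF(u.last_update > 3, '1', '0'),
         DATE(u.last_used),
         IF(u.user_ratio > 100, '1', '0')
LIMIT 1000;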
John Cliven
(131 rep)
Oct 2, 2015, 03:26 PM
• Last activity: Jul 29, 2025, 03:08 AM
0
votes
1
answers
215
views
Very slow JOIN + ORDER BY query only in dev environment
I have some production code and am setting up a new development environment. I found some pages were extremely slow to load. I profiled the code and found the slow query.
This query is taking **75 seconds** to run in the new dev environment:
SELECT `orders`.*, `users`.`user_flag`
FROM `orders`
LEFT JOIN `users` ON `users`.`id` = `orders`.`user_id`
WHERE `cancelled` = '0' AND `is_estimate` = '0'
ORDER BY `orders`.`updated_date` DESC,
         `orders`.`order_date` DESC,
         `orders`.`creation_date` DESC
LIMIT 30;
- It is a new installation of MySQL via MAMP Pro on a Mac
- The version of MySQL is **5.7.25**
...
**Exactly the same query** on **exactly the same database schema** takes only **24 ms** on my old machine which has this version:
mysql Ver 14.14 Distrib 5.7.18, for osx10.12 (x86_64) using EditLine wrapper
So presumably something about the configuration of MySQL on the new machine is causing this slow query with a join and ORDER BY.
**What my.cnf settings should I be looking for?**
---
**Further tests**
(I cannot change the code as it is production code but did these tests to narrow down the problem)
Removing the `JOIN` brings the query time down to about **13ms**:
SELECT `orders`.*
FROM `orders`
WHERE `cancelled` = '0' AND `is_estimate` = '0'
ORDER BY `orders`.`updated_date` DESC,
         `orders`.`order_date` DESC,
         `orders`.`creation_date` DESC
LIMIT 30;
Removing the `ORDER BY` statement brings the time down to about **200ms** (still too slow):
SELECT `orders`.*, `users`.`user_flag`
FROM `orders`
LEFT JOIN `users` ON `users`.`id` = `orders`.`user_id`
WHERE `cancelled` = '0' AND `is_estimate` = '0'
LIMIT 30;
(With no `JOIN` or `ORDER BY` it runs in 1.5ms.)
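For what it's worth, a hedged index sketch (column names taken from the queries above, not from a confirmed schema): an index whose leading columns match the two equality filters and whose trailing columns match the ORDER BY would let MySQL read the newest 30 rows with a backward index scan instead of a filesort on either machine:

ALTER TABLE `orders`
    ADD INDEX `idx_cancel_estimate_dates`
        (`cancelled`, `is_estimate`, `updated_date`, `order_date`, `creation_date`);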
----
The old MySQL config contains:
[mysqld]
# Only allow connections from localhost
bind-address = 127.0.0.1
max_allowed_packet = 64M
The new MySQL config has lots in it:
# The following options will be passed to all MySQL clients
[client]
password = MAMP_root_password_MAMP
#port = 9999
socket = /Applications/MAMP/tmp/mysql/mysql.sock
# Here follows entries for some specific programs
# The MySQL server
[mysqld]
#port = 9999
socket = /Applications/MAMP/tmp/mysql/mysql.sock
key_buffer_size = 16M
max_allowed_packet = 64M
explicit_defaults_for_timestamp = 1
table_open_cache = 64
MAMP_bind-address_MAMP
# Don't listen on a TCP/IP port at all. This can be a security enhancement,
# if all processes that need to connect to mysqld run on the same host.
# All interaction with mysqld must be made via Unix sockets or named pipes.
# Note that using this option without enabling named pipes on Windows
# (via the "enable-named-pipe" option) will render mysqld useless!
#
MAMP_skip-networking_MAMP
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
secure_file_priv=""
server-id = 1
innodb_buffer_pool_instances = 8
innodb_flush_method = O_DIRECT
sync_binlog = 0
[mysqldump]
password = MAMP_root_password_MAMP
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[myisamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
myisam_sort_buffer_size = 8M
[mysqlhotcopy]
interactive-timeout
Ade
(123 rep)
May 29, 2019, 06:17 PM
• Last activity: Jul 28, 2025, 02:05 PM
1
votes
2
answers
143
views
MySQL - Flush Memory after loading files
I am loading a large amount of data from CSV files using the `LOAD DATA` command. It works fine, and after loading the files it executes some data transformations...
The problem is that after loading, for example, 12 files (400 MB each), the mysql process consumes almost all available memory and doesn't release it, so the machine slows down and I need to restart MySQL to continue loading data files.
MySQL 8.0.17
Any ideas how to solve this issue?
- Added stored procedure:
```sql
BEGIN
    DECLARE sequenceid INT;
    DECLARE _rollback BOOL DEFAULT 0;
    DECLARE CONTINUE HANDLER FOR SQLEXCEPTION SET _rollback = 1;

    START TRANSACTION;

    INSERT INTO filenamedt (filename, filedate, fileinitialid)
    VALUES (CAfilename, now(), (SELECT MAX(imp_sequence_id) + 1 FROM imp_siscori_sequence));

    INSERT INTO imp
        (SELECT imp, ordem, CONCAT(TRIM(data), '01') FROM temp_imp);

    INSERT INTO imp_data (imp_id, unid_comerc, incoterm)
        (SELECT imp_id, TRIM(unidadeComercial), TRIM(incoterm), TRIM(natInformacao) FROM temp_imp);

    INSERT INTO imp_description (imp_id, imp_description)
        (SELECT imp_id, TRIM(descricaoProduto) FROM temp_imp);

    UPDATE imp_sequence SET imp_sequence_id = (SELECT MAX(imp_id) + 1 FROM temp_imp_siscori);

    TRUNCATE TABLE temp_imp;

    SET @m = (SELECT MAX(imp_sequence_id) + 1 FROM imp_sequence);
    SET @s = CONCAT('ALTER TABLE temp_imp AUTO_INCREMENT=', @m);
    PREPARE stmt1 FROM @s;
    EXECUTE stmt1;
    DEALLOCATE PREPARE stmt1;

    IF _rollback THEN
        ROLLBACK;
    ELSE
        COMMIT;
    END IF;
END
```
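Since this is MySQL 8.0, a quick way to see whether the growth is just the InnoDB buffer pool warming up (expected, and bounded by innodb_buffer_pool_size) or some other allocation is the Performance Schema memory instrumentation; a minimal diagnostic sketch via the sys schema:

SELECT event_name, current_alloc
FROM sys.x$memory_global_by_current_bytes
ORDER BY current_alloc DESC
LIMIT 10;

If the top consumer keeps climbing after each batch and is something other than the buffer pool, that points at the culprit.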
Gustavo_Oliveira
(23 rep)
Jun 5, 2020, 10:52 PM
• Last activity: Jul 27, 2025, 11:04 AM
0
votes
1
answers
2387
views
Database ExecuteSQL(MySQL client run out of memory)
I have an online game. The game server executes more than 1M queries (SELECT, REPLACE, UPDATE) every 10 seconds.
The problem is that memory usage keeps increasing and I don't know why.
I mean, if we suppose that memory usage at the moment is 2000 MB and it increases to 3000 MB while the game server executes the queries, why doesn't the memory usage go back to 2000 MB (or thereabouts) after executing? It just keeps increasing.
This problem makes the following errors occur:
2016-7-14 16:17:52 -- ERROR: Database ExecuteSQL(MySQL client run out of memory) occur mysql error(SELECT * FROM cq_tutor WHERE tutor_id = 16090848 AND Betrayal_flag = 0).
2016-7-14 16:17:52 -- ERROR: Database ExecuteSQL(Lost connection to MySQL server during query) occur mysql error(SELECT * FROM cq_tutor WHERE Student_id = 16090848 AND Betrayal_flag = 0 LIMIT 1).
By the way, when I restart the game server the memory usage goes back to 2000 MB and then keeps increasing again.
I want to know the reason for the memory usage increase.
I stopped MySQL's caching but the same problem occurs.
The MySQL version is 4.0.17.
The client I use is winmysqladmin 1.4.
mysql> SHOW VARIABLES;
+------------------------------+-----------------------------------
| Variable_name | Value |
+------------------------------+-----------------------------------
| back_log | 50
| basedir | E:\ZFSERVER\MYSQL\
| binlog_cache_size | 32768
| bulk_insert_buffer_size | 8388608
| character_set | latin1
| character_sets | latin1 big5 czech euc_kr gb2312 gbk latin1_de sjis tis620 ujis dec8 dos german1 hp8 koi8_ru latin2 swe7 usa7 cp1251 danish hebrew win1251 estonia hungarian koi8_ukr win1251ukr greek win1250 croat cp1257 latin5 |
| concurrent_insert | ON
| connect_timeout | 5
| convert_character_set |
| datadir | E:\ZFSERVER\MYSQL\data\
| default_week_format | 0
| delay_key_write | ON
| delayed_insert_limit | 100
| delayed_insert_timeout | 300
| delayed_queue_size | 1000
| flush | OFF
| flush_time | 1800
| ft_boolean_syntax | + -><()~*:""&|
(output truncated)

mysql> SHOW STATUS;
+-------------------------+----------+
| Variable_name | Value |
+-------------------------+----------+
| Aborted_clients | 36 |
| Aborted_connects | 0 |
| Bytes_received | 41352179 |
| Bytes_sent | 84359720 |
| Com_admin_commands | 181 |
| Com_alter_table | 2 |
| Com_analyze | 0 |
| Com_backup_table | 0 |
| Com_begin | 0 |
| Com_change_db | 64 |
| Com_change_master | 0 |
| Com_check | 1 |
| Com_commit | 0 |
| Com_create_db | 0 |
| Com_create_function | 0 |
| Com_create_index | 0 |
| Com_create_table | 252 |
| Com_delete | 70 |
| Com_delete_multi | 0 |
| Com_drop_db | 0 |
| Com_drop_function | 0 |
| Com_drop_index | 0 |
| Com_drop_table | 276 |
| Com_flush | 0 |
| Com_grant | 0 |
| Com_ha_close | 0 |
| Com_ha_open | 0 |
| Com_ha_read | 0 |
| Com_insert | 292241 |
| Com_insert_select | 0 |
| Com_kill | 0 |
| Com_load | 0 |
| Com_load_master_data | 0 |
| Com_load_master_table | 0 |
| Com_lock_tables | 0 |
| Com_optimize | 2 |
| Com_purge | 0 |
| Com_rename_table | 0 |
| Com_repair | 9 |
| Com_replace | 6 |
| Com_replace_select | 0 |
| Com_reset | 0 |
| Com_restore_table | 0 |
| Com_revoke | 0 |
| Com_rollback | 0 |
| Com_savepoint | 0 |
| Com_select | 15736 |
| Com_set_option | 12 |
| Com_show_binlog_events | 0 |
| Com_show_binlogs | 0 |
| Com_show_create | 25 |
| Com_show_databases | 27 |
| Com_show_fields | 2760 |
| Com_show_grants | 0 |
| Com_show_keys | 9 |
| Com_show_logs | 0 |
| Com_show_master_status | 0 |
| Com_show_new_master | 0 |
| Com_show_open_tables | 0 |
| Com_show_processlist | 2 |
| Com_show_slave_hosts | 0 |
| Com_show_slave_status | 0 |
| Com_show_status | 1934 |
| Com_show_innodb_status | 0 |
| Com_show_tables | 57 |
| Com_show_variables | 59 |
| Com_slave_start | 0 |
| Com_slave_stop | 0 |
| Com_truncate | 0 |
| Com_unlock_tables | 0 |
| Com_update | 2138 |
| Connections | 207 |
| Created_tmp_disk_tables | 0 |
| Created_tmp_tables | 0 |
| Created_tmp_files | 0 |
| Delayed_insert_threads | 0 |
| Delayed_writes | 0 |
| Delayed_errors | 0 |
| Flush_commands | 1 |
| Handler_commit | 4 |
| Handler_delete | 1 |
| Handler_read_first | 3363 |
| Handler_read_key | 36141 |
| Handler_read_next | 68771 |
| Handler_read_prev | 0 |
| Handler_read_rnd | 132 |
| Handler_read_rnd_next | 3039359 |
| Handler_rollback | 63 |
| Handler_update | 10 |
| Handler_write | 292202 |
| Key_blocks_used | 2157 |
| Key_read_requests | 459308 |
| Key_reads | 352 |
| Key_write_requests | 197090 |
| Key_writes | 197085 |
| Max_used_connections | 24 |
| Not_flushed_key_blocks | 0 |
| Not_flushed_delayed_rows | 0 |
| Open_tables | 0 |
| Open_files | 0 |
| Open_streams | 0 |
| Opened_tables | 1906 |
| Questions | 315904 |
| Qcache_queries_in_cache | 0 |
| Qcache_inserts | 0 |
| Qcache_hits | 0 |
| Qcache_lowmem_prunes | 0 |
| Qcache_not_cached | 0 |
| Qcache_free_memory | 0 |
| Qcache_free_blocks | 0 |
| Qcache_total_blocks | 0 |
| Rpl_status | NULL |
| Select_full_join | 0 |
| Select_full_range_join | 0 |
| Select_range | 129 |
| Select_range_check | 0 |
| Select_scan | 2691 |
| Slave_open_temp_tables | 0 |
| Slave_running | OFF |
| Slow_launch_threads | 0 |
| Slow_queries | 0 |
| Sort_merge_passes | 0 |
| Sort_range | 22 |
| Sort_rows | 132 |
| Sort_scan | 6 |
| Table_locks_immediate | 311455 |
| Table_locks_waited | 0 |
| Threads_cached | 0 |
| Threads_created | 206 |
| Threads_connected | 4 |
| Threads_running | 1 |
| Uptime | 21423 |
+-------------------------+----------+
-----------------------------------------------
select che from arenaa where mapid=81513 and status=1
select nameb from arenac where mapid=81529
DELETE FROM arenaa where mapid=81529
select nameb from arenac where mapid=81530
select che from arenaa where mapid=81501 and status=2
REPLACE INTO arenac set mapid=\"81502\",nameb=\"%user_name\"
select namea from arenaa where mapid=81505
------------------------------------------------
-- ----------------------------
-- Table structure for arenaa
-- ----------------------------
DROP TABLE IF EXISTS `arenaa`;
CREATE TABLE `arenaa` (
  `mapid` int(100) NOT NULL default '0',
  `namea` varchar(100) NOT NULL default '0',
  `nameb` varchar(100) NOT NULL default '0',
  `status` int(100) NOT NULL default '0',
  `che` int(100) NOT NULL default '1',
  UNIQUE KEY `imapid` (`mapid`)
) TYPE=MyISAM;
-- ----------------------------
-- Records of arenaa
-- ----------------------------
#############################################
SET FOREIGN_KEY_CHECKS=0;
-- ----------------------------
-- Table structure for arenac
-- ----------------------------
DROP TABLE IF EXISTS `arenac`;
CREATE TABLE `arenac` (
  `mapid` int(100) NOT NULL default '0',
  `nameb` varchar(100) NOT NULL default '0',
  UNIQUE KEY `imapid` (`mapid`)
) TYPE=MyISAM;
-- ----------------------------
-- Records of arenac
-- ----------------------------
Thanks for reading.
JORDAN MI
(1 rep)
Jul 19, 2016, 01:51 AM
• Last activity: Jul 25, 2025, 05:04 AM
0
votes
1
answers
145
views
Postgresql table function: get where condition
I have defined a function that returns a table. In this function, I use multiple temporary tables to pre-process data.
Here is a very simple example:
create or replace function public.fn_world_cities()
returns table (
    city_id int,
    city_name varchar,
    country_id int
)
as $$
begin
    ---- Extract temp table with all cities
    create temporary table tmp_all_cities on commit drop
    as
    select city_id, city_name, country_id
    from public.table_world_cities
    where row_status = 'A'; ---- filter active records

    ---- Return data
    return query
    select city_id, city_name, country_id
    from tmp_all_cities;
end;
$$ language plpgsql;
The temporary table creation is time-expensive.
When I use the function, I always apply some `WHERE` condition, like:
select * from public.fn_world_cities()
where country_id = 10;
In my function there are many fields on which I can set a `WHERE`, so it's impossible to add function parameters for every one.
Is there a way, at run time, to know the `WHERE` condition values? That way I could pre-filter data during temporary table creation.
I wish I could have a function like this:
create or replace function public.fn_world_cities()
returns table (
    city_id int,
    city_name varchar,
    country_id int
)
as $$
begin
    ---- Extract temp table with all cities
    create temporary table tmp_all_cities on commit drop
    as
    select city_id, city_name, country_id
    from public.table_world_cities
    where row_status = 'A' ---- filter active records
      and ***WHERE CONDITION APPLIED TO FUNCTION CALL***
    ;

    ---- Return data
    return query
    select city_id, city_name, country_id
    from tmp_all_cities;
end;
$$ language plpgsql;
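Since PostgreSQL cannot push an outer WHERE inside a PL/pgSQL function, one workaround is an optional parameter that pre-filters the temp table. A sketch under the post's own names (the DEFAULT keeps existing calls working; only country_id is covered here, but the pattern extends to the other fields):

create or replace function public.fn_world_cities(p_country_id int default null)
returns table (
    city_id int,
    city_name varchar,
    country_id int
)
as $$
begin
    create temporary table tmp_all_cities on commit drop
    as
    select c.city_id, c.city_name, c.country_id
    from public.table_world_cities c
    where c.row_status = 'A'
      and (p_country_id is null or c.country_id = p_country_id);

    return query
    select t.city_id, t.city_name, t.country_id
    from tmp_all_cities t;
end;
$$ language plpgsql;

-- usage: select * from public.fn_world_cities(10);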
Radioleao
(153 rep)
Apr 9, 2019, 03:26 PM
• Last activity: Jul 24, 2025, 11:06 PM
0
votes
1
answers
341
views
why processlist list shows limited threads
Actually, I have set **max_connections** to **500**, but when I run **SHOW FULL PROCESSLIST** it shows only **266** processes, never more, even though my connection limit is 500.
I analyzed every query and found that every query uses a single table; there is no query using 2 tables.
Is there any other setting I am missing? Please advise. Why does it show only 266 processes?
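A quick sanity check worth sketching with standard MySQL commands: 266 may simply be the current demand rather than a ceiling, so compare the configured limit with the observed peak:

SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Max_used_connections';  -- high-water mark since startup
SHOW STATUS LIKE 'Threads_connected';     -- connections right now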
Moolya
(1 rep)
May 19, 2021, 08:38 AM
• Last activity: Jul 24, 2025, 03:05 AM
1
votes
2
answers
143
views
Tempdb log growth 3x as slow on new server, SP does not perform
We are running into the issue that our stored procedure on our new SQL Server performs a lot slower than on the old server. We looked into the problem and noticed the following on our tempdb: the incremental growth takes 3 times as long on the new server. The same happens for the log file of our database. What is causing this?
[screenshot: autogrowth duration comparison between the old and new server]
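Two hedged things to rule out, since growth timings usually come down to file initialization: check whether instant file initialization is enabled for the service account on the new server, and pre-size the files so autogrowth stops happening mid-procedure. The sizes below are placeholders; tempdev and templog are the default tempdb logical file names:

ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 8192MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, SIZE = 4096MB, FILEGROWTH = 512MB);

Note that log files generally cannot use instant file initialization, which would fit the symptom of the log growth being slow for both databases.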
Esmee
(29 rep)
Mar 19, 2021, 04:16 PM
• Last activity: Jul 23, 2025, 11:06 PM
1
votes
1
answers
735
views
Simple query with a single join very slow
I have this very slow, simple query that joins a large table (~180M rows) with a smaller table (~60k rows) via a foreign key, filters an indexed column on the smaller table, orders by the primary key in the larger table, and then takes the 25 latest rows.
The `EXPLAIN` shows `Using index; Using temporary; Using filesort` on the smaller table. Why?
Engine: MySQL 5.7.
Query:
SELECT
    `order`.id,
    `order`.company_id,
    `order`.total
FROM
    `order`
INNER JOIN
    company ON company.id = `order`.company_id
WHERE
    company.company_headquarter_id = 23133
ORDER BY `order`.id DESC
LIMIT 25;
+----+-------------+------------+------------+------+---------------------------------------+----------------------------+---------+-----------------------+------+----------+----------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+------------+------------+------+---------------------------------------+----------------------------+---------+-----------------------+------+----------+----------------------------------------------+
| 1 | SIMPLE | company | NULL | ref | PRIMARY,company_headquarter_id_idx | company_headquarter_id_idx | 8 | const | 6 | 100.00 | Using index; Using temporary; Using filesort |
| 1 | SIMPLE | order | NULL | ref | company_id_idx | company_id_idx | 8 | company.id | 381 | 100.00 | NULL |
+----+-------------+------------+------------+------+---------------------------------------+----------------------------+---------+-----------------------+------+----------+----------------------------------------------+
CREATE TABLE `order` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `company_id` bigint(20) NOT NULL,
  `total` double(18,2) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `company_id_idx` (`company_id`),
  CONSTRAINT `company_id_fk` FOREIGN KEY (`company_id`) REFERENCES `company` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=186518644 DEFAULT CHARSET=latin1

CREATE TABLE `company` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `company_headquarter_id` bigint(20) NOT NULL,
  `name` varchar(100) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `company_headquarter_id_idx` (`company_headquarter_id`),
  CONSTRAINT `company_headquarter_id_fk` FOREIGN KEY (`company_headquarter_id`) REFERENCES `company_headquarter` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=60825 DEFAULT CHARSET=latin1

CREATE TABLE `company_headquarter` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `name` varchar(100) NOT NULL,
  `phone` varchar(10) DEFAULT NULL,
  `address_id` bigint(20) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name`),
  KEY `address_id_idx` (`address_id`),
  CONSTRAINT `address_id_fk` FOREIGN KEY (`address_id`) REFERENCES `address` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=43862 DEFAULT CHARSET=latin1

CREATE TABLE `address` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `street_address` varchar(100) DEFAULT NULL,
  `zip` varchar(7) DEFAULT NULL,
  `state` varchar(2) DEFAULT NULL,
  `city` varchar(50) DEFAULT NULL,
  `country` varchar(10) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=147360955 DEFAULT CHARSET=latin1
The query becomes faster when I:
* Remove the `ORDER BY` clause.
* Filter `company.company_headquarter_id` with a `company_headquarter_id` that has a smaller number of orders (`company_headquarter_id = 23133` has ~3M rows in the `order` table).
* Split it into two separate queries:
First:
SELECT company.id
FROM company
WHERE company.company_headquarter_id = 23133;
Second:
SELECT
    `order`.id,
    `order`.company_id,
    `order`.total
FROM
    `order`
WHERE
    `order`.company_id IN (20122, 50729, 50730, 50731, 50732, 50733) /* From first query */
ORDER BY `order`.id DESC
LIMIT 25;
Any ideas?
Thank you.
EDIT:
When I do:
SELECT STRAIGHT_JOIN
    `order`.id,
    `order`.company_id,
    `order`.total
FROM
    `order`
INNER JOIN
    company ON company.id = `order`.company_id
WHERE
    company.company_headquarter_id = 23133
ORDER BY `order`.id DESC
LIMIT 25;
The query is much faster and EXPLAIN shows a temporary table is not created.
flyingdutchman
(11 rep)
Apr 23, 2022, 05:42 PM
• Last activity: Jul 23, 2025, 04:07 PM
Showing page 1 of 20 total questions