Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
2
answers
884
views
MySQL query cache stats - why Qcache_lowmem_prunes is high?
I have the following query cache settings
query_cache_type = 1
query_cache_limit = 1M
query_cache_size = 32M
After running for one full day, I have the following stats
mysql> show global status like 'qca%';
+-------------------------+----------+
| Variable_name | Value |
+-------------------------+----------+
| Qcache_free_blocks | 66 |
| Qcache_free_memory | 30462784 |
| Qcache_hits | 1995904 |
| Qcache_inserts | 2197056 |
| Qcache_lowmem_prunes | 531214 |
| Qcache_not_cached | 40683 |
| Qcache_queries_in_cache | 1429 |
| Qcache_total_blocks | 2946 |
+-------------------------+----------+
8 rows in set (0.00 sec)
The Qcache_lowmem_prunes value is relatively high, but Qcache_free_memory is also high. What is the reason? Is there any way to get a higher hit rate by tuning the above config?
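For reference, the posted counters can be turned into the usual query cache ratios (the same formulas used in a later question on this page); this is plain arithmetic on the values above and can be run anywhere:
-- Hit rate and prune ratio computed from the status values shown above
SELECT
    1995904 * 100 / (1995904 + 2197056 + 40683) AS hit_rate_pct,     -- ~47.1%
    531214  * 100 / 2197056                     AS lowmem_prune_pct; -- ~24.2% of inserts were pruned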
Yoga
(549 rep)
Jun 17, 2021, 05:29 PM
• Last activity: Jul 22, 2025, 08:09 PM
0
votes
1
answers
282
views
Galera Cluster in read-write split and query cache
I currently have 3 MariaDB 10.4 servers, configured as a Galera cluster. In front of it, a Maxscale router to split writes (1 master) and reads (2 slaves). QC is disabled in all 3 servers.
My question is: as the servers 2 and 3 are receiving READS only, is there any (real) benefit to enable QC on those 2 servers? Like server load decreasing?
I've already tried the Maxscale cache filter out of the box, but some queries have problems with it.
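For context, enabling the query cache on just the two read nodes is only a my.cnf change on those servers; a minimal sketch, with values borrowed from the settings quoted in the first question on this page rather than from this post:
# my.cnf / server.cnf on the two read-only Galera nodes only (illustrative values)
[mysqld]
query_cache_type  = 1
query_cache_limit = 1M
query_cache_size  = 32M
Writes replicated from the primary should still invalidate the affected cache entries on those nodes, so whatever benefit there is depends on the read/write mix.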
CrazyRabbit
(111 rep)
Jan 13, 2022, 03:12 PM
• Last activity: May 22, 2025, 04:10 PM
2
votes
1
answers
505
views
Query Cache utilisation ~50% Qcache_lowmem_prunes still high
I ran two tests, each running for about a week
# Test 1 #
### Query Cache Settings ###
- query_cache_limit: **50K**
- query_cache_size: **128M**
- query_cache_min_res_unit: **1K**
### Results ###
- Qcache_free_blocks: **6,004**
- Qcache_free_memory: **93,598,504**
- Qcache_hits: **377,204,411**
- Qcache_inserts: **13,056,941**
- Qcache_lowmem_prunes: **384,610**
- Qcache_not_cached: **5,794,907**
- Qcache_queries_in_cache: **16,006**
- Qcache_total_blocks: **38,724**
### Results (Calculated Values) ###
- Utilisation: **40.7%**
- Removal Ratio: **2.9%**
- Hit Rate: **95.2%**
- AVG Query Size: **4,012**
- Frag Indicator: **15.5%**
Utilisation = 100-Qcache_free_memory*100/query_cache_size
Removal Ratio = Qcache_lowmem_prunes*100/Qcache_inserts
Hit Rate = Qcache_hits*100/(Qcache_hits+Qcache_inserts+Qcache_not_cached)
AVG Query Size = (query_cache_size - Qcache_free_memory) / Qcache_queries_in_cache
Frag Indicator = Qcache_free_blocks*100/Qcache_total_blocks
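As a sanity check, the ratios that depend only on the Qcache_* counters can be reproduced directly from the Test 1 values:
-- Recomputing the posted ratios from the Test 1 counters
SELECT
    384610    * 100 / 13056941                         AS removal_ratio_pct,  -- ~2.9%
    377204411 * 100 / (377204411 + 13056941 + 5794907) AS hit_rate_pct,       -- ~95.2%
    6004      * 100 / 38724                            AS frag_indicator_pct; -- ~15.5%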
Daniel Garcia
(25 rep)
Oct 28, 2017, 08:29 PM
• Last activity: Dec 25, 2024, 08:01 PM
1
votes
2
answers
1759
views
How to cache queries in SQLAlchemy just like I do in MySQL by using the SQL_CACHE keyword?
I want to cache queries in SQLAlchemy. I have discovered that the SQL_CACHE keyword in MySQL can be helpful for caching queries on demand.
But how do I do it in SQLAlchemy?
Is it possible?
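For reference, on the MySQL side the hint is just a keyword after SELECT, and it only makes a practical difference when the server runs with query_cache_type = 2 (DEMAND), where nothing is cached unless explicitly requested. A minimal sketch of the raw SQL (table and column names are made up):
-- Cached on demand when query_cache_type = 2 (DEMAND)
SELECT SQL_CACHE id, name
FROM customers              -- hypothetical table
WHERE country = 'DE';
-- The opposite hint bypasses the cache even when caching is ON
SELECT SQL_NO_CACHE id, name
FROM customers
WHERE country = 'DE';
From SQLAlchemy the task is to get this prefix emitted into the generated SELECT; the exact mechanism depends on the SQLAlchemy version, so it is left out here.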
Jitin Maherchandani
(145 rep)
Feb 12, 2016, 10:57 AM
• Last activity: Sep 19, 2024, 06:54 PM
11
votes
1
answers
13517
views
Qcache_free_memory not full yet I get a lot of Qcache_lowmem_prunes
I just started dabbling with the query cache for our CMS.
Can anyone tell me (or at least give a good guess) why I get **a lot** of Qcache_lowmem_prunes when more than half of the cache is still free (Qcache_free_memory)?
query_cache_size=512M
query_cache_limit=1M
This is how it looks after about 12 hours
show status like '%qcach%';
+-------------------------+-----------+
| Variable_name | Value |
+-------------------------+-----------+
| Qcache_free_blocks | 10338 |
| Qcache_free_memory | 297348320 |
| Qcache_hits | 10254104 |
| Qcache_inserts | 6072945 |
| Qcache_lowmem_prunes | 725279 |
| Qcache_not_cached | 2237603 |
| Qcache_queries_in_cache | 48119 |
| Qcache_total_blocks | 111346 |
+-------------------------+-----------+
This is how it looked after FLUSH QUERY CACHE;
show status like '%qcach%';
+-------------------------+-----------+
| Variable_name | Value |
+-------------------------+-----------+
| Qcache_free_blocks | 1 |
| Qcache_free_memory | 443559256 |
| Qcache_hits | 10307015 |
| Qcache_inserts | 6115890 |
| Qcache_lowmem_prunes | 725279 |
| Qcache_not_cached | 2249405 |
| Qcache_queries_in_cache | 26455 |
| Qcache_total_blocks | 54490 |
+-------------------------+-----------+
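One detail worth noting about the two snapshots: FLUSH QUERY CACHE defragments the cache without removing cached results, whereas RESET QUERY CACHE empties it entirely, and the drop of Qcache_free_blocks from 10338 to 1 is that defragmentation taking effect. The fragmentation level before and after can be read straight off the counters:
-- Fragmentation indicator before and after FLUSH QUERY CACHE (values from the two outputs above)
SELECT
    10338 * 100 / 111346 AS frag_before_pct, -- ~9.3%
    1     * 100 / 54490  AS frag_after_pct;  -- ~0.002%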
Nifle
(1472 rep)
Jan 30, 2013, 07:30 AM
• Last activity: May 2, 2024, 11:05 PM
4
votes
2
answers
21630
views
Is it possible to delete cache in Mongo?
My case is that I run several queries in Mongo and measure the response time. I know that Mongo uses Memory Mapped files and cache some data in RAM.
If I create an index and run a query, the execution time is 2.07 seconds. When I run the query again, the execution time is 0.017 seconds.
This means that the file is mapped into memory by the OS on the first access and the OS will keep it in cache until it decides to page it out.
Because I want to run some benchmarks, I want to somehow remove the mapped data from memory.
free -g:
total used free shared buffers cached
Mem: 29 29 0 0 0 29
-/+ buffers/cache: 0 29
Swap: 0 0 0
I tried
db.collection.getPlanCache().clear()
but it doesn't work.
Then I tried:
sudo service mongod stop
I reboot my machine
sync; echo 3 > /proc/sys/vm/drop_caches
sudo service mongod start
free -g:
total used free shared buffers cached
Mem: 29 0 0 0 0 0
-/+ buffers/cache: 0 29
Swap: 0 0 0
In the above it seems that the cache is free now. But when I execute the query again, the execution time is 0.017 seconds. It seems that Mongo has managed to cache or preserve the data somewhere. Since I cleared the cache, the expected execution time would be 2.07 seconds.
Can anyone tell me how I can actually delete the cached results in Mongo?
e7lT2P
(175 rep)
May 30, 2018, 11:03 AM
• Last activity: Apr 16, 2024, 08:30 PM
0
votes
0
answers
46
views
Why are queries in the cache cleaned every 30 mins?
I am trying to figure out why my MariaDB instance clears queries from the cache. It happens every 30 minutes. It is not caused by cron or the event scheduler. What am I missing? Some MariaDB setting?
MariaDB [(none)]> SHOW STATUS LIKE 'Qcache%';
+-------------------------+-----------+
| Variable_name | Value |
+-------------------------+-----------+
| Qcache_free_blocks | 1471 |
| Qcache_free_memory | 219999872 |
| Qcache_hits | 203798549 |
| Qcache_inserts | 65763537 |
| Qcache_lowmem_prunes | 3758579 |
| Qcache_not_cached | 1689178 |
| Qcache_queries_in_cache | 16441 | - flushed every 30 mins
| Qcache_total_blocks | 35341 |
+-------------------------+-----------+
mariadb: mariadb Ver 15.1 Distrib 10.3.38-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
linux: Ubuntu 20.04.6 LTS
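A generic way to narrow this down (a debugging sketch, not something from the original post) is to temporarily enable the general query log around one of the 30-minute marks and look for statements that empty the cache, such as RESET QUERY CACHE, FLUSH QUERY CACHE or a broader FLUSH:
-- Enable the general log temporarily (turn it off again afterwards, it is verbose)
SET GLOBAL general_log_file = '/var/log/mysql/general.log';  -- path is only an example
SET GLOBAL general_log = 'ON';
-- ... wait for the next 30-minute drop, then:
SET GLOBAL general_log = 'OFF';
Keep in mind that any write to a table also invalidates every cached query that references that table, so a periodic batch job writing to hot tables can look exactly like a cache flush.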


Сергей Килин
(1 rep)
Nov 21, 2023, 12:18 PM
1
votes
1
answers
3164
views
All queries waiting for query cache lock
I'm managing an e-commerce site which uses popular online shopping cart software running on MySQL 5.6. Yesterday I noticed that SHOW PROCESSLIST reports that 990 of 1000 queries are waiting for a query cache lock:
mysql> show processlist;
+----------+------------+---------------------+-------------+---------+------+--------------------------------+------------------------------------------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----------+------------+---------------------+-------------+---------+------+--------------------------------+------------------------------------------------------------------------------------------------------+
| 12224065 | sqluser    | 10.13.13.13:21716   | mydatabase  | Query   | 0    | Waiting for query cache lock   | SELECT data FROM mytable WHERE foo = 'bar' |
(...)
However, the Time is always 0 and the process Ids change all the time. My understanding is that the query waits for a table lock but the lock is released after less than one second.
Is this normal/acceptable behaviour, or would it be worth doing some fine-tuning on the query cache, perhaps removing it completely?
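If the conclusion is to take the query cache out of the picture, it can be switched off at runtime without a restart; a minimal sketch (the type change only affects connections made after it, and a server started with query_cache_type=0 cannot re-enable the cache without a restart):
-- Stop using the cache for new connections and release the cache memory
SET GLOBAL query_cache_type = OFF;
SET GLOBAL query_cache_size = 0;
Making the change permanent still requires the same settings in my.cnf.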
dr_
(1334 rep)
Dec 9, 2016, 08:10 AM
• Last activity: Nov 13, 2023, 09:45 PM
10
votes
4
answers
6693
views
What is the best alternative for MySQL Query Cache?
The Query Cache has been removed in MySQL 8. Is there any better alternative for Query Cache in MySQL?
Dipin V A
(109 rep)
Apr 20, 2021, 02:29 PM
• Last activity: Aug 18, 2023, 10:16 AM
3
votes
2
answers
6107
views
AWS RDS Aurora mysql Query Cache
AWS RDS mysql 5.7
Currently the Query Cache is enabled by default on Aurora MySQL (16 GB query cache size, on 300 GB of RAM). I am facing some deadlocks from Hangfire, and nothing points to Query Cache locks.
There are 2 blog posts that seem contradictory on the Query Cache feature.
https://aws.amazon.com/blogs/database/best-practices-for-configuring-parameters-for-amazon-rds-for-mysql-part-1-parameters-related-to-performance/
https://aws.amazon.com/blogs/database/planning-and-optimizing-amazon-aurora-with-mysql-compatibility-for-consolidated-workloads/
One says enable the Query Cache, the other says disable it. What is the official AWS stance on the Query Cache? Is it better to just disable the Query Cache on RDS in general, or not?
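Whichever blog post one follows, it seems worth verifying what the cluster parameter group actually produced and whether the cache is earning its keep before deciding; a minimal check from any client session:
-- What is configured ...
SHOW VARIABLES LIKE 'query_cache%';
-- ... and how it is behaving (hits vs. inserts, prunes, free memory)
SHOW GLOBAL STATUS LIKE 'Qcache%';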
Max
(133 rep)
Apr 26, 2022, 05:50 AM
• Last activity: Apr 14, 2023, 04:11 PM
0
votes
2
answers
65
views
Performance for query filtered by user preference
I have an app that shows data based on user preferences. I will simplify it in this theoretical example:
**The app shows articles filtered based on user preference for their favorite authors or their favorite topics.** The data will be updated with high frequency since there will be new articles published any minute.
The approach I am going for now: The app makes a request to an endpoint with user favorites in the url params like this:
-
https://example.com/api/articles/today?=fav_autohrs=1,5,3,7,10&topics=3,6,9
My query to db is something like this:
SELECT *
FROM articles_table
WHERE
    (author_id IN (1,5,3,7,10) OR topic_id IN (3,6,9))
    AND (published_datetime BETWEEN '$date_today 00:00:00' AND '$date_next 00:00:00')
And I am creating an index for columns: published_datetime & author_id & topic_id.
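Spelled out, that index would look roughly like this (the index name is made up):
-- Composite index as described above
CREATE INDEX idx_articles_pub_author_topic
    ON articles_table (published_datetime, author_id, topic_id);
Note that with the OR between author_id and topic_id, a single composite index may not be usable for both branches of the filter; two narrower indexes (one starting with author_id, one with topic_id) are often easier for the optimizer to combine.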
What I want to know is whether there is any caching mechanism or improvement I can apply to this approach to achieve the best performance and the best utilization of DB resources.
My database server runs on an AWS RDS t3.medium instance with 2 CPUs and 4 GB of RAM. I am expecting a high number of requests, reaching thousands per minute, so I want to be sure my approach is solid and know what I need to improve before I publish it to production.
I am currently caching HTTP requests for a short TTL with Varnish, but this won't help much in this case since most requests will be unique in their combination of preferences.
***My setup:***
* WordPress - as CMS & REST API
* Varnish - as caching layer
* Mariadb - as database
* InnoDB - as table engine
**Whats the best approach for my case to allow users to query db based on preference with optimal caching?**
**Edit:**
I tested my queries to see how fast they run and got:
* 0.0117s on a user's first request with their preferences (unique).
* 0.0040s when another request is made for a new date after the first request (preferences unchanged, date changed).
Is this considered good?
Kash
(103 rep)
Mar 23, 2023, 07:11 PM
• Last activity: Apr 6, 2023, 05:36 AM
32
votes
3
answers
42263
views
Why did MySQL remove the query cache feature after version 8.0?
Why did MySQL remove the query cache feature after version 8.0?
shaoyihe
(477 rep)
Sep 14, 2018, 01:38 AM
• Last activity: Sep 26, 2022, 06:18 AM
0
votes
1
answers
711
views
MariaDB Maxscale caching not working
I have set up MaxScale (v6.2) and connected it to a Galera cluster (3 nodes, MariaDB 10.5). I am trying to use the cache filter, but it does not seem to work. I have enabled the general log on all 3 nodes, and whenever I run the query I can see that the queries are served by the nodes instead of from the MaxScale cache.
I also noticed that when I use mysqlslap with concurrency 10, the general log file on each node shows 10 connections and 3 actual query hits. When I do a similar operation using HAProxy, the general log shows 3 connections and also 3 hits. I am not sure if there is anything that needs to be set up properly for MaxScale.
Here is my maxscale.cnf
# MaxScale documentation:
# https://mariadb.com/kb/en/mariadb-maxscale-25/
# Global parameters
#
# Complete list of configuration options:
# https://mariadb.com/kb/en/mariadb-maxscale-25-mariadb-maxscale-configuration-guide/
[maxscale]
threads=auto
log_info=true
# Server definitions
#
# Set the address of the server to the network
# address of a MariaDB server.
#
[server1]
type=server
address=1.1.1.1
port=3306
protocol=MariaDBBackend
[server2]
type=server
address=1.1.1.2
port=3306
protocol=MariaDBBackend
[server3]
type=server
address=1.1.1.3
port=3306
protocol=MariaDBBackend
# Monitor for the servers
#
# This will keep MaxScale aware of the state of the servers.
# MariaDB Monitor documentation:
# https://mariadb.com/kb/en/maxscale-25-monitors/
[Galera-Monitor]
type=monitor
module=galeramon
servers=server1,server2,server3
user=maxscale
password=XXXXXXXX
monitor_interval=2000
#Galera router service
[Galera-Service]
type=service
router=readwritesplit
servers=server1,server2,server3
user=maxscale
password=XXXXXXXX
lazy_connect=true
#Galera cluster listener
[Galera-Listener]
type=listener
service=Galera-Service
protocol=MariaDBClient
address=0.0.0.0
port=3306
#cache
[Cache]
type=filter
module=cache
storage=storage_inmemory
soft_ttl=300s
hard_ttl=600s
cached_data=shared
The image below shows the MaxScale log, where the query is routed to the server instead of being served from the cache.
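One thing I would double-check (an observation about the configuration above, not something stated in the post): in MaxScale a filter definition only takes effect once it is referenced from a service, so the [Galera-Service] section would also need a filters= line. A minimal sketch:
[Galera-Service]
type=service
router=readwritesplit
servers=server1,server2,server3
user=maxscale
password=XXXXXXXX
lazy_connect=true
filters=Cache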
Suraj
(103 rep)
Jan 17, 2022, 03:32 PM
• Last activity: Jan 31, 2022, 05:19 PM
1
votes
0
answers
368
views
How to prioritize indices in Postgres cache?
The **server config**:
- RAM = 16 GB
- #CPUs = 6
The **Postgres config**:
- There is only 1 Postgres instance. I.e. no partioning, no clustering, ...
- shared_buffers = 5GB
- work_mem = 10MB
- maintenance_work_mem = 800MB
- wal_buffers = 16MB
- effective_cache_size = 10GB
My users are experiencing a bad user experience while querying Postgres (v13) due to bad query performance. An index exists and is used. However, there are many READs and not so many HITs while the index is used, i.e. larger parts of the index need to be loaded from disk into the cache. Immediately executing the same query again gives the expected result: the relevant parts of the index were loaded into the cache by the first query and are now available, leading to no READs at all and a runtime of well under 1 sec, while the first query took approx. 120 sec.
The reason why Postgres does not keep my index in the cache also makes sense: there are other indices which are used much more often and thus push the less frequently used indices out of the cache. In fact only 10% of the cache is used for indices; the remaining 90% holds parts of the actual main table. There are many background jobs querying the Postgres instance which use other indices quite frequently. The indices used when users interact with the system (on a web frontend via an API) are needed much more seldom, which leads to the really bad user experience described above.
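(For readers wondering where the 10% figure comes from: the per-relation breakdown of shared_buffers can be measured with the pg_buffercache extension; the query below is the standard approach and is shown here only as a sketch.)
CREATE EXTENSION IF NOT EXISTS pg_buffercache;
-- Number of shared_buffers pages held per relation (indexes and tables alike)
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 20;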
**Is there a way/configuration/best_practice_architecture which somehow tackles this problem by telling Postgres to keep specific indices in cache all the time?**
I also have the feeling that the load on the Postgres instance might lead to the very bad performance where READs on the disc take "ages".
Thanks upfront to everyone considering this question!
**EDIT:**
This is an example query. In fact there are 9 indices (all on the same table, offers) that all have the same issue.
Table size: ~ 24,000,000 rows / 36GB
Indices size: 22 indices / 31GB
Table:
CREATE TABLE IF NOT EXISTS offers (
pk_offers serial not null,
crawler_id varchar not null,
source varchar not null,
fk_objects int null,
masteroffer boolean not null default true,
fk_offers_parent bigint null,
id_source varchar null,
offer_type varchar null,
object_type varchar null,
deal_type varchar not null,
imgs jsonb null,
geoCoordLat double precision null,
geoCoordLon double precision null,
offerorType varchar null,
offerorName varchar null,
offerorDetails varchar null,
offerorData jsonb null,
fkOfferorsCompanies bigint null,
street varchar null,
nr varchar null,
zip_code varchar null,
city varchar null,
country varchar null,
construction_year int null,
renovation_year int null,
object_condition varchar null,
construction_phase varchar null,
heating_type varchar null,
interior_niveau varchar null,
main_energy_source varchar null,
object_type_spec varchar null,
nr_rooms double precision null,
nrLevelOfObject int null,
nrlevelofobject_text varchar null,
nr_parking_space int null,
parking_space boolean null,
sqm_living double precision null,
sqm_use double precision null,
sqm_estate double precision null,
nr_levels int null,
currency varchar not null,
energy_consumption double precision null,
erbpacht boolean null,
boardinghouse boolean null,
moebliert boolean null,
mehrfamilien boolean null,
dachgeschoss boolean null,
penthouse boolean null,
kitchen_builtin boolean null,
elevator boolean null,
senior_friendly boolean null,
balcony boolean null,
terrace boolean null,
cellar boolean null,
garden boolean null,
rented_out boolean null,
monumental_protection boolean null,
new_building boolean null,
nrSeatsGuestroom int null,
heightRooms double precision null,
location varchar null,
housemoney double precision null,
selling_price double precision null,
selling_price_per_sqm_living double precision null,
minZipPricePerSqm double precision null,
lowQuartileZipPricePerSqm double precision null,
medianZipPricePerSqm double precision null,
highQuartileZipPricePerSqm double precision null,
maxZipPricePerSqm double precision null,
countZipPricePerSqm int null,
selling_price_parking_space double precision null,
selling_rent_per_month double precision null,
selling_rent_per_month_estimated double precision null,
selling_rent_per_month_per_sqm_living double precision null,
selling_rent_per_month_per_sqm_living_estimated double precision null,
grossMargin double precision null,
grossMarginEstimated double precision null,
grossMarginMerged double precision null,
taxRealEstateTransfer double precision null,
netMargin double precision null,
netMarginEstimated double precision null,
netNetMargin double precision null,
netNetMarginEstimated double precision null,
provision varchar null,
provision_rel double precision null,
provision_abs double precision null,
provision_vatincl boolean null,
provisionParsed boolean not null default false,
provision_exists boolean null,
zwangsversteigerung boolean null,
bieterverfahren boolean null,
renovierungsbeduerftig boolean null,
rentDuration varchar null,
net_cold_rent_per_month double precision null,
net_cold_rent_per_month_per_sqm_living double precision null,
minZipRentPerSqm double precision null,
lowQuartileZipRentPerSqm double precision null,
medianZipRentPerSqm double precision null,
highQuartileZipRentPerSqm double precision null,
maxZipRentPerSqm double precision null,
countZipRentPerSqm int null,
priceChangeRelInLast6Months double precision null,
rentChangeRelInLast6Months double precision null,
net_utilities_per_month double precision null,
net_utilities_per_month_and_sqm double precision null,
net_warm_rent_per_month double precision null,
net_parkingspace_per_month double precision null,
heating_costs_handling varchar null,
deposit double precision null,
deposit_text varchar null,
netColdRentPerYear double precision null,
building_exists boolean null,
building_land boolean null,
exploitation varchar null,
construction_following varchar null,
suggested_use varchar null,
grz double precision null,
gfz double precision null,
date_available_as_of varchar null,
area_separable_as_of varchar null,
additional_attributes_json varchar null,
count_offers_last_90_days int null,
population_zip_area int null,
zip_area_km2 double precision null,
population_per_km2 double precision null,
email_was_sent boolean not null default false,
html_extracted boolean not null default false,
htmlExtractionFailed boolean not null default false,
timestamp_html_extracted timestamptz default null,
calculationsDone boolean not null default false,
offerActive boolean not null default true,
outlier boolean not null default false,
outdatedoffer boolean not null default false,
is_duplicate boolean not null default false,
modernisation_score int null,
manually_added boolean not null default false,
inserted_by varchar null,
changed_cols_vs_parent jsonb null,
timestamp_available_since timestamptz default current_timestamp,
timestamp_last_active_check timestamptz default null,
timestamp_last_appearance timestamptz default current_timestamp,
timestamp_deactivated timestamptz default null,
timestamp_creation timestamptz default current_timestamp,
timestamp_modified timestamptz default current_timestamp,
PRIMARY KEY (pk_offers),
FOREIGN KEY (fk_objects) REFERENCES objects (pk_objects) ON DELETE NO ACTION,
FOREIGN KEY (fk_offers_parent) REFERENCES offers (pk_offers) ON DELETE NO ACTION, -- NOT YET IMPLEMENTED
FOREIGN KEY (fkOfferorsCompanies) REFERENCES offerors_companies (pk_offerors_companies) ON DELETE NO ACTION
);
Query:
EXPLAIN (ANALYZE, BUFFERS, COSTS, TIMING)
SELECT id, fk_objects
FROM offers_valid_renames
WHERE true
AND deal_type IN ('LIVING_APARTMENT_BUY')
AND country='DE'
AND offerActive=true
order by selling_price_per_sqm_living asc limit 2000
Used index:
CREATE INDEX IF NOT EXISTS offers__identify_sortby_sellingpricepersqmliving ON offers (deal_type, country, selling_price_per_sqm_living)
Query plan:
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.56..54777.59 rows=2000 width=16) (actual time=37.753..132250.683 rows=2000 loops=1)
Buffers: shared hit=19364 read=26722 dirtied=12 written=39
-> Index Scan using offers__identify_sortby_sellingpricepersqmliving on offers (cost=0.56..5331010.53 rows=194644 width=16) (actual time=37.750..132248.798 rows=2000 loops=1)
Index Cond: (((deal_type)::text = 'LIVING_APARTMENT_BUY'::text) AND ((country)::text = 'DE'::text))
Filter: offeractive
Rows Removed by Filter: 47727
Buffers: shared hit=19364 read=26722 dirtied=12 written=39
Planning Time: 1.188 ms
Execution Time: 132254.758 ms
glowfish
(11 rep)
Jul 20, 2021, 08:39 AM
• Last activity: Jul 20, 2021, 12:01 PM
2
votes
2
answers
8833
views
How to get the size of the result of a SQL query?
How do you get the size (in bytes) of the result of a SQL query, in MySQL 5.6?
EXPLAIN will return the number of fetched rows, but not their size.
My aim is to evaluate common queries to know a lower bound value for query_cache_limit so that they can be served by the Query Cache.
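One rough way to measure this per query (an approximation, not an exact match for what the query cache stores) is to compare the session's Bytes_sent counter before and after running the statement:
SHOW SESSION STATUS LIKE 'Bytes_sent';   -- note the value
SELECT ...;                              -- the query under test (placeholder)
SHOW SESSION STATUS LIKE 'Bytes_sent';   -- the difference is roughly the result size sent to the client
The cache stores the result roughly as it is sent to the client plus some per-block overhead, so leaving some headroom above this number when sizing query_cache_limit is safer.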
dr_
(1334 rep)
Nov 25, 2016, 12:10 PM
• Last activity: Mar 19, 2021, 10:04 PM
1
votes
0
answers
1797
views
MariaDB 10.3.27 - 100% Temporary tables created on disk despite/because of InnoDB?
I am using MariaDB 10.3 with InnoDB under a fairly high load. For some reason, 100% of my temporary tables are created on disk, despite using InnoDB and a large tmp_table_size (2G) (see the mysqltuner.pl and server config output).
Is this behavior because of InnoDB and just a display bug, or does the server really create files on my disk (which would explain the huge utilization of the disk)?
Also, enabling the query cache reduces the CPU load, despite its low utilization according to mysqltuner. Do you see similar behavior with MariaDB?
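The 100% figure mysqltuner reports can be cross-checked directly against the server counters; note also that the configuration further down sets big_tables = ON, which, if I read the documentation correctly, by itself forces internal temporary tables onto disk regardless of tmp_table_size. A minimal check:
-- Ratio of internal temporary tables that went to disk
SHOW GLOBAL STATUS LIKE 'Created_tmp%';
SHOW GLOBAL VARIABLES LIKE 'big_tables';
-- on-disk % = Created_tmp_disk_tables * 100 / Created_tmp_tables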
Output of mysqltuner.pl:
Currently running supported MySQL version 10.3.27-MariaDB-log
[OK] Operating on 64-bit architecture
-------- Storage Engine Statistics -----------------------------------------------------------------
[--] Status: +Aria +CSV +InnoDB +MEMORY +MRG_MyISAM +MyISAM +OQGRAPH +PERFORMANCE_SCHEMA +SEQUENCE +TokuDB
[--] Data in InnoDB tables: 8.6G (Tables: 3968)
[OK] Total fragmented tables: 0
-------- Analysis Performance Metrics --------------------------------------------------------------
[--] innodb_stats_on_metadata: OFF
[OK] No stat updates during querying INFORMATION_SCHEMA.
-------- Performance Metrics -----------------------------------------------------------------------
[--] Up for: 1h 1m 53s (4M q [1K qps], 65K conn, TX: 42G, RX: 1G)
[--] Reads / Writes: 98% / 2%
[--] Binary logging is enabled (GTID MODE: OFF)
[--] Physical Memory : 251.9G
[--] Max MySQL memory : 82.3G
[--] Other process memory: 0B
[--] Total buffers: 23.1G global + 120.3M per thread (500 max threads)
[--] P_S Max memory usage: 500M
[--] Galera GCache Max memory usage: 0B
[OK] Maximum reached memory usage: 35.4G (14.04% of installed RAM)
[OK] Maximum possible memory usage: 82.3G (32.70% of installed RAM)
[OK] Overall possible memory usage with other process is compatible with memory available
[OK] Slow queries: 0% (786/4M)
[OK] Highest usage of available connections: 20% (100/500)
[OK] Aborted connections: 0.01% (4/65835)
[!!] name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
[!!] Query cache may be disabled by default due to mutex contention.
[!!] Query cache efficiency: 0.0% (0 cached / 3M selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (4K temp sorts / 791K sorts)
[!!] Joins performed without indexes: 212
[!!] Temporary tables created on disk: 100% (327K on disk / 327K total)
[OK] Thread cache hit rate: 99% (209 created / 65K connections)
[OK] Table cache hit rate: 99% (5K open / 5K opened)
[OK] table_definition_cache(5000) is upper than number of tables(4134)
[OK] Open file limit used: 0% (72/1M)
[OK] Table locks acquired immediately: 100% (29 immediate / 29 locks)
[OK] Binlog cache memory access: 99.93% (52259 Memory / 52298 Total)
-------- Performance schema ------------------------------------------------------------------------
[--] Memory used by P_S: 500.2M
[--] Sys schema isn't installed.
-------- ThreadPool Metrics ------------------------------------------------------------------------
[--] ThreadPool stat is enabled.
[--] Thread Pool Size: 40 thread(s).
[--] Using default value is good enough for your version (10.3.27-MariaDB-log)
-------- MyISAM Metrics ----------------------------------------------------------------------------
[!!] Key buffer used: 18.2% (97M used / 536M cache)
[OK] Key buffer size / total MyISAM indexes: 512.0M/148.0K
[OK] Read Key buffer hit rate: 95.6% (180 cached / 8 reads)
-------- InnoDB Metrics ----------------------------------------------------------------------------
[--] InnoDB is enabled.
[--] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[OK] InnoDB buffer pool / data size: 20.0G/8.6G
[OK] Ratio InnoDB log file size / InnoDB Buffer pool size: 3.0G * 2/20.0G should be equal to 25%
[OK] InnoDB buffer pool instances: 20
[--] Number of InnoDB Buffer Pool Chunk : 160 for 20 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 99.99% (3615446911 hits/ 3615706982 total)
[!!] InnoDB Write Log efficiency: 66.63% (64938 hits/ 97466 total)
[OK] InnoDB log waits: 0.00% (0 waits / 32528 writes)
-------- AriaDB Metrics ----------------------------------------------------------------------------
[--] AriaDB is enabled.
[OK] Aria pagecache size / total Aria indexes: 128.0M/1B
[OK] Aria pagecache hit rate: 100.0% (1B cached / 272K reads)
-------- TokuDB Metrics ----------------------------------------------------------------------------
[--] TokuDB is enabled.
-------- XtraDB Metrics ----------------------------------------------------------------------------
[--] XtraDB is disabled.
-------- Galera Metrics ----------------------------------------------------------------------------
[--] Galera is disabled.
My config:
tmpdir = /tmp
default_storage_engine = InnoDB
binlog_ignore_db = phpmyadmin
binlog_cache_size = 49152 #(32768)
relay_log_space_limit = 10737418240
expire_logs_days = 2
sync_master_info = 10000
performance_schema = ON
big_tables = ON
slave_parallel_threads = 10
slave_net_timeout = 3600
slave_skip_errors = ddl_exist_errors
slave_transaction_retries = 13
innodb_stats_on_metadata = 0
#innodb_write_io_threads = 12 #(24)
#innodb_read_io_threads = 8 #(24)
innodb_buffer_pool_size = 20G #10737418240
innodb_buffer_pool_instances = 20
innodb_autoinc_lock_mode = 2
innodb_log_buffer_size = 512M
innodb_sort_buffer_size = 512M
innodb_log_file_size = 3G
max_heap_table_size = 2048M
max_connections = 500 #8192 #24576
max_connect_errors = 4294967295
max_binlog_size = 4147483647
max_relay_log_size = 1073741824
max_statement_time = 60
tmp_table_size = 2048M
thread_cache_size = 64
query_cache_type = 1 #1
query_cache_limit = 4096
query_cache_size = 4096
open_files_limit = 1024000
table_open_cache = 65000
table_definition_cache = 5000
key_buffer_size = 512M
# Per-thread Buffers
sort-buffer-size = 32M
read-buffer-size = 8M
read-rnd-buffer-size = 32M
join-buffer-size = 32M
slow_query_log = ON
long_query_time = 3
#skip-name-resolve = 1
[mysqld]
#init-connect = 'SET NAMES utf8mb4'
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
[galera]
[embedded]
[mariadb]
[mariadb-10.1]
Thank you very much in advance and I wish you a happy new year :-)
Edit: Thank you for the feedback! I already added my own table to cache the related-posts queries (which were the slow ones) for my WordPress sites. Here is the requested additional information:
- [MySQLTuner Report](https://pastebin.com/nJf4gTuX)
- [SHOW GLOBAL STATUS](https://pastebin.com/nMXUMCCy)
Hardware:
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz
Main Memory (256 GB):
Mem: 257894 177885 1083 2399 78925 75804
Disk:
Disk /dev/nvme0n1: 1200.2 GB
Disk /dev/nvme1n1: 1200.2 GB
Ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1031022
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 4096
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
iostat -xm (will update in the evening, when traffic is high)
Linux 4.19.62-mod-std-ipv6-64-rescue 01/02/2021 _x86_64_ (40 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
56.23 0.01 1.50 0.02 0.00 42.24
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
loop0 0.00 0.00 0.00 0.00 0.00 0.00 39.95 0.00 0.03 0.03 0.00 0.01 0.00
nvme0n1 0.01 76.34 11.48 135.84 0.28 2.45 37.88 0.01 0.13 0.33 0.11 0.52 7.63
nvme1n1 0.03 77.41 11.41 132.46 0.27 2.18 34.88 0.02 0.13 0.33 0.11 0.55 7.87
md4 0.00 0.00 22.83 432.94 0.55 4.62 23.23 0.00 0.00 0.00 0.00 0.00 0.00
md2 0.00 0.00 0.00 0.00 0.00 0.00 148.20 0.00 0.00 0.00 0.00 0.00 0.00
Htop shows more than 800 threads (mostly from php-fpm, which is allowed up to 400 children). MySQL actually needs less CPU (mostly around 130%, with spikes to 1300%), even though I thought WordPress was DB heavy and not PHP heavy.
SHOW FULL PROCESSLIST almost always contains just sleeping connections from the WordPress sites (I'm hesitant to post it here, as some friends reveal too much personal information in their table names).
Php-FPM Config
listen.allowed_clients = 127.0.0.1
listen.owner = nginx
listen.group = nginx
listen.mode = 0660
user = nginx
group = nginx
pm = ondemand
pm.max_children = 400
pm.process_idle_timeout = 6s
pm.max_requests = 20002
request_slowlog_timeout = 15s
clear_env = 1
rlimit_files = 162144
catch_workers_output = on
php_admin_value[memory_limit] = 256M
;php_admin_value[memory_limit] = 512M
php_value[session.save_handler] = files
php_value[soap.wsdl_cache_dir] = /var/lib/php/wsdlcache
php_value[opcache.file_cache] = /var/lib/php/opcache
php_value[session.save_path] = /var/lib/php/session
php_value[upload_max_filesize] = 1024M
php_value[post_max_size] = 1024M
php_value[max_execution_time] = 30s
request_terminate_timeout = 45s
php_value[opcache.enable] = 0
P S
(11 rep)
Dec 28, 2020, 08:15 AM
• Last activity: Jan 2, 2021, 01:02 PM
1
votes
1
answers
165
views
Sybase IQ cache database result
I'm using a query that calculates some values on a table with about 11 million rows, and I need to display the results in real time (on my site), but the calculation needs about 1 min to execute. The table content changes every 30 mins, so I don't have to recalculate the results each time a user reloads the page. How can I cache the results of the calculations? Via PHP (I use odbc) or using some SQL statement or Sybase IQ option? Thanks.
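One generic pattern that fits this refresh cycle (a sketch with made-up names, plain SQL rather than anything Sybase IQ specific) is to materialize the expensive calculation into a small results table every 30 minutes and let the page read only that table:
-- Run by a scheduled job every 30 minutes (table and column names are hypothetical)
DELETE FROM calc_results_cache;
INSERT INTO calc_results_cache (metric, value)
SELECT metric, SUM(amount)    -- the expensive ~1 minute aggregation goes here
FROM big_source_table
GROUP BY metric;
The PHP page then only ever runs a SELECT against calc_results_cache, which returns immediately.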
Daria
(153 rep)
Mar 17, 2015, 12:26 PM
• Last activity: Nov 26, 2020, 04:03 AM
0
votes
1
answers
632
views
How to determine actual memory occupied by Query Cache in MySQL?
How can we check whether the MySQL server is using a lot of memory/RAM?
How much memory is occupied in total by the query cache? How much memory is fragmented?
Basically, what is occupying x GB of memory in the MySQL server?
**Currently running MySQL Server version 5.7.25.**
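For the query cache part specifically, the occupied and fragmented portions can be derived from the variables and status counters (the same arithmetic used elsewhere on this page); a minimal sketch:
SHOW VARIABLES LIKE 'query_cache_size';        -- total allocation
SHOW GLOBAL STATUS LIKE 'Qcache_free_memory';  -- unused part
SHOW GLOBAL STATUS LIKE 'Qcache_free_blocks';  -- many small free blocks indicate fragmentation
-- memory used by cached results ≈ query_cache_size - Qcache_free_memory
For the server's overall memory use on 5.7, the sys schema view sys.memory_global_by_current_bytes (fed by Performance Schema memory instrumentation) gives a per-allocator breakdown, provided the relevant memory instruments are enabled.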
Prateek Pande
(157 rep)
Jul 27, 2020, 02:21 PM
• Last activity: Jul 28, 2020, 06:02 AM
0
votes
1
answers
982
views
Is it possible to get the name of a procedure by its plan handle or query plan hash if the execution plan is already cleared from the plan cache?
I ran a home-brew trace for the length of 1 day that captured a few things that were running. The trace includes the specific query text, the parent entity text of that query, the Plan Handle, and the Query Plan Hash.
Usually I can use RedGate's SQL Search on the query text to find the entity I'm looking for but that was a no-go in this case.
Unfortunately that was a day ago when the trace ran, and when I check the sys.dm_exec_cached_plans DMV, nothing is turning up for my plan handle.
Are there any other ways I can backtrace this query?
Update: Here's how the Query Store is currently configured on the database in question. Not sure if it can be useful for my case?
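Since Query Store is enabled on that database, the plan hash captured in the trace may still be resolvable there even after the plan cache entry is gone; a sketch (written from memory, so treat it as a starting point rather than a guaranteed match):
-- Look up the captured query_plan_hash in Query Store and resolve the parent object
SELECT qt.query_sql_text,
       OBJECT_NAME(q.object_id) AS parent_object,
       p.query_plan
FROM sys.query_store_plan AS p
JOIN sys.query_store_query AS q
  ON q.query_id = p.query_id
JOIN sys.query_store_query_text AS qt
  ON qt.query_text_id = q.query_text_id
WHERE p.query_plan_hash = 0x...;   -- the hash from the trace (left as a placeholder)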

J.D.
(40893 rep)
Feb 7, 2020, 03:16 PM
• Last activity: Feb 7, 2020, 04:25 PM
0
votes
0
answers
1259
views
Subquery caching in join
I have very complex MySQL queries that use the same subquery over and over again. I was wondering if MySQL will run the subquery multiple times or use the previous resultset. Is there any way to tell MySQL to save or cache the results?
My query is as follows:
SELECT
pc.categoryId,
COUNT(pc.productId)
FROM
product_categories pc
INNER JOIN
(
SELECT
t1.productId AS s1
FROM
product_categories t1
JOIN
product_highlights t2
ON
t1.productId = t2.productId
WHERE
t1.categoryId IN(7, 5) AND t2.highlightId IN(9, 10, 11)
GROUP BY
t1.productId
HAVING
2 * 3 = COUNT(
DISTINCT t1.categoryId,
t2.highlightId
)
) productsIds
ON
pc.productId = productsIds.s1
GROUP BY
pc.categoryId
**UPDATE:**
Here is EXPLAIN query:
+----+-------------+------------+------------+-------+---------------+---------+---------+------+------+----------+-----------------------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+------------+------------+-------+---------------+---------+---------+------+------+----------+-----------------------------------------------------------------+
| 1 | PRIMARY | | NULL | ALL | NULL | NULL | NULL | NULL | 4 | 100.00 | Using temporary; Using filesort |
| 1 | PRIMARY | pc | NULL | index | PRIMARY | PRIMARY | 4 | NULL | 5 | 20.00 | Using where; Using index; Using join buffer (Block Nested Loop) |
| 2 | DERIVED | t1 | NULL | index | PRIMARY | PRIMARY | 4 | NULL | 5 | 80.00 | Using where; Using index; Using temporary; Using filesort |
| 2 | DERIVED | t2 | NULL | index | PRIMARY | PRIMARY | 4 | NULL | 7 | 14.29 | Using where; Using index; Using join buffer (Block Nested Loop) |
+----+-------------+------------+------------+-------+---------------+---------+---------+------+------+----------+-----------------------------------------------------------------+
4 rows in set, 1 warning (0.00 sec)
SQL Fiddle of my database.
**UPDATE 2:**
Temporary table query:
CREATE TEMPORARY TABLE IF NOT EXISTS tempProducts
(
tmpProductId SMALLINT NOT NULL,
PRIMARY KEY(tmpProductId)
) ENGINE = MEMORY
SELECT
t1.productId AS tmpProductId
FROM
product_categories t1
JOIN product_highlights t2 ON
t1.productId = t2.productId
WHERE
t1.categoryId IN(7, 5) AND t2.highlightId IN(9, 10, 11)
GROUP BY
t1.productId
HAVING
2 * 3 = COUNT(
DISTINCT t1.categoryId,
t2.highlightId
);
Temp table will look like:
+--------------+
| tmpProductId |
+--------------+
| 1 |
| 2 |
+--------------+
> show index from tempProducts where Key_name = 'PRIMARY'
+--------------+------------+----------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+--------------+------------+----------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| tempproducts | 0 | PRIMARY | 1 | tmpProductId | NULL | 2 | NULL | NULL | | HASH | | |
+--------------+------------+----------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
**UPDATE 3:**
Query with temporary table:
SELECT
categoryId,
COUNT(productId)
FROM
product_categories tb1
INNER JOIN tempProducts tb2 ON
tb1.productId = tb2.tmpProductId
GROUP BY
categoryId;
I added a composite UNIQUE index to the product_categories table:
UNIQUE(productId, categoryId)
EXPLAIN query:
+------+-------------+-------+--------+-----------------------------+---------+---------+------------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+--------+-----------------------------+---------+---------+------------------+------+-------------+
| 1 | SIMPLE | tb1 | index | productId_categoryId_unique | PRIMARY | 4 | NULL | 5 | Using index |
| 1 | SIMPLE | tb2 | eq_ref | PRIMARY | PRIMARY | 2 | ss.tb1.productId | 1 | |
+------+-------------+-------+--------+-----------------------------+---------+---------+------------------+------+-------------+
Shahin
(65 rep)
Jul 24, 2019, 07:58 AM
• Last activity: Jul 27, 2019, 08:15 AM
Showing page 1 of 20 total questions