
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

8 votes
1 answer
145 views
Does MySQL/InnoDB use table caches when dealing with foreign tables?
TL;DR: Does a table referred to via `schema.table` get held in the table cache?

I have a large number of client databases that our application connects to based on an HTTP request. These are used for every part of the application except access logging. We use a second, central schema for recording login attempts across all instances of the application. The login table, however, is accessed via the connection to the local schema as a foreign table. For example:

UPDATE central_schema.login SET column = 'value';

Rather than using a second connection directly to the central schema and just dealing with login as a local table:

UPDATE login SET column = 'value';

Or switching schemas:

USE central_schema;
UPDATE login SET column = 'value';
USE client_schema;

Given that during busy times I see a _lot_ of simple UPDATE or SELECT queries waiting for the table to close, is it possible that MySQL doesn't cache tables when they are referred to cross-schema? Is it possible that, because the connection is to client_schema, MySQL wants to use the client schema's cache, and thus the foreign table can't be properly cached? And, if I can't get a definitive answer to that, how would I even begin to test what's in the cache and what isn't?
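On the last question (how to test what's cached): the table open cache in MySQL is server-global and keyed by the fully qualified schema.table name, so a sketch like the following can show whether handles for the central table are being held. This is a hedged starting point, not a definitive diagnosis; performance_schema.table_handles requires MySQL 5.7+:

-- Global counters: Opened_tables climbing while Open_tables sits at
-- table_open_cache suggests the cache is churning.
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL STATUS WHERE Variable_name IN ('Open_tables', 'Opened_tables');

-- MySQL 5.7+: list currently held table handles, including the
-- cross-schema central_schema.login handles.
SELECT object_schema, object_name, COUNT(*) AS handles
FROM performance_schema.table_handles
GROUP BY object_schema, object_name;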
RickMeasham (251 rep)
Aug 16, 2015, 11:56 AM • Last activity: Jul 28, 2025, 10:08 PM
0 votes
1 answer
327 views
Does daily pg_dump mess up postgres cache?
I migrated my geospatial Postgres 12.5 database to another cloud provider. I use PostGIS and have around 35 GB of data and 8 GB of memory. Performance is much worse than on my previous provider, and the new provider claims this is because the pg cache has to be "warmed up" every day after the automatic pg_dump backup operations that occur during the night. Geospatial queries that would normally take 50 ms sometimes take 5-10 s on the first request, and some that would run in 800 ms take minutes. Is there something else going on, or is the technical support right? If so, should I disable daily backups? Or can I somehow use a utility function (pg_prewarm?) to restore the cache?
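On the last point: yes, pg_prewarm is the usual tool for restoring the cache after a nightly dump. A minimal sketch, assuming the extension is available on the provider and using hypothetical table and index names:

CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Load the relation's blocks into shared_buffers after the backup:
SELECT pg_prewarm('my_geo_table');          -- hypothetical table
SELECT pg_prewarm('my_geo_table_geom_idx'); -- hypothetical index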
Pak (101 rep)
Feb 25, 2021, 10:04 AM • Last activity: Apr 27, 2025, 10:04 PM
0 votes
1 answer
65 views
Is pg_basebackup Performance Limited by Files Page Cache?
I have PostgreSQL 15 running in a Kubernetes Pod using the Zalando Spilo image, managed by Patroni. The container has a memory limit of 16 GB, and the database size is 40 GB (data dir on disk). When I reinitialize a replica with Patroni, it runs 'basebackup.sh', which in turn runs the 'pg_basebackup' command:

/usr/lib/postgresql/15/bin/pg_basebackup --pgdata=/home/postgres/pgdata/pgroot/data -X none --dbname='dbname=postgres user=standby host= port=5432'

I noticed that the first 16 GB are copied quickly, but the process slows down significantly afterward (I observed this once the copied size is approximately equal to the container's memory limit). Increasing the container memory limit to 32 GB showed a similar pattern: the first 32 GB were copied quickly, then it slowed down. Running the command manually:

/usr/lib/postgresql/15/bin/pg_basebackup --pgdata=/home/postgres/pgdata/pgroot/dummy_dir -X none --dbname='dbname=postgres user=standby host= port=5432'

reproduced the issue. Once the container's total memory (including the page cache, and in particular inactive file pages, which make up the lion's share of the container's memory) reaches the limit, 'pg_basebackup' slows down. Other disk write operations (e.g., generating and copying large files with 'dd') are not affected by the memory limit and remain fast. When 'pg_basebackup' is slow and the container memory limit is reached, I tried discarding the page cache with:

sync && echo 3 > /proc/sys/vm/drop_caches

and this made 'pg_basebackup' fast again. Is 'pg_basebackup' performance limited by the file page cache? Do you need any additional information from me? Any suggestions? Thanks for your help!
ALZ (171 rep)
Mar 10, 2025, 07:51 AM • Last activity: Apr 9, 2025, 11:43 AM
0 votes
2 answers
1006 views
SSRS Report returning old data
I have created and tested a report in Visual Studio and it returns data from a SQL Server database. Once uploaded to SSRS, the report returns old data. What would I need to check, as it appears the data source has been cached? (Screenshots: result in Visual Studio; results from the SSRS report.) Update: I have created a table in the SQL database, and can query it from VS and Report Builder, but not from a report in SSRS.
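One hedged way to check whether SSRS is returning stored results rather than running the dataset query is the execution log in the report server catalog database. A sketch (view and database names can vary by version and instance):

-- Source = 'Live' means the dataset query ran; 'Cache', 'Snapshot' or
-- 'History' mean previously stored results were returned instead.
SELECT TOP (20) ItemPath, TimeStart, Source, Status
FROM ReportServer.dbo.ExecutionLog3
ORDER BY TimeStart DESC;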
Jamesttuk (1 rep)
Mar 20, 2020, 11:12 AM • Last activity: Apr 3, 2025, 01:14 AM
1 vote
1 answer
1008 views
How to check how many keys are stored in Aerospike?
I am new to Aerospike and exploring AQL and ASADM. I executed the command "show sets" and got this response:
aql> show sets
+------------------+-------------+---------+-------------------+---------------+-------------------+-------------------+--------------+------------+
| disable-eviction | ns          | objects | stop-writes-count | set           | memory_data_bytes | device_data_bytes | truncate_lut | tombstones |
+------------------+-------------+---------+-------------------+---------------+-------------------+-------------------+--------------+------------+
| "false"          | "tvpreprod" | "35249" | "0"               | "data"        | "1684642852"      | "0"               | "0"          | "0"        |
| "false"          | "tvpreprod" | "12229" | "0"               | "epg"         | "3035957260"      | "0"               | "0"          | "0"        |
| "false"          | "tvpreprod" | "6009"  | "0"               | "account"     | "6288324"         | "0"               | "0"          | "0"        |
| "false"          | "tvpreprod" | "24821" | "0"               | "epg_account" | "59593681"        | "0"               | "0"          | "0"        |
| "false"          | "tvstage"   | "2956"  | "0"               | "data"        | "66573412"        | "0"               | "0"          | "0"        |
| "false"          | "tvstage"   | "1873"  | "0"               | "account"     | "1984140"         | "0"               | "0"          | "0"        |
| "false"          | "tvstage"   | "18060" | "0"               | "epg_account" | "30209254"        | "0"               | "0"          | "0"        |
| "false"          | "tvstage"   | "5197"  | "0"               | "epg"         | "1792880530"      | "0"               | "0"          | "0"        |
+------------------+-------------+---------+-------------------+---------------+-------------------+-------------------+--------------+------------+
[xx.xx.xx.xx:3000] 8 rows in set (0.781 secs)
But on executing select * from tvstage.data, I got this result:
138 rows in set (3.796 secs)
I am not able to understand why there is a difference in the counts. **Show sets** gives 2956 records, but **select** shows only 138 records. All the keys pushed have different TTLs, and expiry is set to no more than 24 hours.
Vibhav Singh Rohilla (199 rep)
Aug 25, 2022, 01:06 PM • Last activity: Mar 18, 2025, 01:03 AM
1 vote
2 answers
1759 views
How to cache queries in sqlalchemy just like I do it in mysql by using SQL_CACHE keyword?
I want to cache queries in SQLAlchemy. I have discovered that the SQL_CACHE keyword in MySQL can be helpful for caching queries on demand. But how do I do it in SQLAlchemy? Is it possible?
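For reference, a minimal sketch of the MySQL-side keyword itself (users is a hypothetical table; SQL_CACHE only has an effect when query_cache_type is DEMAND, and the query cache was removed entirely in MySQL 8.0):

-- With query_cache_type = 2 (DEMAND), only queries that opt in are cached:
SELECT SQL_CACHE id, name
FROM users
WHERE active = 1;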
Jitin Maherchandani (145 rep)
Feb 12, 2016, 10:57 AM • Last activity: Sep 19, 2024, 06:54 PM
0 votes
1 answer
192 views
Why is PostgreSQL reading from heap and how to improve cache usage?
I have an SQL query that executes slowly on PostgreSQL 13.15 with 128GB of memory. The query mainly performs a Bitmap Heap Scan, and I’ve noticed that many reads are coming from the heap instead of the cache. Here’s a link to the query and execution plan. An index that frequently appears in the plan looks like this:

create index ix_contacts__addresses_value__normalized
    on contacts__addresses
    using gin (company_id, field_name, field_id, value__normalized);

My memory settings:
• shared_buffers = 32GB
• work_mem = 64MB
• effective_cache_size = 96GB
• maintenance_work_mem = 1320MB

Questions:
1. Why is the query reading so much from the heap?
2. How can I configure PostgreSQL to better use memory (shared_buffers, work_mem, and other settings) to avoid unnecessary heap reads?
3. Should I consider modifying the indexes to reduce execution time?

I would greatly appreciate any advice on optimizing caching and overall performance.
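For question 1, a starting point is to see which tables are actually being read from disk rather than hit in shared_buffers; a minimal sketch using the standard statistics views (note that a "read" here may still be served by the OS page cache):

SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;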
Mykola Shved (13 rep)
Sep 13, 2024, 05:41 PM • Last activity: Sep 17, 2024, 03:46 PM
0 votes
1 answer
77 views
PostgreSQL: Is there any way to differentiate an actual Disk I/O from system cache hit?
When using EXPLAIN with BUFFERS, blocks reported as read are sometimes actually coming from the system (OS page) cache; this can sometimes be inferred from the read time. But is there a proper way to identify a system cache miss as of version 13?
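Not directly in core Postgres 13, but a common hedged heuristic combines track_io_timing (superuser-settable) with EXPLAIN: near-zero per-block read timings usually mean the OS cache supplied the block, while large timings indicate real disk I/O. A sketch with a hypothetical table:

SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM my_table;  -- hypothetical table
-- In the output, "shared read" blocks with tiny "I/O Timings: read"
-- were almost certainly OS cache hits; Postgres itself cannot tell
-- an OS cache hit apart from a disk read.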
goodfella (595 rep)
Sep 10, 2024, 04:06 AM • Last activity: Sep 10, 2024, 09:30 PM
11 votes
1 answer
13517 views
Qcache_free_memory not full yet I get a lot of Qcache_lowmem_prunes
I just started dabbling with the query cache for our CMS. Can anyone tell me (or at least give a good guess) why I get **a lot** of Qcache_lowmem_prunes when more than half of Qcache_free_memory is free?

query_cache_size=512M
query_cache_limit=1M

This is how it looks after about 12 hours:

show status like '%qcach%';
+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| Qcache_free_blocks      | 10338     |
| Qcache_free_memory      | 297348320 |
| Qcache_hits             | 10254104  |
| Qcache_inserts          | 6072945   |
| Qcache_lowmem_prunes    | 725279    |
| Qcache_not_cached       | 2237603   |
| Qcache_queries_in_cache | 48119     |
| Qcache_total_blocks     | 111346    |
+-------------------------+-----------+

This is how it looked after flush query cache;

show status like '%qcach%';
+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| Qcache_free_blocks      | 1         |
| Qcache_free_memory      | 443559256 |
| Qcache_hits             | 10307015  |
| Qcache_inserts          | 6115890   |
| Qcache_lowmem_prunes    | 725279    |
| Qcache_not_cached       | 2249405   |
| Qcache_queries_in_cache | 26455     |
| Qcache_total_blocks     | 54490     |
+-------------------------+-----------+
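For what it's worth, many Qcache_free_blocks alongside plenty of free memory is the classic sign of query cache fragmentation: the free space is scattered in pieces too small to fit new result sets, so MySQL prunes anyway. A hedged way to confirm, since FLUSH QUERY CACHE defragments the cache without emptying it (unlike RESET QUERY CACHE):

SHOW STATUS LIKE 'Qcache_free_blocks';  -- many small blocks = fragmentation
FLUSH QUERY CACHE;                      -- coalesces free blocks, keeps entries
SHOW STATUS LIKE 'Qcache_free_blocks';  -- should drop back toward 1

The numbers above show exactly this pattern: free_blocks fell from 10338 to 1 after the flush.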
Nifle (1472 rep)
Jan 30, 2013, 07:30 AM • Last activity: May 2, 2024, 11:05 PM
0 votes
1 answer
34 views
Does MySQL invalidate or amend a cached table when rows are added, updated or deleted?
What does MySQL (InnoDB) do when it has cached an entire table into memory and one or more rows in that table change? Does MySQL in this instance amend the cache in memory, or would it consider the cache dirty, dump the cached table data, and then re-cache the table the next time it is queried?
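For context: InnoDB caches pages (16 KB by default), not whole tables, and DML modifies the affected pages in place in the buffer pool, marking them dirty and flushing them to disk later; it does not discard and re-read the table. A sketch to observe this (information_schema.innodb_buffer_page is expensive to query on large buffer pools, and the table name pattern is hypothetical):

SELECT table_name,
       COUNT(*)                     AS pages_cached,
       SUM(number_records)          AS rows_cached,
       SUM(oldest_modification > 0) AS dirty_pages
FROM information_schema.innodb_buffer_page
WHERE table_name LIKE '%mytable%'   -- hypothetical table
GROUP BY table_name;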
TheLegendaryCopyCoder (105 rep)
Apr 25, 2024, 08:17 AM • Last activity: Apr 25, 2024, 03:02 PM
4 votes
2 answers
21630 views
Is it possible to delete cache in Mongo?
My case is that I run several queries in Mongo and measure the response time. I know that Mongo uses memory-mapped files and caches some data in RAM. If I create an index and run a query, the execution time is 2.07 seconds. When I run the query again, the execution time is 0.017 seconds. This means that the file is mapped into memory by the OS on the first access, and the OS will keep it in cache until it decides to page it out. Because I want to run some benchmarks, I want to somehow remove the mapped results from memory.

free -g:
             total       used       free     shared    buffers     cached
Mem:            29         29          0          0          0         29
-/+ buffers/cache:          0         29
Swap:            0          0          0

I tried db.collection.getPlanCache().clear() but it doesn't work. Then I tried:

sudo service mongod stop
(reboot the machine)
sync; echo 3 > /proc/sys/vm/drop_caches
sudo service mongod start

free -g:
             total       used       free     shared    buffers     cached
Mem:            29          0          0          0          0          0
-/+ buffers/cache:          0         29
Swap:            0          0          0

From the above it seems that the cache is free now. But when I execute the query again, the execution time is 0.017 seconds. It seems that Mongo has managed to cache or preserve the data somewhere. The desired execution time would be 2.07 seconds, as if the cache were cleared. Can anyone tell me how I can actually delete the cached results in Mongo?
e7lT2P (175 rep)
May 30, 2018, 11:03 AM • Last activity: Apr 16, 2024, 08:30 PM
0 votes
1 answer
133 views
MongoDB: What causes bytesRead in an insert operation?
Recently we had high CPU/memory and I/O usage on our MongoDB. While checking the logs, all I found was some insert activity during this period. While inspecting the logs I noticed that most of the insert entries have bytesRead in the storage section, so I suspect this caused the I/O, and then caching the data caused the high memory usage. After the insert spike, the I/O and CPU went down, but memory stayed the same until a restart resolved it. Is this disk read normal for an insert operation? We are using **Mongo v4.0** with the WiredTiger storage engine in a CentOS 7 VM.

2024-02-14T23:39:44.533+0800 I COMMAND [conn939845] insert db.user_log ninserted:1 keysInserted:11 numYields:0 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } storage:{ data: { bytesRead: 34390, timeReadingMicros: 140837 } } 141ms
2024-02-14T23:40:16.785+0800 I COMMAND [conn939845] insert db.user_log ninserted:1 keysInserted:11 numYields:0 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } storage:{ data: { bytesRead: 24150, timeReadingMicros: 506594 } } 507ms
goodfella (595 rep)
Feb 15, 2024, 09:09 AM • Last activity: Feb 23, 2024, 07:29 PM
2 votes
4 answers
89 views
Buffer Pool caching in SQL: clarification
I might sound a little naive asking this question, but I couldn't find the answer anywhere. My understanding is that the buffer pool in a SQL database keeps disk pages in cache, so that when a query comes in and the page is available in the cache, the data can be served without going to disk. Now my question: do the pages in the buffer pool hold the actual data (complete rows) and serve it directly, or do they keep the disk offset and fetch from that offset whenever needed? My earlier understanding was the former, but when reading more I learned about index-organized tables and heap-organized tables, which made me confused about how it actually works, or whether it could be both.
saurabh singh (21 rep)
Oct 20, 2023, 08:43 PM • Last activity: Oct 21, 2023, 06:41 PM
2 votes
1 answer
1163 views
How to know how much memory is used by each DB user in the cache buffer?
Using this query:
SELECT
        A.CACHE_BUFFER_TOTAL,
        B.CACHE_BUFFER_USED,
        A.CACHE_BUFFER_TOTAL - B.CACHE_BUFFER_USED AS CACHE_BUFFER_FREE
    FROM
        (SELECT BYTES / 1024 / 1024 AS CACHE_BUFFER_TOTAL FROM V$SGAINFO WHERE NAME = 'Buffer Cache Size') A,
        (SELECT (SELECT COUNT(*) FROM V$BH) * (SELECT BLOCK_SIZE FROM V$BUFFER_POOL) / 1024 / 1024 AS CACHE_BUFFER_USED FROM DUAL) B;
... I can determine the total size of the buffer cache and the total used; however, I would like to know how much of the buffer cache each user's queries are occupying. Is there a simple way to achieve this in Oracle 21c?
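One common approximation is to group the cached blocks in V$BH by the owner of the objects they belong to. A sketch (with a caveat: this attributes memory to the schema that owns the cached objects, not to the session whose queries pulled them in, which is usually the closest you can get from the buffer cache side):

SELECT o.owner,
       COUNT(*) * p.block_size / 1024 / 1024 AS mb_cached
FROM   v$bh b
JOIN   dba_objects o ON o.data_object_id = b.objd
CROSS JOIN (SELECT block_size FROM v$buffer_pool WHERE name = 'DEFAULT') p
GROUP  BY o.owner, p.block_size
ORDER  BY mb_cached DESC;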
Isaac PM (23 rep)
Aug 24, 2023, 05:04 AM • Last activity: Aug 24, 2023, 08:57 AM
1 vote
1 answer
2884 views
What are the pros and cons of using CLIENT_RESULT_CACHE_SIZE and RESULT_CACHE_MODE
I have multiple questions for Oracle Database 11g Release 11.2.0.1.0. What are the benefits of using CLIENT_RESULT_CACHE_SIZE and RESULT_CACHE_MODE? How will the client-side cache be kept in sync if data changes on the server side? Do we have an AUTO mode for RESULT_CACHE_MODE? What is the recommended mode for this? If I execute a single query 10 times with CLIENT_RESULT_CACHE enabled, will the query be run 10 times at the server? Please point me toward any documentation regarding this. Thanks in advance.
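For context on what RESULT_CACHE_MODE controls: in the default MANUAL mode, individual queries opt in with the RESULT_CACHE hint, and the database tracks dependencies so that cached result sets (server- and client-side) are invalidated when the underlying tables change. A minimal sketch with a hypothetical table:

SELECT /*+ RESULT_CACHE */ region, SUM(amount)
FROM   sales               -- hypothetical table
GROUP  BY region;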
Phani (113 rep)
Oct 19, 2012, 06:09 PM • Last activity: Jun 19, 2023, 09:07 PM
1 vote
1 answer
2693 views
Why is the second query so much faster than the first one?
I have a query that looks like this:
SELECT
  DISTINCT "articles"."id",
  "articles"."company_uuid",
  "articles"."status",
  "articles"."discount_reference",
  "articles"."created_at"
  --- more columns
FROM
  "articles"
  INNER JOIN "customers" ON "customers"."rec" = 'f'
  AND "customers"."article_id" = "articles"."id"
WHERE
  "articles"."type" = 'sending_request'
  AND "articles"."hidden" = 'f'
  AND "articles"."discount_reference" = 'Discount/737251623'
  AND "articles"."api_domain" = 'company'
  AND "articles"."status" IN ('completed', 'active')
  AND (customers.search_text ILIKE unaccent('%verb%'))
ORDER BY
  authored_on DESC NULLS LAST
LIMIT
  20 OFFSET 0;
The first run of the query is slow, but when I rerun it, it is always faster. I do not see a clear answer when I compare the plans. I have a GIN trigram index for the customers search text. https://explain.dalibo.com/plan/b11657f576699fa8 And the second run: https://explain.dalibo.com/plan/g81h74b9g521g5e7 Is the difference in I/O and buffers the source of the speedup? I am running PostgreSQL 14 on RDS.
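Very likely, yes: the fast second run finds the pages already cached. One way to verify, as a sketch assuming the pg_buffercache extension can be installed (it is on the standard RDS extension list), is to check how much of each relation sits in shared_buffers before and after the first run:

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT c.relname, count(*) AS buffers_cached
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE c.relname IN ('articles', 'customers')
GROUP BY c.relname;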
Mio (651 rep)
May 16, 2023, 04:26 PM • Last activity: May 17, 2023, 08:10 AM
1 vote
1 answer
115 views
Which SQL Server component is in charge of checking cached execution plans for a given query?
I understand that whenever SQL Server receives a query, it looks in the plan cache for any matching execution plans for that query. However, most of the resources I have encountered don't specify which component is in charge of this lookup. This article suggests that it's the Buffer Manager, but I failed to find any evidence for this in Microsoft's Query Processing Architecture Guide. On the other hand, the speaker in this presentation (around 42:43) seems to suggest that it's the Cmd Parser. So, which one is it?
Mehdi Charife (131 rep)
May 12, 2023, 11:23 AM • Last activity: May 12, 2023, 07:31 PM
0 votes
0 answers
189 views
Disk based Key-Value store to use as LRU cache?
I am looking for a key-value store to use as an LRU cache for my application. It needs to hold a lot of data (100G - 1000G), which is why I say disk based. Things I have found via Google seem to be focussed on being in memory. Other requirements:

- Automatically evict the least recently accessed keys when full
- Highly available (data replicated across 3 nodes, service must work if 2 nodes are up)
- 100k writes, 300k reads per hour
- Values are binary (1k-100k more or less)
- On-premise installation on Ubuntu
- Low maintenance (this is for a side-project)
- Cheap/free (this is for a side-project)
- Reliable (reloading the data from source will be very slow, so the cache losing its contents is bad)

Currently I have all of the data in Apache Cassandra. I want to move the data to Backblaze and keep only the hot data local. On a cache miss I will fetch from Backblaze and add to the cache. New/updated keys will write to the cache and later to Backblaze (it might be down). Old data is very rarely accessed. Cassandra is working very well, but I need to add nodes more frequently than I would like as the cluster fills up (currently have 6). The values stored at Backblaze will contain a collection of approx 20 related keys. Any suggestions? Thanks.
David Tinker (471 rep)
Apr 17, 2023, 08:20 PM
0 votes
2 answers
65 views
Performance for query filtered by user preference
I have an app that shows data based on user preference. I will simplify it in this theoretical example: **The app shows articles filtered based on user preference for their favorite authors or their favorite topics.** The data will be updated with high frequency, since new articles are published every minute. The approach I am going for now: the app makes a request to an endpoint with the user's favorites in the URL params, like this:

- https://example.com/api/articles/today?fav_authors=1,5,3,7,10&topics=3,6,9

My query to the db is something like this:

SELECT *
FROM articles_table
WHERE
    (author_id IN (1,5,3,7,10) OR topic_id IN (3,6,9))
    AND (published_datetime BETWEEN '$date_today 00:00:00' AND '$date_next 00:00:00')

(Note the added parentheses around the OR: without them, AND binds tighter than OR, and the date filter would apply only to the topic_id branch.)

And I am creating an index for the columns published_datetime, author_id, and topic_id. What I want to know is whether there is any caching mechanism or improvement I can apply to this approach to achieve the best performance and best utilization of db resources, as in the sketch below. My database server runs on an AWS RDS t3.medium, which has 2 CPUs and 4 GB of RAM. I am expecting a high number of requests, reaching thousands per minute, so I want to be solid about my approach and what I need to improve before I publish it to production. I am currently caching HTTP requests for a short TTL with Varnish, but this won't help much in this case, since most requests will be unique in their combination of preferences.

***My setup:***
* WordPress - as CMS & REST API
* Varnish - as caching layer
* Mariadb - as database
* InnoDB - as table engine

**What's the best approach for my case to allow users to query the db based on preference with optimal caching?**

**Edit:** I tested my queries to see how fast they run and got:
* 0.0117s on a user's first request with a given preference (unique).
* 0.0040s when another request is made for a new date after the first request (preference unchanged, date changed).

Is this considered good?
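A minimal sketch of one way to index for this query (assumptions: MariaDB/InnoDB as in the setup above; whether the optimizer chooses an index_merge union of the two indexes should be verified with EXPLAIN):

-- One composite index per OR branch; the optimizer can then union the
-- two range scans (index_merge) instead of scanning the whole table:
CREATE INDEX idx_articles_author_date
    ON articles_table (author_id, published_datetime);
CREATE INDEX idx_articles_topic_date
    ON articles_table (topic_id, published_datetime);

An equivalent, often more predictable form is to rewrite the OR as a UNION of two queries, each hitting one of these indexes.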
Kash (103 rep)
Mar 23, 2023, 07:11 PM • Last activity: Apr 6, 2023, 05:36 AM
43 votes
3 answers
19223 views
Parameter Sniffing vs VARIABLES vs Recompile vs OPTIMIZE FOR UNKNOWN
So we had a long-running proc causing problems this morning (30 sec+ run time). We decided to check whether parameter sniffing was to blame. So we rewrote the proc and assigned the incoming parameters to local variables so as to defeat parameter sniffing. A tried/true approach. Bam, query time improved (less than 1 sec). When looking at the query plan, the improvement was found in an index the original wasn't using.

Just to verify that we didn't get a false positive, we did a dbcc freeproccache on the original proc and reran it to see if the improved results would be the same. But, to our surprise, the original proc still ran slow. We tried again with WITH RECOMPILE, still slow (we tried a recompile on the call to the proc and inside the proc itself). We even restarted the server (dev box obviously).

So, my question is this... how can parameter sniffing be to blame when we get the same slow query on an empty plan cache... there shouldn't be any parameters to sniff??? Are we instead being affected by table stats not related to the plan cache? And if so, why would setting the incoming parameters to variables help?

In further testing we also found that inserting OPTION (OPTIMIZE FOR UNKNOWN) in the internals of the proc **DID** get the expected improved plan. So, some of you folks smarter than I, can you give some clues as to what's going on behind the scenes to produce this type of result?

*On another note, the slow plan also gets aborted early with reason GoodEnoughPlanFound while the fast plan has no early abort reason in the actual plan.*

In summary:
- Creating variables out of incoming parameters (1 sec)
- with recompile (30+ sec)
- dbcc freeproccache (30+ sec)
- OPTION (OPTIMIZE FOR UNKNOWN) (1 sec)

**UPDATE:**
See the slow execution plan here: https://www.dropbox.com/s/cmx2lrsea8q8mr6/plan_slow.xml
See the fast execution plan here: https://www.dropbox.com/s/b28x6a01w7dxsed/plan_fast.xml

*Note: table, schema, object names changed for security reasons.*
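For readers landing here, a minimal sketch of the two patterns being compared (dbo.Orders and @CustomerId are made-up names); both make the optimizer estimate from average density statistics instead of a sniffed parameter value, which is why they produce the same plan:

CREATE PROCEDURE dbo.GetOrders @CustomerId int
AS
BEGIN
    -- Pattern 1: copy the parameter into a local variable. Its value
    -- is unknown at compile time, so the plan is built from density.
    DECLARE @LocalId int = @CustomerId;
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @LocalId;

    -- Pattern 2: same effect, stated explicitly with a hint.
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (OPTIMIZE FOR UNKNOWN);
END;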
RThomas (3446 rep)
Jan 28, 2013, 10:29 PM • Last activity: Mar 7, 2023, 10:03 AM