Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
2
answers
57
views
How do I update Java prepared statements from Postgres 12 to Postgres 16?
I have a Java/JSP webapp that uses a Postgres database. I recently migrated the webapp from one server running Ubuntu 20.04 and Postgres 12 to another server running Ubuntu 24.04 and Postgres 16.
I've found that the webapp now generates a Postgres-related error
org.postgresql.util.PSQLException: ERROR: trailing junk after parameter at or near "$5WHERE"
with a reference to one of the Java class files in the webapp. That file generates an SQL query using Java's prepared statements to execute on the Postgres database. The exact string "$5WHERE" appears nowhere in my source code, but of course there are several instances of the SQL keyword " WHERE " (surrounded by spaces) in the file in question.
**Question 1:** Where does the "$5" come from in the error message "$5WHERE"? I can't find any reference to this online, and it might be a clue.
The closest example I can find online to the error message as a whole is this [Stack Overflow](https://stackoverflow.com/questions/75482094/postgresql-15-trailing-junk-after-numeric-literal-error) question from 2023. The question cites the transition to Postgres 15, whose [release notes](https://www.postgresql.org/docs/15/release-15.html) state that one change is
> Prevent numeric literals from having non-numeric trailing characters (Peter Eisentraut)
> Previously, query text like 123abc would be interpreted as 123 followed by a separate token abc.
I'm not certain this is related to my error. The only numeric literal data I have in the problem code is a primary key, and I'm sure it doesn't have trailing characters.
I've pulled all of the code I can tell is relevant from the Java class cited in the error and cleaned it up as best I can:
```java
import java.sql.Connection;
import java.sql.PreparedStatement;

public void updateUser(WorkGroup group, String password) {
    PreparedStatement ps = null, ps2 = null;
    Connection conn = DBManager.getConnection();
    conn.setAutoCommit(false);
    // Whether the user's password is being updated
    boolean pass = false;
    String sql = "UPDATE user_record SET join = ?, role = ?, sales_training = ?, sec_training = ?, field_training = ?";
    if (StringUtils.isNotBlank(password)) {
        sql += ", hashedpassword = ? ";
        pass = true;
    }
    sql += "WHERE id = ?;";
    ps = conn.prepareStatement(sql);
    ps.setString(1, group.getJoinYear());
    ps.setString(2, group.getRole());
    ps.setBoolean(3, group.getSalesTraining());
    ps.setBoolean(4, group.getSecTraining());
    ps.setBoolean(5, group.getFieldTraining());
    if (pass) {
        String hashedPassword = BCrypt.hashpw(password, BCrypt.gensalt(12));
        ps.setString(6, hashedPassword);
        ps.setInt(7, group.getId());
    } else {
        ps.setInt(6, group.getId());
    }
    ps.executeUpdate();
}
```
I want to stress that this code has worked for years, so there can't be anything fundamentally wrong with it. My top suspicion is that this manner of preparing the SQL statement conflicts with a change in one of the Postgres major versions between 12 and 16, but I can't tell what because the statement preparation obscures so much.
**Question 2:** Does this code need updating to remain current with Postgres, and what needs to be updated?
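For reference on Question 1: the PostgreSQL JDBC driver rewrites each `?` placeholder into a positional parameter `$1`, `$2`, ... before sending the statement text to the server. A sketch of what the server would receive from the code above when no password is supplied (reconstructed, not captured from a log):

```sql
UPDATE user_record SET join = $1, role = $2, sales_training = $3,
       sec_training = $4, field_training = $5WHERE id = $6;
-- "$5" is the field_training placeholder. Because the appended string
-- "WHERE id = ?;" has no leading space, the scanner sees the single
-- token "$5WHERE". Older versions evidently tokenized this as "$5"
-- followed by "WHERE" (the code worked on Postgres 12); the stricter
-- lexing in newer versions rejects it as trailing junk. A leading
-- space before WHERE would keep the tokens separate.
```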
Borea Deitz
(151 rep)
Jun 3, 2025, 08:26 PM
• Last activity: Jun 4, 2025, 09:29 AM
0
votes
1
answers
288
views
Use quoted schema in an insert statement executed thorough PreparedStatement in Java
I created an unquoted table `my_table` under a quoted schema `"Quoted_User"`. When executing the statement

```sql
INSERT INTO "Quoted_User".my_table (custom_id) VALUES (DEFAULT)
```

using the IntelliJ Oracle browser, it works; but when I perform the same statement through a PreparedStatement in Java, Oracle complains that the schema does not exist. Here is the prepared statement Java line:

```java
statement = connection.prepareStatement(insertSQL, new String[]{custom_id});
```

and here is `insertSQL` as shown in the inspector:

```
INSERT INTO \"Quoted_User\".my_table (custom_id) VALUES (DEFAULT)
```
It's as if Oracle were stripping the quotes; I say this because the error message shows the name in uppercase, which is how Oracle reports identifiers it treats as case-insensitive.
Error returned by Oracle:
java.sql.SQLSyntaxErrorException: ORA-04043: object "QUOTED_USER" does not exist
There should be a way to use quoted schema names with prepared statements I guess.
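For reference, Oracle folds unquoted identifiers to uppercase while quoted identifiers keep their exact case, which is consistent with the quotes being lost somewhere before the statement reaches the parser. A sketch of the distinction:

```sql
-- Unquoted: folded to uppercase, resolves to schema QUOTED_USER
INSERT INTO Quoted_User.my_table (custom_id) VALUES (DEFAULT);

-- Quoted: case preserved, resolves to schema "Quoted_User"
INSERT INTO "Quoted_User".my_table (custom_id) VALUES (DEFAULT);
```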
Thanks for your hints,
Mattia
Mattia Moretta
(3 rep)
Aug 24, 2019, 11:28 AM
• Last activity: May 10, 2025, 08:03 PM
1
votes
1
answers
552
views
Can you use PREPARE statement or plpgsql function to make first query on new connection fast?
Currently, I am standing up an RShiny (web) application that queries data from an AWS RDS PostgreSQL database (PostgreSQL 12) and feeds it to my application. When a new user logs into the website, they establish a new connection to our database (important). I have spent a lot of time optimizing our database structure underneath the website to improve query performance: indexing, reading and optimizing query plans, re-organizing table structures, etc. Unfortunately, I am not achieving the success I need. What I have found is that when a query is performed, it is often slow because there is no cached information about the optimal plan, so the first time the query is run the data retrieval is slow. The second time, however, it is **much quicker** and runs as I have optimized it.
Recently, I have been exploring `PREPARE` statements with much better success. However, I am running into issues where:

1. The cached `PREPARE` statement is not accessible to another user
2. The cached `PREPARE` statement doesn't persist over multiple connections

My understanding is that `PREPARE` statements only exist over the lifetime of the connection, so these findings aren't surprising, but they leave me back at my original problem. I could have a series of `PREPARE` statements executed when the user logs in for the first time on the website so that these are available, but that doesn't seem sustainable to me.

Is there a way to have query plans (ideally a `PREPARE` statement or `plpgsql` function) cached across connections, such that a user logging into the website for the first time queries data with the optimized plan on the first attempt? I keep running into this issue where the first query on a new connection is always (frustratingly) slow.
Second, from a higher level: is there a better approach to optimizing website query performance when the data sits on top of a PostgreSQL database?
**EDIT / UPDATE**
This is the current performance I am seeing between the first and second run. First, using a `plpgsql` function:

```
Run 1:  Planning Time: 0.084 ms
        Execution Time: 1411.143 ms
Run 2:  Planning Time: 0.028 ms
        Execution Time: 116.966 ms
```

When I use a `PREPARE` statement:

```
Run 1:  Planning Time: 26.579 ms
        Execution Time: 232.347 ms
Run 2:  Planning Time: 16.594 ms
        Execution Time: 77.716 ms
```
The difference between the two approaches is likely due to some of the tables already being in memory, as J.D. mentioned, which also explains the difference between the 1st and 2nd runs of each. Ideally, everything would run at the high-end speed I am seeing on the 2nd runs: under 100 ms in total between planning and execution.
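Worth noting against the numbers above: the `plpgsql` first run shows sub-millisecond planning but ~1.4 s execution, so most of the first-run cost there looks like cold data rather than plan building. One avenue to test is the `pg_prewarm` extension, which loads a relation into the buffer cache before the first user query arrives (a sketch; the table and index names are illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Warm the hottest tables/indexes right after the instance starts,
-- before the first user connects:
SELECT pg_prewarm('my_hot_table');
SELECT pg_prewarm('my_hot_table_pkey');
```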
ctpickard157
(11 rep)
Dec 7, 2021, 12:55 AM
• Last activity: Apr 25, 2025, 06:03 PM
0
votes
1
answers
42
views
Memory Fragmentation in Primary MariaDB 10.6.18 with ProxySQL 2.5.5
Hello Community Members,

We recently migrated our DB server from on-prem to Azure, and we are struggling with memory fragmentation in the MariaDB database: the DB is consuming more than twice the allocated buffer pool memory. We see this fragmentation only on the primary master, and only when the application connects to MariaDB through ProxySQL, so ProxySQL looks like the culprit.

Our topology is one master with 2 standby databases, plus 4 ProxySQL nodes: one serving hostgroup 1 and the remaining 3 serving hostgroup 2. We have enabled multiplexing. We initially had heavy memory fragmentation, which was reduced after changing the following variables in ProxySQL:
During Migration:
--------------------------
`mysql-max_stmts_per_connection = 100` and `max_prepared_stmt_count = 500000`. Memory fragmentation was quite severe, and we had to switch over the master database and bounce it after the switchover to release the memory occupied by MariaDB.

Post Migration:
--------------------
`mysql-max_stmts_per_connection = 50` at ProxySQL and `max_prepared_stmt_count = 250000` at MariaDB. This slowed the fragmentation down, but MariaDB still consumes memory in proportion to the number of prepared statements.
Our current understanding:

- ProxySQL doesn't automatically close prepared statements unless the client explicitly requests it.
- Connection multiplexing in ProxySQL may be preventing statements from being properly closed.
- The application might not be explicitly closing prepared statements, relying on ProxySQL to handle cleanup.
- If multiplexing is enabled, ProxySQL keeps connections open and doesn't send COM_STMT_CLOSE properly. In our case, multiplexing is enabled and no query rules are in place.
```
sh-4.4$ cat /var/lib/mysql/my.cnf | grep buffer_pool; free -g; top | grep mysqld
innodb_buffer_pool_size = 96G
              total        used        free      shared  buff/cache   available
Mem:            660         378         269           0          13         279
Swap:             0           0           0
1 mysql  20  0  386.8g  374.2g  28312 S 313.3  56.6  13770:26 mysqld
1 mysql  20  0  386.8g  374.2g  28312 S 320.0  56.6  13770:36 mysqld
1 mysql  20  0  386.8g  374.2g  28312 S 288.7  56.6  13770:44 mysqld
1 mysql  20  0  386.8g  374.2g  28312 S 236.0  56.6  13770:51 mysqld
1 mysql  20  0  386.8g  374.2g  28312 S 336.3  56.6  13771:02 mysqld
1 mysql  20  0  386.8g  374.2g  28312 S 321.9  56.6  13771:11 mysqld
1 mysql  20  0  386.8g  374.2g  28312 S 309.3  56.6  13771:21 mysqld
1 mysql  20  0  386.8g  374.2g  28312 S 298.0  56.6  13771:30 mysqld
1 mysql  20  0  386.8g  374.2g  28312 S 279.7  56.6  13771:38 mysqld
1 mysql  20  0  386.8g  374.2g  28312 S 152.0  56.6  13771:42 mysqld
1 mysql  20  0  386.8g  374.2g  28312 S 172.1  56.6  13771:48 mysqld
1 mysql  20  0  386.8g  374.2g  28312 S 238.0  56.6  13771:55 mysqld
```
```
MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'Memory_used';
+---------------+--------------+
| Variable_name | Value        |
+---------------+--------------+
| Memory_used   | 117572547280 |
+---------------+--------------+
1 row in set (0.001 sec)

MariaDB [(none)]> select 117572547280/1024/1024/1024 'Memory_used in GB';
+-------------------+
| Memory_used in GB |
+-------------------+
|  109.497967436910 |
+-------------------+
1 row in set (0.000 sec)
```
```
MariaDB [(none)]> SELECT
    ->   NOW() AS Query_Time,
    ->   -- Temporary Tables Statistics
    ->   (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Created_tmp_tables') AS Tmp_Tables_Created,
    ->   (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Created_tmp_disk_tables') AS Tmp_Tables_Disk,
    ->   (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Created_tmp_memory_tables') AS Tmp_Tables_Memory,
    ->   -- Open Tables & Table Cache
    ->   (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Open_tables') AS Open_Tables,
    ->   (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Opened_tables') AS Opened_Tables,
    ->   (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_VARIABLES WHERE VARIABLE_NAME = 'table_open_cache') AS Table_Open_Cache,
    ->   (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_VARIABLES WHERE VARIABLE_NAME = 'table_definition_cache') AS Table_Definition_Cache,
    ->   -- Prepared Statements Memory Usage
    ->   (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Prepared_stmt_count') AS Active_Prepared_Statements,
    ->   (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Com_prepare_sql') AS Total_Prepares,
    ->   (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Com_deallocate_sql') AS Total_Deallocates,
    ->   -- Memory Consumed by Prepared Statements (Estimation)
    ->   ROUND((SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Prepared_stmt_count') * 512 / 1024, 2) AS Estimated_Memory_MB
    -> ;

Query_Time:                 2025-02-25 08:49:26
Tmp_Tables_Created:         17772934
Tmp_Tables_Disk:            1985203
Tmp_Tables_Memory:          NULL
Open_Tables:                400
Opened_Tables:              39233
Table_Open_Cache:           400
Table_Definition_Cache:     400
Active_Prepared_Statements: 240131
Total_Prepares:             0
Total_Deallocates:          NULL
Estimated_Memory_MB:        120065.50
1 row in set (0.003 sec)
```
ProxySQL:
----------------
```
mysql> SELECT srv_host, srv_port, ConnUsed, ConnFree, ConnOK, Queries,
    ->        ROUND(Queries / NULLIF(ConnOK, 0), 2) AS MultiplexingEfficiencyRatio
    -> FROM stats_mysql_connection_pool
    -> ORDER BY MultiplexingEfficiencyRatio ASC
    -> LIMIT 10;
+-----------------+----------+----------+----------+--------+-----------+-----------------------------+
| srv_host        | srv_port | ConnUsed | ConnFree | ConnOK | Queries   | MultiplexingEfficiencyRatio |
+-----------------+----------+----------+----------+--------+-----------+-----------------------------+
| 2c_oltp.mariadb | 3052     | 2        | 4        | 1555   | 55156669  | 35470.0                     |
| 2b_oltp.mariadb | 3052     | 2        | 6        | 1933   | 70648384  | 36548.0                     |
| 2a_oltp.mariadb | 3052     | 43       | 4        | 4099   | 150018800 | 36598.0                     |
| 2d_oltp.mariadb | 3052     | 0        | 4        | 2080   | 77161687  | 37096.0                     |
+-----------------+----------+----------+----------+--------+-----------+-----------------------------+
4 rows in set (0.00 sec)
```
```
mysql> SELECT * FROM stats_mysql_connection_pool;
+-----------+-----------------+----------+--------+----------+----------+--------+---------+-------------+-----------+-------------------+-----------------+-----------------+------------+
| hostgroup | srv_host        | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | MaxConnUsed | Queries   | Queries_GTID_sync | Bytes_data_sent | Bytes_data_recv | Latency_us |
+-----------+-----------------+----------+--------+----------+----------+--------+---------+-------------+-----------+-------------------+-----------------+-----------------+------------+
| 1         | 2a_oltp.mariadb | 3052     | ONLINE | 45       | 0        | 4097   | 0       | 62          | 149975306 | 0                 | 48544471112     | 187097398415    | 270        |
| 2         | 2c_oltp.mariadb | 3052     | ONLINE | 2        | 4        | 1555   | 0       | 10          | 55133002  | 0                 | 32800700913     | 20589949783     | 421        |
| 2         | 2b_oltp.mariadb | 3052     | ONLINE | 2        | 6        | 1933   | 0       | 14          | 70624781  | 0                 | 41584296177     | 27213051703     | 272        |
| 2         | 2d_oltp.mariadb | 3052     | ONLINE | 0        | 3        | 2079   | 0       | 13          | 77137911  | 0                 | 45755921665     | 31073609951     | 168        |
+-----------+-----------------+----------+--------+----------+----------+--------+---------+-------------+-----------+-------------------+-----------------+-----------------+------------+
4 rows in set (0.00 sec)
```
```
mysql> SELECT * FROM stats_mysql_global WHERE variable_name LIKE '%stmt%';
+---------------------------+----------------+
| Variable_Name             | Variable_Value |
+---------------------------+----------------+
| Com_backend_stmt_prepare  | 63914669       |
| Com_backend_stmt_execute  | 233922211      |
| Com_backend_stmt_close    | 0              |
| Com_frontend_stmt_prepare | 233883162      |
| Com_frontend_stmt_execute | 233923781      |
| Com_frontend_stmt_close   | 233876578      |
| Stmt_Client_Active_Total  | 3              |
| Stmt_Client_Active_Unique | 2              |
| Stmt_Server_Active_Total  | 93421          |
| Stmt_Server_Active_Unique | 77596          |
| Stmt_Max_Stmt_id          | 174002         |
| Stmt_Cached               | 82713          |
+---------------------------+----------------+
12 rows in set (0.01 sec)

mysql> show variables like "%stmt%";
+--------------------------------+-------+
| Variable_name                  | Value |
+--------------------------------+-------+
| mysql-max_stmts_cache          | 10000 |
| mysql-max_stmts_per_connection | 50    |
+--------------------------------+-------+
2 rows in set (0.00 sec)
```
```
MariaDB [(none)]> show status like 'Prepared_stmt_count';
+---------------------+--------+
| Variable_name       | Value  |
+---------------------+--------+
| Prepared_stmt_count | 220071 |
+---------------------+--------+
1 row in set (0.000 sec)

mysql> SELECT username, schemaname, count(*)
    -> FROM stats_mysql_prepared_statements_info
    -> GROUP BY 1, 2
    -> ORDER BY 3 DESC;
+----------+------------+----------+
| username | schemaname | count(*) |
+----------+------------+----------+
| app      | appprd     | 87573    |
+----------+------------+----------+
1 row in set (0.29 sec)
```
Will ProxySQL disable multiplexing for all queries that have @ in their query_digest? We are using ProxySQL 2.5.5 and MariaDB 10.6.18.

How can we stabilize this memory fragmentation permanently, without any significant impact on performance?
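Given that `Com_backend_stmt_close` is 0 while `Stmt_Server_Active_Total` keeps growing, one lever to test from the ProxySQL admin interface is lowering `mysql-max_stmts_per_connection` further, which caps how many backend prepared statements each pooled connection may accumulate (the value 20 below is only an example):

```sql
-- On the ProxySQL admin interface (typically port 6032):
UPDATE global_variables
SET variable_value = '20'
WHERE variable_name = 'mysql-max_stmts_per_connection';

LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;
```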
Gopinath Karangula
(933 rep)
Feb 27, 2025, 11:49 AM
• Last activity: Apr 11, 2025, 08:24 AM
0
votes
2
answers
794
views
Parameterized Query without Prepared Statements and PgBouncer
My app is written in Go and I use PgBouncer as a connection pooler in front of my Postgres database. One downside of PgBouncer in transaction mode is that I can't use prepared statements.

So I have 2 options: use session mode (which is bad), or disable prepared statements on the client side.

Is it safe (with regard to SQL injection) to not use prepared statements and use only parameterized queries, e.g. `Select item from products where id = ?` with `itemID` passed as the parameter?
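For clarity on terms: what breaks under transaction pooling is the *named* server-side prepared statement, which lives in one backend session. A one-shot parameterized query still sends the SQL text and the parameter values separately, so the injection-safety property is preserved as long as the driver sends real bind parameters rather than interpolating strings. A sketch of the named form that transaction mode cannot reliably support:

```sql
-- A named prepared statement is session state: under transaction
-- pooling the next transaction may run on a different backend that
-- has never seen "get_item".
PREPARE get_item (int) AS
    SELECT item FROM products WHERE id = $1;

EXECUTE get_item(42);
```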
Bill
(29 rep)
Jun 5, 2022, 11:44 AM
• Last activity: Nov 27, 2024, 01:00 AM
2
votes
1
answers
555
views
Error: type of parameter n (X) does not match that when preparing the plan (Y)
I'm encountering an issue with a PL/pgSQL function where it returns different data types based on a condition, and it's resulting in a type mismatch error. Here's a simplified version of the function:
```sql
CREATE OR REPLACE FUNCTION public.test(lever int)
RETURNS VARCHAR
LANGUAGE plpgsql
AS $function$
DECLARE
    _new_record RECORD;
BEGIN
    -- Evaluating the random condition and returning different strings
    IF (lever = 0) THEN
        SELECT * FROM
            (VALUES(uuid_generate_anura())) x(id)
        INTO _new_record;
    ELSE
        SELECT * FROM
            (VALUES(10)) x(id)
        INTO _new_record;
    END IF;
    RETURN pg_typeof(_new_record.id)::varchar;
END;
$function$;
```
Note that this is a very simplified version of my complete function. I'm interested in understanding why and how I can work around it.
When calling this function with `lever = 0`, it correctly returns the text `uuid`. However, when calling it with `lever = 1` to force the `ELSE` branch to execute, it throws an error:
```
postgres=# select test(0);
 test
------
 uuid
(1 row)

postgres=# select test(1);
ERROR:  type of parameter 4 (integer) does not match that when preparing the plan (uuid)
CONTEXT:  PL/pgSQL function test(integer) line 15 at RETURN
```
It doesn't matter which data type I use; the `ELSE` branch will always fail.
Roberto Iglesias
(47 rep)
Mar 13, 2024, 08:03 PM
• Last activity: Mar 15, 2024, 12:13 AM
-1
votes
1
answers
256
views
Embedding PHP variable into Sqlite statement
I have been struggling with this for hours. I am pretty new to SQLite and have been trying to write a prepared statement; everything works up until I try to get my variable in.
The following works, as does a direct `SELECT` with the string entered straight into the statement:

```php
$statement = $db->prepare('SELECT * FROM Users WHERE Name = ?');
$statement->bindValue(1, 'aName');
$result = $statement->execute();
echo("Get the 1st row as an associative array:\n");
print_r($result->fetchArray(SQLITE3_ASSOC));
echo("\n");
$result->finalize();
```
However, as soon as I bring a variable into play with `bindParam`, I do not get results, like so:

```php
$statement = $db->prepare('SELECT * FROM Users WHERE Name = ?');
$statement->bindParam('s', $myusername);
$result = $statement->execute();
echo("Get the 1st row as an associative array:\n");
print_r($result->fetchArray(SQLITE3_ASSOC));
echo("\n");
$result->finalize();
```
I have checked the `$_POST['user']` variable and it is pulling through. Any help that solves my issue or shows how I might debug it would be most welcome.

I am working in PhpStorm, just in case there is something specific to that IDE.

I have also tried with `Name = :Name` and binding with that.
Thanks in advance.
fcreav
(99 rep)
Mar 2, 2024, 01:38 AM
• Last activity: Mar 3, 2024, 12:25 AM
4
votes
2
answers
2860
views
Is it possible to obtain number of prepared statements a connection holds?
I have a heavy loaded application that holds connections persistently (it creates them on start and never releases). During the lifetime it creates and (re)uses prepared statements, which are stored in an LRU structure (so the least used are eventually released). And the question is: is it possible...
I have a heavy loaded application that holds connections persistently (it creates them on start and never releases).
During the lifetime it creates and (re)uses prepared statements, which are stored in an LRU structure (so the least used are eventually released).
And the question is: is it possible to obtain the number and other statistics of prepared statements for a given connection?
There is nothing relevant on http://www.postgresql.org/docs/9.3/static/monitoring-stats.html so not sure if there are other interfaces to access postgresql stats.
**UPD**: I probably was not clear enough: I need to obtain information using the normal DBA connection, since I cannot run arbitrary queries from the production software. For that very reason the
pg_prepared_statements
solution would not work.
zerkms
(143 rep)
Nov 27, 2015, 03:55 AM
• Last activity: Feb 4, 2024, 04:52 PM
2
votes
1
answers
99
views
MySQL statement to update repeating events
I need to create a single MySQL statement for speeding up the process of updating weekly repeating events in a database table I have.
Right now, the user is adding records to the table. Some records are for single date events and some records are events that repeat every 7 days within a defined start date and end date. Both types of records are entered into the same table.
Each event has a start date and an end date entered in two fields: 'date_start' and 'date_end'. For a single date event the user enters the same date in 'date_start' and 'date_end'. For a repeating event, they enter the date of the first occurrence in 'date_start' and the date of the last weekly occurrence in 'date_end'. Repeating events are marked 'Weekly_Event', as shown below in the sample data.
I need to take this table and make multiple copies of all records marked 'Weekly_Event'. One copy of the original record for each week between the start and end date.
In the example below, a record has the start date set as May 3 and the end date as June 7; I need a SQL statement that creates 6 new records from that record: May 3, May 10, May 17, May 24, May 31, June 7. There is also another 'Weekly_Event' and a 'Single_Day' event as samples.
MySQL version: 8.0.36
Table Type: InnoDB
Collation: latin1_swedish_ci
```sql
CREATE TABLE `original_shows_24` (
  `id` int NOT NULL AUTO_INCREMENT,
  `status` int NOT NULL DEFAULT '1',
  `date_start` date NOT NULL,
  `date_end` date NOT NULL,
  `name` varchar(200) NOT NULL,
  `details` mediumtext NOT NULL,
  `url` varchar(300) CHARACTER SET latin1 COLLATE latin1_swedish_ci NOT NULL DEFAULT 'https://',
  `slug` varchar(128) NOT NULL,
  `category` varchar(300) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `slug` (`slug`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
```
```sql
INSERT INTO `original_shows_24`
  (`id`, `status`, `date_start`, `date_end`, `name`, `details`, `url`, `slug`, `category`)
VALUES
  (1, 1, '2024-05-03', '2024-06-07', 'Weekly Event Name', 'Details about this event', 'https://link ', 'weekly-event', 'Weekly_Event'),
  (2, 1, '2024-05-09', '2024-08-18', 'Another Weekly Event', 'Details about this other weekly event that starts May 9 and goes every week until Aug 18', 'https://link ', 'another-weekly-event', 'Weekly_Event'),
  (3, 1, '2024-05-02', '2024-05-02', 'A regular Single Day Event', 'Details about this event ', 'https://link ', 'a-single-day-event', 'Single_Day');
```
After the operation, the records would mostly look like the below sample. I want the outputted date_start and date_end of the Weekly_Events equal because each of the newly generated events happen within their assigned single day.
```sql
CREATE TABLE `updated_shows_24` (
  `id` int NOT NULL AUTO_INCREMENT,
  `status` int NOT NULL DEFAULT '1',
  `date_start` date NOT NULL,
  `date_end` date NOT NULL,
  `name` varchar(200) NOT NULL,
  `details` mediumtext NOT NULL,
  `url` varchar(300) CHARACTER SET latin1 COLLATE latin1_swedish_ci NOT NULL DEFAULT 'https://',
  `slug` varchar(128) NOT NULL,
  `category` varchar(300) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `slug` (`slug`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
```
```sql
INSERT INTO `updated_shows_24`
  (`id`, `status`, `date_start`, `date_end`, `name`, `details`, `url`, `slug`, `category`)
VALUES
  (1, 0, '2024-05-03', '2024-06-07', 'Weekly Event Name', 'Details about this event', 'https://link ', 'weekly-event', 'Weekly_Event'),
  (2, 0, '2024-05-09', '2024-08-18', 'Another Weekly Event', 'Details about this other weekly event that starts May 9 and goes every week until Aug 18', 'https://link ', 'another-weekly-event', 'Weekly_Event'),
  (3, 1, '2024-05-02', '2024-05-02', 'A regular Single Day Event', 'Details about this event ', 'https://link ', 'a-single-day-event', 'Single_Day'),
  (101, 1, '2024-05-03', '2024-05-03', 'Weekly Event Name', 'Details about this event', 'https://link ', 'weekly-event-2024-05-03', 'Weekly_Event'),
  (102, 1, '2024-05-10', '2024-05-10', 'Weekly Event Name', 'Details about this event', 'https://link ', 'weekly-event-2024-05-10', 'Weekly_Event'),
  (103, 1, '2024-05-17', '2024-05-17', 'Weekly Event Name', 'Details about this event', 'https://link ', 'weekly-event-2024-05-17', 'Weekly_Event'),
  (104, 1, '2024-05-24', '2024-05-24', 'Weekly Event Name', 'Details about this event', 'https://link ', 'weekly-event-2024-05-24', 'Weekly_Event'),
  (105, 1, '2024-05-31', '2024-05-31', 'Weekly Event Name', 'Details about this event', 'https://link ', 'weekly-event-2024-05-31', 'Weekly_Event'),
  (106, 1, '2024-06-07', '2024-06-07', 'Weekly Event Name', 'Details about this event', 'https://link ', 'weekly-event-2024-06-07', 'Weekly_Event'),
  (107, 1, '2024-05-09', '2024-05-09', 'Another Weekly Event', 'Details about this other weekly event that starts May 9 and goes every week until Aug 18', 'https://link ', 'another-weekly-event-2024-05-09', 'Weekly_Event'),
  (108, 1, '2024-05-16', '2024-05-16', 'Another Weekly Event', 'Details about this other weekly event that starts May 9 and goes every week until Aug 18', 'https://link ', 'another-weekly-event-2024-05-16', 'Weekly_Event');
```
The 'id' for the new events should be unique and ideally just be auto-generated from the current next 'id' in 'original_shows_24'. In my example I am assuming that 'id' is in use in 'original_shows_24' up to 100 by other records.

Note the 'status' of the events used to generate the repeating events: it is set to '0' in the records that have been processed ('date_start' not equal to 'date_end' and 'category' = 'Weekly_Event').

Another element of the final output that needs to be considered is the generated slug. The slug field is used to generate the link to the unique sub-page for that event, so it needs to be updated in this operation as well. Note the format in the sample data: basically I need to append a dash ('-') and 'date_start' to the end of the slug for each of the new records generated. All entries in 'slug' must be unique.

It also needs to generate the correct date as the month changes. For example, the last event in the sample data is June 7, which is 7 days after May 31. If MySQL automatically takes care of this in date operations, then that is great.

I have only created 'updated_shows_24' to show the output. If the output can just be put back into 'original_shows_24' (with the appropriate records' status changed from '1' to '0'), that is best. Also, 'updated_shows_24' only shows the first two generated records of the other repeating event, just to illustrate the desired output. Since the end date of that repeating event is 2024-08-18, the solution would output many more records than the two I have included.
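Not a full answer key, but a sketch of the usual shape of this in MySQL 8.0, as described above: a recursive CTE expands each unprocessed 'Weekly_Event' row into one row per week, `INSERT ... SELECT` writes the copies back with the date appended to the slug, and an `UPDATE` then flips the processed originals to status 0. Column handling is taken from the sample data; verify against the real table before running:

```sql
INSERT INTO original_shows_24
    (status, date_start, date_end, name, details, url, slug, category)
WITH RECURSIVE weeks AS (
    SELECT id, date_start AS d, date_end
    FROM original_shows_24
    WHERE category = 'Weekly_Event' AND status = 1
      AND date_start <> date_end
    UNION ALL
    SELECT id, d + INTERVAL 7 DAY, date_end
    FROM weeks
    WHERE d + INTERVAL 7 DAY <= date_end
)
SELECT 1, w.d, w.d, o.name, o.details, o.url,
       CONCAT(o.slug, '-', w.d),    -- e.g. weekly-event-2024-05-03
       o.category
FROM weeks w
JOIN original_shows_24 o ON o.id = w.id;

-- Mark the expanded originals as processed; the new single-day rows
-- have date_start = date_end, so they are not touched.
UPDATE original_shows_24
SET status = 0
WHERE category = 'Weekly_Event' AND status = 1
  AND date_start <> date_end;
```

DATE arithmetic with `INTERVAL 7 DAY` rolls over month boundaries automatically (2024-05-31 + 7 days is 2024-06-07), and `AUTO_INCREMENT` supplies the new ids.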
Thanks for any help.
hamilton9
(23 rep)
Feb 1, 2024, 11:06 PM
• Last activity: Feb 2, 2024, 06:42 PM
3
votes
1
answers
619
views
What exactly does PostgreSQL (or other databases) do internally when you "prepare" a query rather than just running it directly?
When I face a huge number of repeated INSERTs in a loop, I tend to first create a "prepare skeleton query" prior to the loop, and in the loop, I simply "execute" this prepared query and send it all the values. I heard long ago, and can understand in some abstract sense, that this is more optimized than just having the loop with a normal parameterized query in it.
However, I don't understand what exactly PG is doing that makes it so much faster. If it even is so much faster. I frankly have never done any real benchmarks to compare the two different methods.
Also, shouldn't PG be "warmed up" to a query once you repeat it several (let alone many) times in very short succession, and perhaps do the same thing as I do manually, but internally, when I just use a normal parameterized query?
I find myself constantly second-guessing what the database is up to internally. I have no real grasp of how "smart" it is. I fear that I'm doing a lot of things which are meaningless because it's already taken care of internally by its smart code.
Maybe "preparing" and "executing" is an archaic practice which has no real benefit these days?
It sounds like PG is basically allocating resources to "prepare" for upcoming huge amounts of similar INSERT queries, but I don't understand what exactly it would do differently compared to just executing them one by one. I also don't understand what the point would be of a non-INSERT prepared query/statement.
*PS: Just to clear up any confusion: I never use non-parameterized queries, whether they are prepared or not. A lot of people confuse "parameterized queries" and "prepared statements". Even I called the latter "prepared queries" in this question...*
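For reference, the SQL-level form of the pattern described above: `PREPARE` makes the server parse, analyze, and rewrite the statement once, and each `EXECUTE` skips straight to planning and execution (after several runs the planner may also settle on a cached generic plan). A minimal sketch with illustrative names:

```sql
-- Parse/analyze/rewrite happen once, here:
PREPARE ins (int, text) AS
    INSERT INTO items (id, label) VALUES ($1, $2);

-- Each EXECUTE reuses the parsed statement:
EXECUTE ins(1, 'first');
EXECUTE ins(2, 'second');

DEALLOCATE ins;
```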
user214427
Aug 21, 2020, 03:40 PM
• Last activity: Jan 29, 2024, 06:52 PM
-1
votes
2
answers
624
views
executing SQL query with temp tables using 1.6jdk
I created a batch to run a query that uses temp tables, like the example below:

```sql
IF OBJECT_ID('tempdb..#TEMP1') IS NOT NULL
    DROP TABLE #TEMP1

SELECT *
INTO #TEMP1
FROM customer

IF OBJECT_ID('tempdb..#TEMP2') IS NOT NULL
    DROP TABLE #TEMP2

SELECT *
INTO #TEMP2
FROM shops

SELECT t1.*
FROM #TEMP1 t1, #TEMP2 t2 WHERE t1.id = t2.id
```
I am using the Java code below:

```java
PreparedStatement statement = connection.prepareStatement(query);
ResultSet rs = statement.executeQuery();
```

It works fine when I use a 1.8 JRE, but it throws the following exception with 1.6, and I have to use 1.6. I am using sqljdbc4.jar only.
>com.microsoft.sqlserver.jdbc.SQLServerException: The statement did not return a result set.
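A plausible culprit, given that batch: each `DROP TABLE` and `SELECT ... INTO` emits an update count before the final result set, and `executeQuery` fails if the first thing returned is not a result set; older driver/JRE combinations are stricter about this. A common mitigation (a sketch, not verified against this exact driver version) is to suppress the counts at the top of the batch:

```sql
SET NOCOUNT ON;  -- suppress "n rows affected" counts so the final
                 -- SELECT is the only result the driver sees

IF OBJECT_ID('tempdb..#TEMP1') IS NOT NULL
    DROP TABLE #TEMP1;
SELECT * INTO #TEMP1 FROM customer;

IF OBJECT_ID('tempdb..#TEMP2') IS NOT NULL
    DROP TABLE #TEMP2;
SELECT * INTO #TEMP2 FROM shops;

SELECT t1.* FROM #TEMP1 t1, #TEMP2 t2 WHERE t1.id = t2.id;
```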
user2074325
(1 rep)
Mar 15, 2019, 10:45 PM
• Last activity: Jan 17, 2024, 05:15 AM
2
votes
3
answers
3922
views
Prepared statements with pgBouncer
I have a PostgreSQL server with pgBouncer as connection pooler.
My application is running on Elixir.
This is my config file for pgBouncer:
```
* = host=X.X.X.X port=5432

logfile = /var/log/postgresql/pgbouncer.log
pidfile = /var/run/postgresql/pgbouncer.pid
listen_addr = 0.0.0.0
listen_port = 6432
unix_socket_dir = /var/run/postgresql
auth_type = trust
auth_file = /etc/pgbouncer/userlist.txt
admin_users = admin
pool_mode = transaction
ignore_startup_parameters = extra_float_digits
server_check_query = select 1
server_check_delay = 30
max_client_conn = 10000
default_pool_size = 5
min_pool_size = 3
reserve_pool_size = 3
server_reset_query = DEALLOCATE ALL;
```
When I use the *transaction* pooling I'm getting this error:
ERROR 08P01 (protocol_violation) bind message supplies 4 parameters, but prepared statement "ecto_323" requires 2
Then I changed it to *session* pooling mode. This time I'm getting this error:
ERROR 26000 (invalid_sql_statement_name) prepared statement "ecto_83" does not exist
How do I fix this from pgBouncer?
When I connect to the DB directly, I don't see any errors. The application ran for more than a year without any proxy; we are implementing PgBouncer now.
TheDataGuy
(1986 rep)
Aug 25, 2020, 11:18 AM
• Last activity: Nov 3, 2023, 09:42 AM
1
votes
2
answers
1714
views
MariaDB: Create dynamic PrepareStatement inside a Procedure
I am trying to write a stored procedure that returns the records of a table matching filters which can be applied in layers. The procedure receives certain variables as parameters, and I want to construct a PrepareStatement that adds the non-null variables as filters. I am using **MariaDB 10.6.2**.
The table I am working on (removing the foreign keys) looks like:
```sql
CREATE OR REPLACE TABLE Thesis_Detail(
    thesis_id INT UNSIGNED PRIMARY KEY NOT NULL AUTO_INCREMENT,
    title VARCHAR(255) NOT NULL,
    year SMALLINT NOT NULL,
    file VARCHAR(255) NOT NULL UNIQUE,
    abstract TEXT NOT NULL,
    uploaded_datetime DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    INDEX(year),
    FULLTEXT(title)
) DEFAULT CHARACTER SET utf8mb4;
```
The goal itself is to create it this way
```sql
DELIMITER //
CREATE OR REPLACE PROCEDURE UThesis.searchThesisByFilters(
    IN year_in SMALLINT,
    IN title_in VARCHAR(255),
    IN limit_in TINYINT,
    IN offset_in TINYINT
)
BEGIN
    DECLARE first BIT DEFAULT 0;
    SET @sql = 'SELECT TD.title AS title,' ||
               'TD.year AS year,' ||
               'TD.file AS path,' ||
               'TD.abstract AS abstract,' ||
               'TD.thesis_id AS thesis_id ' ||
               'FROM Thesis_Detail TD ';
    IF NOT ISNULL(title_in) THEN
        SET first = 1;
        SET @sql = @sql + ' WHERE MATCH(title) AGAINST(? IN NATURAL LANGUAGE MODE)';
    END IF;
    IF NOT ISNULL(year_in) THEN
        IF first THEN
            SET @sql = @sql + ' WHERE';
        ELSE
            SET @sql = @sql + ' AND';
        END IF;
        SET @sql = @sql + ' TD.year = ?';
    END IF;
    SET @sql = @sql + ' LIMIT ? OFFSET ?';
    PREPARE stmt FROM @sql;
    EXECUTE stmt using title_in, year_in, limit_in, offset_in;
    DEALLOCATE PREPARE stmt;
END //
DELIMITER ;
```
The problem is that the following line would need to be dynamic; that is, it may or may not have `title_in` or `year_in`:

```sql
EXECUTE stmt using title_in, year_in, limit_in, offset_in;
EXECUTE stmt using year_in, limit_in, offset_in;
EXECUTE stmt using title_in, limit_in, offset_in;
EXECUTE stmt using limit_in, offset_in;
```
This example could be solved with combinations of whether one or two are null, but I have to apply more filters: in total there are **5 filters**, and writing a case for each combination ends up being terrible. Any ideas how I can achieve this?

In the first link they make use of CONCAT, but I don't know if that makes the procedure vulnerable to SQL injection:
```sql
CREATE OR REPLACE PROCEDURE UThesis.searchThesisByFilters(
    IN year_in SMALLINT,
    IN title_in VARCHAR(255),
    IN limit_in TINYINT,
    IN offset_in TINYINT
)
BEGIN
    DECLARE first BIT DEFAULT 0;
    SET @sql = 'SELECT TD.title AS title,' ||
               'TD.year AS year,' ||
               'TD.file AS path,' ||
               'TD.abstract AS abstract,' ||
               'TD.thesis_id AS thesis_id ' ||
               'FROM Thesis_Detail TD ';
    IF NOT ISNULL(title_in) THEN
        SET first = 1;
        SET @sql = @sql + ' WHERE MATCH(title) AGAINST(? IN NATURAL LANGUAGE MODE)';
    END IF;
    IF NOT ISNULL(title_in) THEN
        IF first THEN
            SET @sql = @sql + ' WHERE';
        ELSE
            SET @sql = @sql + ' AND';
        END IF;
        SET @sql = @sql + CONCAT(' TD.year = ', year_in);
    END IF;
    SET @sql = @sql + CONCAT(' LIMIT', limit_in, ' OFFSET ', offset_in);
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END //
```
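One pattern that avoids both the combinatorial EXECUTE problem and the CONCAT injection concern is to keep the statement text fixed and make every filter optional inside the SQL itself, so the `USING` list always has the same arity. A sketch in the spirit of the procedure above (only two filters shown; the same shape extends to all 5):

```sql
SET @sql = 'SELECT TD.title, TD.year, TD.file, TD.abstract, TD.thesis_id
            FROM Thesis_Detail TD
            WHERE (? IS NULL OR MATCH(TD.title) AGAINST(? IN NATURAL LANGUAGE MODE))
              AND (? IS NULL OR TD.year = ?)
            LIMIT ? OFFSET ?';
PREPARE stmt FROM @sql;
-- Each optional parameter is bound twice because it appears twice;
-- passing NULL simply disables that filter.
EXECUTE stmt USING title_in, title_in, year_in, year_in, limit_in, offset_in;
DEALLOCATE PREPARE stmt;
```

The trade-off is that the optimizer must plan for the general case, so very selective filters may not use their indexes as well as a purpose-built statement would.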
edjmir
(125 rep)
Jun 26, 2021, 12:24 AM
• Last activity: Jul 11, 2023, 05:40 PM
50
votes
1
answers
34936
views
SQL injection in Postgres functions vs prepared queries
In Postgres, are prepared queries and user defined functions equivalent as a **mechanism for guarding against SQL injection**?
Are there particular advantages in one approach over the other?
user4930
Sep 11, 2013, 08:14 PM
• Last activity: Apr 9, 2023, 04:33 PM
0
votes
1
answers
920
views
Do prepared statements give a performance advantage when running multiple times with similar parameters?
I've used prepared statements (MySQL & PHP) for many years, but only with an eye to being safe against SQL injection. I've never re-used a prepared statement, I've always built them up every single time I want something back from the database.
I currently have a query that runs inside a loop. Each iteration of the loop I build the SQL string, prepare the query, and execute it with the relevant value for the bound parameter.
Someone mentioned that if I instead prepared the query outside of the loop, and inside the loop only bound the new parameter value and executed it, I'd see a performance increase. And not just the nanoseconds that you'd expect from not having to build a string up each loop.
But I ran a test, and I didn't seem to.
Should I have?
I'm talking about MySQL/MariaDB. He uses SQL Server.
Codemonkey
(265 rep)
Feb 9, 2023, 04:02 PM
• Last activity: Feb 9, 2023, 04:50 PM
8
votes
1
answers
6576
views
sp_prepexec (sp_execute) vs. sp_executeSQL
The meat of the question: are actual stored procedures the only mechanism that implements temp table caching, or do system stored procedures such as `sp_executeSQL` / `sp_execute` also take advantage of it?

I am not a DBA, so please use little words. Our application sends over prepared statements which, from the profiler, I see run all SQL through `sp_prepexec`, a system procedure that combines `sp_prepare` and `sp_execute`. What I'm trying to figure out is whether I am benefiting from temp table caching.
I've been using this guide, with OBJECT_ID(), to examine the behavior:
https://sqlkiwi.blogspot.com/2012/08/temporary-tables-in-stored-procedures.html

Point #3 in this blog post suggests that EXEC cannot use temp table caching, but leaves out whether sp_executeSQL can: [Link](https://learn.microsoft.com/en-us/archive/blogs/turgays/exec-vs-sp_executesql)
In my query sent over via the client I have created a simple temp table:

```sql
DECLARE @foo int; -- set by JDBC, unused but required to force a prepared statement
SELECT 1 AS id
INTO #tmp
SELECT OBJECT_ID('tempdb..#tmp');
```
In Profiler, I can see:

```sql
declare @p1 int
set @p1=NULL
exec sp_prepexec @p1 output,N'@P1 int',N'declare @foo INT = @P1
SELECT 1 as id
into #tmp
select Object_id(''tempdb..#tmp'');
DROP TABLE #tmp;',1
select @p1
```
I also get a cache hit from this. However, the object_id of the temp table changes between runs, which is not the behavior I would see if the temp table were created in a real stored procedure. When I run the same code through `sp_executeSQL`, the object_id of the temp table also changes. This leads me to believe that only "real" user-created stored procedures take advantage of temp table caching.
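For a baseline comparison, the stored-procedure behavior from the linked article can be reproduced with a throwaway procedure; if temp table caching applies, the reported object id should stay stable across executions (names here are illustrative):

```sql
CREATE PROCEDURE dbo.TempCacheDemo
AS
BEGIN
    SELECT 1 AS id INTO #tmp;
    SELECT OBJECT_ID('tempdb..#tmp') AS tmp_object_id;
    DROP TABLE #tmp;
END;
GO

EXEC dbo.TempCacheDemo;  -- run this several times: with caching the
EXEC dbo.TempCacheDemo;  -- same tmp_object_id comes back each call
```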
J.T.
(203 rep)
Dec 11, 2014, 06:06 PM
• Last activity: Jan 21, 2023, 11:00 AM
1
votes
0
answers
194
views
MySQL not throwing some errors in prepared statement
If I run the following query:

```sql
DELETE FROM `table` WHERE `id` = '123%7D';
```

*Notice the gibberish at the end of the WHERE clause.*

I get the following error:

> ERROR 1292 (22007): Truncated incorrect DOUBLE value: '123%7D'

which is the expected behavior, since I'm running a MySQL (5.7.33-log) database in strict mode.
**BUT**
If I run the same query in a prepared statement:

```sql
PREPARE stmt FROM "delete from `table` where `id` = ?;";
SET @id = '1234%7D';
EXECUTE stmt USING @id;
```
the query runs fine without any error, which in my view is not expected behavior.

My question is: how can I make prepared queries throw the same errors that they would throw if they were not prepared?

Thanks in advance.
My **@@sql_mode** value is:
ONLY_FULL_GROUP_BY,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
Marlon de Oliveira dos Santos
(11 rep)
May 26, 2022, 04:08 PM
• Last activity: May 26, 2022, 04:41 PM
0
votes
1
answers
930
views
Postgres changes query plan for prepared statement to something poor after multiple invocations
Postgres 13.4
I have a prepared statement that is similar to this:
```sql
PREPARE my_statement AS
SELECT * FROM products
WHERE (normalized_name = $1 OR distinct_normalized_names @> array['$1'])
  AND name_matchable = TRUE
ORDER BY name ASC, disambiguation ASC NULLS FIRST, official ASC,
         release_date DESC, product_code ASC, language ASC
LIMIT 1
```
I have a BTree index on `name`, `normalized_name`, and `name_matchable`, and a GIST index on `distinct_normalized_names`. I also have a BTree index on the ordering portion of this query (`name ASC, disambiguation ASC NULLS FIRST, official ASC, release_date DESC, product_code ASC, language ASC`), mostly for purposes not related to the prepared statement.
When I prepare this statement and run `EXPLAIN ANALYZE` on it, I see a reasonable plan, and the prepared statement executes very fast:

```
EXPLAIN ANALYZE EXECUTE my_statement('example');

Limit  (cost=105.26..105.26 rows=1 width=1706) (actual time=0.056..0.057 rows=0 loops=1)
  ->  Sort  (cost=105.26..105.28 rows=48 width=1706) (actual time=0.055..0.056 rows=0 loops=1)
        Sort Key: name, disambiguation NULLS FIRST, official, release_date DESC, product_code, language
        Sort Method: quicksort  Memory: 25kB
        ->  Bitmap Heap Scan on products  (cost=10.16..105.21 rows=48 width=1706) (actual time=0.043..0.044 rows=0 loops=1)
              Recheck Cond: ((normalized_name = 'example'::text) OR (distinct_normalized_names @> '{example}'::text[]))
              Filter: name_matchable
              ->  BitmapOr  (cost=10.16..10.16 rows=48 width=0) (actual time=0.042..0.042 rows=0 loops=1)
                    ->  Bitmap Index Scan on index_products_on_normalized_name  (cost=0.00..2.11 rows=20 width=0) (actual time=0.018..0.018 rows=0 loops=1)
                          Index Cond: (normalized_name = 'example'::text)
                    ->  Bitmap Index Scan on index_products_on_distinct_normalized_names_gin  (cost=0.00..8.04 rows=28 width=0) (actual time=0.023..0.023 rows=0 loops=1)
                          Index Cond: (distinct_normalized_names @> '{example}'::text[])
Planning Time: 1.624 ms
Execution Time: 0.157 ms
```
However, if I invoke the prepared statement around ten times or more, Postgres suddenly switches to a much worse plan, driven by a field I did not filter on in the WHERE clause, and it stays that way until I `DEALLOCATE` the prepared statement:

```
-- After running the statement 10 or so times, the plan below is produced
EXPLAIN ANALYZE EXECUTE my_statement('example');

Limit  (cost=53.67..104.57 rows=1 width=1706) (actual time=763.908..763.909 rows=0 loops=1)
  ->  Incremental Sort  (cost=53.67..87248.70 rows=1713 width=1706) (actual time=763.906..763.907 rows=0 loops=1)
        Sort Key: name, disambiguation NULLS FIRST, official, release_date DESC, product_code, language
        Presorted Key: name
        Full-sort Groups: 1  Sort Method: quicksort  Average Memory: 25kB  Peak Memory: 25kB
        ->  Index Scan using index_products_on_name on products  (cost=0.08..87228.90 rows=1713 width=1706) (actual time=763.888..763.888 rows=0 loops=1)
              Filter: (name_matchable AND ((normalized_name = $1) OR (distinct_normalized_names @> ARRAY[$1])))
              Rows Removed by Filter: 338960
Planning Time: 0.015 ms
Execution Time: 764.064 ms
```
As you can see, the total time ballooned from ~3 ms to 760 ms. This prepared statement is used heavily, so this isn't ideal. I can watch this problem happen in real time on a database under load by `PREPARE`-ing the statement, running it about 10 times, then seeing the performance plummet and the `EXPLAIN` output change. It looks like Postgres suddenly decides that the prepared statement should use a different index and a much different strategy, with worse performance.
I was able to work around this issue by not using a prepared statement at all, and I don't see the issue occur if I use raw SQL, even after repeated invocations.
Why does Postgres decide to change the plan for the statement after repeated use?
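This matches the documented custom-plan/generic-plan switch: after roughly five executions the server compares the average cost of the custom plans against the cost of a cached generic plan and, if the generic plan's estimate looks no worse, sticks with it, even when that estimate is badly off for skewed data. Since PostgreSQL 12 the behavior can be pinned per session (a sketch):

```sql
-- Force a fresh, parameter-aware plan on every EXECUTE in this session
-- instead of switching to the cached generic plan after ~5 runs:
SET plan_cache_mode = force_custom_plan;

EXPLAIN ANALYZE EXECUTE my_statement('example');
```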
Karew
(182 rep)
Sep 13, 2021, 08:03 AM
• Last activity: Sep 13, 2021, 06:59 PM
1
votes
1
answers
67
views
Speed up Postgres OLTP - eliminate plan time
We have an application with millions of similar queries that are executed in the following way:

```sql
UPDATE
    flow.thread tr
SET
    state = $1,
    updated_at = now()
WHERE
    tr.conversation_number = $2
    AND tr.partner_number = $3
    AND tr.sender_number = $4
    AND tr.channel = $5
    AND tr.status IN ($6,$7,$8)
```
As you may see, this query is parameterized, so does that mean it's already prepared? In the `pg_stat_statements` view I can see that this query has 1,065,893 calls and 1,065,893 plans, so it looks like we did not get rid of plan-building or parsing time.

Could you help me figure this out: does a parameterized query mean that the statement is PREPAREd?
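For contrast, an explicitly PREPAREd server-side statement would look like the sketch below (the parameter types are placeholders, since the real column types aren't shown). Driver-level parameterization alone produces the `$n` text that `pg_stat_statements` displays, but it does not by itself mean the server kept a reusable statement, which fits the observed 1:1 ratio of plans to calls:

```sql
PREPARE upd (text, text, text, text, text, text, text, text) AS
    UPDATE flow.thread tr
    SET state = $1, updated_at = now()
    WHERE tr.conversation_number = $2
      AND tr.partner_number = $3
      AND tr.sender_number = $4
      AND tr.channel = $5
      AND tr.status IN ($6, $7, $8);

-- Repeated EXECUTEs in the same session are parsed once and may
-- eventually settle on a cached generic plan instead of replanning:
EXECUTE upd('open', 'c-1', 'p-1', 's-1', 'sms', 'new', 'active', 'held');
```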
Mikhail Aksenov
(430 rep)
Aug 13, 2021, 07:44 AM
• Last activity: Aug 17, 2021, 06:22 AM
1
votes
3
answers
1793
views
When do I need to DEALLOCATE a prepared statement in a MySQL stored procedure?
My question is simple. In what situations is it better to deallocate the prepared statement and thus removing it from memory and in what situations is it better to keep it in memory?
I have written a stored procedure that uses two prepared statements. One of the first few lines is to prepare the statement like so:
```sql
PREPARE selectStatementByJournalEntryAndLanguageQuery
FROM "INSERT INTO results(id, content, title, language_id, journalentry_id)
      SELECT *
      FROM JournalEntryTitleAndContent jetc
      WHERE jetc.language_id = ?
        AND jetc.journalentry_id = ?
      LIMIT 1";

PREPARE selectStatementByJournalEntryQuery
FROM "INSERT INTO results(id, content, title, language_id, journalentry_id)
      SELECT * FROM JournalEntryTitleAndContent jetc
      WHERE jetc.journalentry_id = ?
      LIMIT 1";
```
Now this stored procedure, and thus these prepared statements, could be used many times a day by many users. In this case it is better not to call `DEALLOCATE` on either prepared statement, right? That way the prepared statements remain in memory and do not have to be recompiled when someone else uses the stored procedure.

But that raises a question about the two `PREPARE` lines. If I do not deallocate the prepared statements at the end and someone else starts the stored procedure, will the database then try to `PREPARE` the statements again, or will it ignore these two lines when it detects that the statements were already prepared during another session?
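For background, a quick sketch of the scoping rule in question: a MySQL prepared statement is created per session, so a statement prepared in one connection is never visible to another, and each new session running the procedure prepares its own copies:

```sql
-- Session A
PREPARE s FROM 'SELECT 1';
EXECUTE s;   -- works

-- Session B (a different connection)
EXECUTE s;   -- ERROR 1243: Unknown prepared statement handler (s) given
```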
Thank you.
Maurice
(147 rep)
Mar 3, 2020, 01:40 PM
• Last activity: Jul 10, 2021, 12:50 PM