Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
67
views
Slow queries on JSON_TABLE based view on mariaDB
We are running an instance of Apache Syncope with around 130k users (ca. 33k with active or pending state). When searching for users by attributes we see long query times, often over 50 seconds. One example query might be:
select * from user_search_attr
where schema_id = 'familyName'
and stringvalue = 'Bergmann'
user_search_attr is a view defined as follows:
CREATE OR REPLACE
ALGORITHM = UNDEFINED VIEW user_search_attr AS
select
    u.id AS any_id,
    attrs.schema_id AS schema_id,
    attrs.booleanvalue AS booleanvalue,
    attrs.datevalue AS datevalue,
    attrs.doublevalue AS doublevalue,
    attrs.longvalue AS longvalue,
    attrs.stringvalue AS stringvalue
from
    (SyncopeUser u
     left join JSON_TABLE(coalesce(u.plainAttrs, '[]'), '$[*]'
         COLUMNS (
             schema_id varchar(255) PATH '$.schema',
             NESTED PATH '$.values[*]'
                 COLUMNS (
                     booleanvalue int(11) PATH '$.booleanValue',
                     datevalue varchar(32) PATH '$.dateValue',
                     doublevalue double PATH '$.doubleValue',
                     longvalue bigint(20) PATH '$.longValue',
                     stringvalue varchar(255) PATH '$.stringValue'))) attrs
     on (1 = 1))
where
    attrs.schema_id is not null
    and (attrs.booleanvalue is not null
         or attrs.datevalue is not null
         or attrs.doublevalue is not null
         or attrs.longvalue is not null
         or attrs.stringvalue is not null);
As you can see the data comes from the table SyncopeUser, which is defined as follows:
CREATE TABLE SyncopeUser (
    id varchar(36) NOT NULL,
    creationContext varchar(255) DEFAULT NULL,
    creationDate datetime(3) DEFAULT NULL,
    creator varchar(255) DEFAULT NULL,
    lastChangeContext varchar(255) DEFAULT NULL,
    lastChangeDate datetime(3) DEFAULT NULL,
    lastModifier varchar(255) DEFAULT NULL,
    status varchar(255) DEFAULT NULL,
    changePwdDate datetime(3) DEFAULT NULL,
    cipherAlgorithm varchar(20) DEFAULT NULL,
    failedLogins int(11) DEFAULT NULL,
    lastLoginDate datetime(3) DEFAULT NULL,
    mustChangePassword int(11) DEFAULT NULL,
    password varchar(255) DEFAULT NULL,
    passwordHistory text DEFAULT NULL,
    securityAnswer varchar(255) DEFAULT NULL,
    suspended int(11) DEFAULT NULL,
    token text DEFAULT NULL,
    tokenExpireTime datetime(3) DEFAULT NULL,
    username varchar(255) DEFAULT NULL,
    plainAttrs longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_bin DEFAULT NULL CHECK (json_valid(plainAttrs)),
    REALM_ID varchar(36) DEFAULT NULL,
    SECURITYQUESTION_ID varchar(36) DEFAULT NULL,
    PRIMARY KEY (id),
    UNIQUE KEY U_SYNCPSR_USERNAME (username),
    UNIQUE KEY SyncopeUser_username (username),
    KEY SECURITYQUESTION_ID (SECURITYQUESTION_ID),
    KEY SyncopeUser_realm_id (REALM_ID),
    CONSTRAINT SyncopeUser_ibfk_1 FOREIGN KEY (REALM_ID) REFERENCES Realm (id),
    CONSTRAINT SyncopeUser_ibfk_2 FOREIGN KEY (SECURITYQUESTION_ID) REFERENCES SecurityQuestion (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3 COLLATE=utf8mb3_general_ci;
The relevant column plainAttrs contains the user's attributes as a JSON string. One example might be: [{"uniqueValue":{"stringValue":"123456789"},"schema":"lockSystemId"},{"values":[{"stringValue":"Bergmann"}],"schema":"familyName"}]
The following indexes are defined on SyncopeUser:
Table |Non_unique|Key_name |Seq_in_index|Column_name |Collation|Cardinality|Sub_part|Packed|Null|Index_type|Comment|Index_comment|Ignored|
-----------+----------+--------------------+------------+-------------------+---------+-----------+--------+------+----+----------+-------+-------------+-------+
SyncopeUser| 0|PRIMARY | 1|id |A | 149635| | | |BTREE | | |NO |
SyncopeUser| 0|U_SYNCPSR_USERNAME | 1|username |A | 149635| | |YES |BTREE | | |NO |
SyncopeUser| 0|SyncopeUser_username| 1|username |A | 149635| | |YES |BTREE | | |NO |
SyncopeUser| 1|SECURITYQUESTION_ID | 1|SECURITYQUESTION_ID|A | 1| | |YES |BTREE | | |NO |
SyncopeUser| 1|SyncopeUser_realm_id| 1|REALM_ID |A | 1| | |YES |BTREE | | |NO |
As most normal users have over 15 attributes, the user_search_attr view contains over 2M rows.
The servers are VMs with currently 8 GB of RAM and 6 CPUs, but this can be changed. We are currently running MariaDB 11.4.7 in a Galera cluster of 3 servers. Syncope is only connected to one of the servers, as this cluster is also used for other applications and we don't want to influence them.
Here are some of the IMHO relevant configuration options:
[mysqld]
aria_pagecache_buffer_size = 32M
basedir = /usr
bind-address = ::
binlog_format = ROW
character_set_server = utf8
collation_server = utf8_general_ci
innodb_adaptive_hash_index = OFF
innodb_autoinc_lock_mode = 2
innodb_buffer_pool_instances = 1
innodb_buffer_pool_size = 4096M
innodb_flush_log_at_trx_commit = 0
innodb_log_buffer_size = 16MB
innodb_log_file_size = 512M
key_buffer_size = 1M
log-error = /var/log/mariadb/mariadb.log
log_queries_not_using_indexes = OFF
long_query_time = 4.0
max_allowed_packet = 128M
max_binlog_size = 100M
max_connections = 400
max_heap_table_size = 256M
performance_schema = ON
query_cache_limit = 16M
query_cache_size = 0
query_cache_type = OFF
skip-external-locking
skip-name-resolve = 1
slow_query_log = 0
slow_query_log_file = /var/log/mariadb/slow.log
sort_buffer_size = 4M
table_definition_cache = 800
table_open_cache = 1000
thread_cache_size = 8
thread_stack = 256K
tmp_table_size = 256M
wsrep_on = ON
wsrep_provider = /usr/lib64/galera-4/libgalera_smm.so
As most queries only search for active or pending users, one option that I have not tried is indexing or partitioning the table on the status column.
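For reference, a minimal sketch of that untried idea, assuming an index-level change to the Syncope schema is acceptable (idx_user_status is a hypothetical name):
-- hypothetical index on the status column (untried, per the question)
ALTER TABLE SyncopeUser ADD INDEX idx_user_status (status);
-- searches could then pre-filter on the indexed column before expanding plainAttrs
select u.id from SyncopeUser u where u.status in ('active', 'pending');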
Currently we are not in the position of changing the design of the database or even switching to another IDM solution. We hope to find optimization potential on the DB layer or maybe some improvements to some queries.
Clemens Bergmann
(111 rep)
Aug 3, 2025, 06:44 AM
• Last activity: Aug 5, 2025, 08:07 AM
0
votes
2
answers
144
views
MariaDB Spider Engine UDF spider_bg_direct_sql can't find TEMPORARY table
I am testing the Spider engine in MariaDB. In version mariadb-10.1.12-linux-x86_64 the UDF works, but there are some issues, so I want to upgrade to a newer version. However, in mariadb-10.2.18-linux-x86_64, mariadb-10.3.12-linux-x86_64, mariadb-10.3.9-linux-x86_64 and mariadb-10.4.1-linux-x86_64 the Spider UDF [spider_bg_direct_sql](https://mariadb.com/kb/en/library/spider_bg_direct_sql/) can't find the TEMPORARY table, even though I can select that temporary table outside of the UDF.
Does anyone know how to solve this?
CREATE TEMPORARY TABLE test.test1(
IMSI VARCHAR(20) NOT NULL,
DATA_DATE DATE NOT NULL,
PU_ID MEDIUMINT(9) NOT NULL DEFAULT 0,
DATA_DATE_TS INT(11) NOT NULL,
TECH_MASK TINYINT(2) NOT NULL,
PRIMARY KEY (IMSI, DATA_DATE, PU_ID, DATA_DATE_TS, TECH_MASK)
)
SELECT * FROM test.test1 #this command can find the table
SELECT spider_bg_direct_sql('select * from gt_global_imsi.table_imsi_pu'
, 'test.test1'
, 'srv "PU1", port "3308"'
)
#error msg
#Error Code: 12703
#Temporary table 'test.test1' is not found
My spider settings:
Variable_name Value
spider_auto_increment_mode -1
spider_bgs_first_read -1
spider_bgs_mode -1
spider_bgs_second_read -1
spider_bka_engine
spider_bka_mode -1
spider_bka_table_name_type -1
spider_block_size 16384
spider_bulk_size -1
spider_bulk_update_mode -1
spider_bulk_update_size -1
spider_casual_read -1
spider_conn_recycle_mode 0
spider_conn_recycle_strict 0
spider_conn_wait_timeout 10
spider_connect_error_interval 1
spider_connect_mutex OFF
spider_connect_retry_count 1000
spider_connect_retry_interval 1000
spider_connect_timeout -1
spider_crd_bg_mode -1
spider_crd_interval -1
spider_crd_mode -1
spider_crd_sync -1
spider_crd_type -1
spider_crd_weight -1
spider_delete_all_rows_type -1
spider_direct_dup_insert -1
spider_direct_order_limit -1
spider_dry_access OFF
spider_error_read_mode -1
spider_error_write_mode -1
spider_first_read -1
spider_force_commit 1
spider_general_log OFF
spider_index_hint_pushdown OFF
spider_init_sql_alloc_size -1
spider_internal_limit -1
spider_internal_offset -1
spider_internal_optimize -1
spider_internal_optimize_local -1
spider_internal_sql_log_off -1
spider_internal_unlock OFF
spider_internal_xa OFF
spider_internal_xa_id_type 0
spider_internal_xa_snapshot 0
spider_load_crd_at_startup -1
spider_load_sts_at_startup -1
spider_local_lock_table OFF
spider_lock_exchange OFF
spider_log_result_error_with_sql 0
spider_log_result_errors 0
spider_low_mem_read -1
spider_max_connections 0
spider_max_order -1
spider_multi_split_read -1
spider_net_read_timeout -1
spider_net_write_timeout -1
spider_ping_interval_at_trx_start 3600
spider_quick_mode -1
spider_quick_page_size -1
spider_read_only_mode -1
spider_remote_access_charset
spider_remote_autocommit -1
spider_remote_default_database
spider_remote_sql_log_off -1
spider_remote_time_zone
spider_remote_trx_isolation -1
spider_reset_sql_alloc -1
spider_same_server_link OFF
spider_second_read -1
spider_select_column_mode -1
spider_selupd_lock_mode -1
spider_semi_split_read -1
spider_semi_split_read_limit -1
spider_semi_table_lock 1
spider_semi_table_lock_connection -1
spider_semi_trx ON
spider_semi_trx_isolation -1
spider_skip_default_condition -1
spider_skip_parallel_search -1
spider_split_read -1
spider_store_last_crd -1
spider_store_last_sts -1
spider_sts_bg_mode -1
spider_sts_interval -1
spider_sts_mode -1
spider_sts_sync -1
spider_support_xa ON
spider_sync_autocommit ON
spider_sync_trx_isolation ON
spider_table_crd_thread_count 10
spider_table_init_error_interval 1
spider_table_sts_thread_count 10
spider_udf_ct_bulk_insert_interval -1
spider_udf_ct_bulk_insert_rows -1
spider_udf_ds_bulk_insert_rows -1
spider_udf_ds_table_loop_mode -1
spider_udf_ds_use_real_table -1
spider_udf_table_lock_mutex_count 20
spider_udf_table_mon_mutex_count 20
spider_use_all_conns_snapshot OFF
spider_use_consistent_snapshot OFF
spider_use_default_database ON
spider_use_flash_logs OFF
spider_use_handler -1
spider_use_pushdown_udf -1
spider_use_snapshot_with_flush_tables 0
spider_use_table_charset -1
spider_version 3.3.13
spider_xa_register_mode 1
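One experiment that might be worth trying (an assumption on my part, not a confirmed fix): since the error only complains about the temporary table lookup, point the UDF at a regular table instead. The spider_udf_ds_use_real_table variable in the dump above (currently -1) also hints that real tables are a supported landing target:
-- purely a sketch: same definition as test.test1, but not TEMPORARY
CREATE TABLE test.test1 (
IMSI VARCHAR(20) NOT NULL,
DATA_DATE DATE NOT NULL,
PU_ID MEDIUMINT(9) NOT NULL DEFAULT 0,
DATA_DATE_TS INT(11) NOT NULL,
TECH_MASK TINYINT(2) NOT NULL,
PRIMARY KEY (IMSI, DATA_DATE, PU_ID, DATA_DATE_TS, TECH_MASK)
);
SELECT spider_bg_direct_sql('select * from gt_global_imsi.table_imsi_pu'
, 'test.test1'
, 'srv "PU1", port "3308"');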
Toby Chen
(1 rep)
Jan 24, 2019, 04:53 AM
• Last activity: Jul 18, 2025, 12:01 PM
16
votes
2
answers
4817
views
Is it possible for SQL statements to execute concurrently within a single session in SQL Server?
I have written a stored procedure which makes use of a temporary table. I know that in SQL Server, temporary tables are session-scoped. However, I have not been able to find definitive information on exactly what a session is capable of. In particular, if it is possible for this stored procedure to execute twice concurrently in a single session, a significantly higher isolation level is required for a transaction within that procedure due to the two executions now sharing a temporary table.
Trevor Giddings
(275 rep)
Apr 18, 2019, 05:37 PM
• Last activity: Jun 27, 2025, 01:57 PM
7
votes
1
answers
209
views
SQL Server deadlock on temp tables in different connections
I'm getting occasional deadlocks on temp tables in two different connections (at least as far as I can tell, the deadlock graph shows different spids). The deadlock graph (lightly redacted, and with the rest of truncated queries added in) is below.
I don't understand how this is possible, because temp tables shouldn't be shared between different connections. These are NOT global temp tables (as you can see). So they shouldn't be able to deadlock.
Also confusing, and possibly related, one of the temp table names listed in the deadlock graph is a random string of letters and numbers, which isn't referenced in the stored procedure that is running, like "#B825DAD5".
Also a little odd, and again possibly related, the lock mode causing the problem is Sch-M, but there are no schema modifications made to any tables or temp tables in the query. I suppose creating and dropping temp tables could count as schema modifications? But again, they are in different connections. Temp tables are created with CREATE, not SELECT INTO, if that matters.
My top suspects right now are either a bug in SQL Server or something strange because of the nolock hints on some of the tables in the query. But I would think someone on the internet would have seen this issue before, and I've found nothing like it in the searches I've tried. There are some references to bugs in SQL Server that caused deadlocks between temp tables, but those bugs were fixed over ten years ago, and they didn't look exactly the same anyway.
Has anyone seen this kind of thing? Can anyone explain those weird temp table names? Does anyone know a workaround or fix?
UPDATE t
SET Adv = 1
FROM dbo.SQ sq (nolock)
JOIN #tmpSDTR t on t.SQUUID = sq.SQUUID
and t.DeliverySequence = sq.DeliverySequence
JOIN dbo.SQQMetaData m (nolock) on m.SQUUID = t.SQUUID
and m.DeliverySequence = t.DeliverySequence
WHERE m.Name = 'Adv'
and m.Value = 1
Proc [Database Id = 7 Object Id = 1716201164]
update t
set
SQQueueID = q.SQQueueId,
SUUID = q.SUUID,
SRUUID = r.SRUUID,
ProjectKey = q.ProjectID,
L = q.L,
MQPID = q.MQPId,
StatusID = q.StatusID,
[Priority] = q.[Priority],
CreatedDate = q.CreatedDate,
RTID = r.RTID,
OrdinalPosition = r.DisplayOrdinal,
ProcessID = @ProcessID,
SRMetaDataName = md.Name,
SRMetaDataValue = md.Value
--select *
from #tmpSDTR t
JOIN dbo.SQQueue q on q.SQUUID = t.SQUUID
LEFT OUTER JOIN dbo.SRQueue r on r.SQUUID = q.SQUUID
AND r.DeliverySequence = q.DeliverySequence
LEFT OUTER JOIN dbo.SRQMetaData md on r.SRUUID = md.SRUUID
AND md.DeliverySequence = r.DeliverySequence
where t.PCId is null
Proc [Database Id = 7 Object Id = 1716201164]
**UPDATE 2025/04/07**
It looks like we are behind a touch on cumulative updates (we're on 28, current is 32), so we'll be trying to get that upgraded ASAP, but I can't say yet when that will happen. If we get the error again after the CU, I'll post another update here.
The stored proc has a bunch of different execution plans listed when I query sys.dm_exec_procedure_stats/sys.dm_exec_query_plan. There were 13 the last time I checked. I downloaded all of them, and they were all identical, EXCEPT for the table name in one small section:
You can see that odd temp table name there, and it is the same in all of the plans, EXCEPT for one, where it instead said: Table = "[tmpSDTR_____________________________________________________________________________________________000000001FF8]"
There were just 6 executions of this plan, compared to thousands to millions for most other plans. The query referenced in this section is not one of the two that trigger the deadlock, so I don't know if this is related.
Anyway, since the plans are pretty much all identical, I don't think parameter sniffing is causing different execution plans.
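For anyone wanting to repeat that plan-pull, a minimal sketch of the DMV query implied above (the DMVs are real; the procedure name is a placeholder):
-- pull every cached plan for one stored procedure
SELECT ps.plan_handle,
       ps.execution_count,
       qp.query_plan
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE ps.object_id = OBJECT_ID(N'dbo.MyProc');  -- placeholder name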
**UPDATE 2025/04/09**
Well, we updated to the latest Cumulative Update this morning, and now we have a new mysterious deadlock, but at least it doesn't involve temp tables in different connections. Assuming we don't see this weird deadlock again soon, I'm ready to close out this chapter, and if Martin Smith feels like posting his suggestion as an answer, I'd be happy to accept it.
user12861
(171 rep)
Apr 4, 2025, 02:53 PM
• Last activity: Jun 3, 2025, 01:56 PM
1
votes
1
answers
250
views
See content of temporary Table in Oracles' EXPLAIN
I'm executing a Query in Oracle SQL Developer and eventually call
SELECT * FROM TABLE ( dbms_xplan.display );
to get an impression of what Oracle just did. There's a column *Name* in the output that states the name of the index that is used in an operation or the name of the table that is being accessed.
Every once in a while it says something like *SYS_TEMP_0FD9D6B47_384FBF5*. I'm not sure what to make of it. I guess this is a temporary table created from a WITH clause. How can I see the content or the SQL code behind this table?
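As a starting point (a sketch, not a guaranteed answer): dbms_xplan.display accepts a format argument, and the more verbose formats add query block, alias and projection sections that can help tie a SYS_TEMP_% name back to the WITH clause that was materialized:
SELECT * FROM TABLE ( dbms_xplan.display(format => 'ALL') );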
Bernhard Döbler
(217 rep)
Oct 16, 2020, 04:55 PM
• Last activity: May 28, 2025, 01:05 PM
4
votes
2
answers
21718
views
In Postgres, how do I adjust the "pgsql_tmp" setting?
I"m using Postgres 9.5.4 on Ubuntu 14.04. I have my Postgres data on a separate disk partiion, which is getting pretty full. myuser@myproject:~$ df -h /mnt/volume-nyc1-01/ Filesystem Size Used Avail Use% Mounted on /dev/sda 99G 93G 413M 100% /mnt/volume-nyc1-01 I want to delete some data from my par...
I"m using Postgres 9.5.4 on Ubuntu 14.04. I have my Postgres data on a separate disk partiion, which is getting pretty full.
myuser@myproject:~$ df -h /mnt/volume-nyc1-01/
Filesystem Size Used Avail Use% Mounted on
/dev/sda 99G 93G 413M 100% /mnt/volume-nyc1-01
I want to delete some data from my partition, but this has become challenging. In particular, when I run some queries, I get results like this:
myproject_production=> select count(*) FROM my_object_times rt1, my_object_times rt2 where rt1.my_object_id = rt2.my_object_id and rt1.name = rt2.name and rt1.time_in_ms = rt2.time_in_ms and rt1.id > rt2.id;
ERROR: could not write block 52782 of temporary file: No space left on device
I want to free up some temp space so I can run queries and identify what data I need to delete. I have some other free space on another partition. How do I point my pgsql_tmp variable there so that I can run the queries I need?
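For context, pgsql_tmp is a directory, not a settable variable; the supported knob is temp_tablespaces. A minimal sketch, with a hypothetical path and tablespace name:
-- as a superuser, on an empty directory owned by the postgres OS user
CREATE TABLESPACE temp_space LOCATION '/opt/pg_temp_space';
-- point temporary files at it for one session...
SET temp_tablespaces = 'temp_space';
-- ...or globally (9.4+), then reload
ALTER SYSTEM SET temp_tablespaces = 'temp_space';
SELECT pg_reload_conf();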
Edit:
As the symlink option seemed to be the least invasive, I gave it a go, setting up things like this
myuser@myproject:~$ sudo ls -al /mnt/volume-nyc1-01/postgresql/9.5/main/base/pgsql_tmp
lrwxrwxrwx 1 root root 14 Apr 10 18:01 /mnt/volume-nyc1-01/postgresql/9.5/main/base/pgsql_tmp -> /opt/pgsql_tmp
myuser@myproject:~$ cd /opt
myuser@myproject:/opt$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/disk/by-uuid/050e1e34-39e6-4072-a03e-ae0bf90ba13a 40G 24G 15G 62% /
You can see that I have over 16GB available on the partition where I pointed it to, but still I get the errors
ERROR: could not write block 1862514 of temporary file: No space left on device
Dave
(753 rep)
Apr 10, 2017, 07:19 PM
• Last activity: May 8, 2025, 01:33 PM
0
votes
1
answers
416
views
MariaDB: Why is this still using temporary and filesort - is it because I'm using a self-join? How to fix?
SELECT e2.personId, COUNT(*)
FROM entries e1
JOIN entries e2 ON e2.categoryId = e1.categoryId
AND e2.personId != e1.personId
WHERE e1.personId = 1
GROUP BY e2.personId
gives:
id select_type table type key key_length ref rows extra
1 SIMPLE e1 ref personId_categoryId 4 const 59 Using index; Using temporary; Using filesort
1 SIMPLE e2 ref categoryId_personId 4 project.e1.categoryId 8 Using where; Using index
If I remove the GROUP BY I get "Using index".
What's the problem here? Is it something to do with joining a table onto itself?
(The indexes present on the table are the two shown in the explain output, containing the columns that the names suggest.)
DDL:
CREATE TABLE entries (
    id int(10) unsigned NOT NULL AUTO_INCREMENT,
    personId int(10) unsigned NOT NULL,
    categoryId int(10) unsigned NOT NULL,
    PRIMARY KEY (id),
    KEY personId_categoryId (personId, categoryId),
    KEY categoryId_personId (categoryId, personId),
    CONSTRAINT entries_ibfk_1 FOREIGN KEY (personId) REFERENCES people (id) ON UPDATE CASCADE,
    CONSTRAINT entries_ibfk_2 FOREIGN KEY (categoryId) REFERENCES categories (id) ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1465605 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
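One low-risk experiment, offered only as a sketch: MariaDB historically sorts GROUP BY results by the grouping columns, so ORDER BY NULL may drop the filesort, though the temporary table for grouping on e2.personId would remain:
SELECT e2.personId, COUNT(*)
FROM entries e1
JOIN entries e2 ON e2.categoryId = e1.categoryId
               AND e2.personId != e1.personId
WHERE e1.personId = 1
GROUP BY e2.personId
ORDER BY NULL;  -- suppresses the implicit group-order sort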
Codemonkey
(265 rep)
Jul 10, 2020, 07:48 AM
• Last activity: May 4, 2025, 05:07 AM
4
votes
1
answers
2598
views
How to put an execution time limit for MySQL Temporary table creation
We can use the SET MAX_EXECUTION_TIME=10; statement to stop the execution of a SELECT statement if it takes more than 10 milliseconds to execute the query. But the same won't work for other statements like CREATE TEMPORARY TABLE and INSERT etc.
Reference: https://dev.mysql.com/doc/refman/5.7/en/optimizer-hints.html#optimizer-hints-execution-time
Currently my temporary table creation takes more than 50s to complete. I want to put a limit like 10s. Is it possible in MySQL?
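For reference, a sketch of the documented scope limit (the table name is a placeholder): the timeout applies only to read-only SELECT statements, whether set as a variable or as an optimizer hint, so it never fires for the SELECT feeding a CREATE TEMPORARY TABLE ... AS:
-- works: a plain SELECT is killed after 10 seconds
SELECT /*+ MAX_EXECUTION_TIME(10000) */ * FROM big_table;
-- no effect: the statement is a CREATE, not a SELECT, so the hint is ignored
CREATE TEMPORARY TABLE tmp AS
SELECT /*+ MAX_EXECUTION_TIME(10000) */ * FROM big_table;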
Sarath S Nair
(163 rep)
Mar 12, 2018, 05:34 AM
• Last activity: Apr 26, 2025, 01:09 PM
4
votes
1
answers
6085
views
Values of created_tmp_disk_tables increases
I noticed that the value of Created_tmp_disk_tables keeps increasing.
At a certain time, MYD files are created in the /tmp directory and my site runs very slowly.
I am forced to restart the mysql service to get my site running again.
What is the cause of the creation of disk tables in the /tmp directory and of MySQL grinding to a halt?
My server:
- Ubuntu 12
- RAM : 32 G
- Processors Number: 12
my.cnf:
key_buffer_size = 1024M
max_allowed_packet = 512M
thread_stack = 256K
thread_cache_size = 8
read_buffer_size = 2M
join_buffer_size = 2M
tmp_table_size = 256M
max_heap_table_size = 256M
bulk_insert_buffer_size = 256M
open-files-limit = 2048
query_cache_type = 0
query_cache_limit = 0
query_cache_size = 0
innodb_file_per_table
innodb_table_locks = true
innodb_lock_wait_timeout = 60
innodb_thread_concurrency = 8
innodb_commit_concurrency = 2
innodb_support_xa = true
innodb_buffer_pool_size = 16G
innodb_log_file_size = 64M
innodb_additional_mem_pool_size = 16M
innodb_data_file_path = ibdata1:10M:autoextend
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size = 64M
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
[myisamchk]
key_buffer = 16M
myisam_sort_buffer_size = 16M
myisam_max_sort_file_size = 2147483648
myisam_repair_threads = 1
myisam-recover = BACKUP
mysql> show global status like 'Created_tmp_%tables';
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 25461 |
| Created_tmp_tables | 29134 |
+-------------------------+-------+
2 rows in set (0.00 sec)
mysql> show global status like 'Created_tmp_%tables';
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 25566 |
| Created_tmp_tables | 29327 |
+-------------------------+-------+
2 rows in set (0.00 sec)
mysql> show global status like 'Created_tmp_%tables';
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 25566 |
| Created_tmp_tables | 29328 |
+-------------------------+-------+
2 rows in set (0.00 sec)
mysql> show global status like 'Created_tmp_%tables';
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 25566 |
| Created_tmp_tables | 29329 |
+-------------------------+-------+
2 rows in set (0.00 sec)
mysql> show global status like 'Created_tmp_%tables';
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 25570 |
| Created_tmp_tables | 29357 |
+-------------------------+-------+
2 rows in set (0.00 sec)
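If performance_schema statement digests are available (MySQL 5.6+, which isn't guaranteed by the posted my.cnf, so this is an assumption), they can point to the statements producing on-disk temp tables:
SELECT DIGEST_TEXT,
       COUNT_STAR,
       SUM_CREATED_TMP_DISK_TABLES
FROM performance_schema.events_statements_summary_by_digest
WHERE SUM_CREATED_TMP_DISK_TABLES > 0
ORDER BY SUM_CREATED_TMP_DISK_TABLES DESC
LIMIT 10;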
Mohamed Ben HEnda
(153 rep)
Dec 23, 2016, 01:09 PM
• Last activity: Apr 20, 2025, 06:03 AM
1
votes
3
answers
834
views
Copy table structure excluding nullability into local temporary table
How can I copy the structure of a table into a temp table (or table variable) **excluding the nullability** of each column? Mainly I need to copy the column names and data types, but not nullability, keys, indexes, etc. I'll also note that this must be done programmatically within a procedure, so I can't just generate a create table script and modify it. The best I've come up with has been:
select top 0 *
into #MyTempTable
from dbo.MyTable
which copies the nullability of each field, thus defeating the purpose for me.
I've also played around with dynamic SQL, extracting the column data from INFORMATION_SCHEMA.COLUMNS to build the create table statement, but the issue there is that the temp table goes out of scope after the dynamic SQL statement executes and control is returned to the "main" procedure. (And I'd rather not jam the rest of the procedure into the dynamic SQL statement.) If there were a way to return the temp table from the exec (@Sql) statement or keep it in scope somehow, it might work, but I don't know that there's a way to do that.
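A sketch of one scope workaround, under the assumption that its concurrency caveats are acceptable: a global temp table created inside exec (@Sql) survives the dynamic-SQL scope for the rest of the session, so the "main" procedure can still see it (all names here are hypothetical, and the column list would really be built from INFORMATION_SCHEMA.COLUMNS):
DECLARE @Sql nvarchar(max) = N'
    SELECT CONVERT(int, NULL) AS Col1,          -- NULL constants yield nullable columns
           CONVERT(varchar(50), NULL) AS Col2
    INTO ##MyTempTable
    WHERE 1 = 0;';
EXEC (@Sql);
-- back in the calling procedure, the global temp table is still in scope
SELECT * FROM ##MyTempTable;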
Related questions:
- Copy complete structure of a table
- Create table structure from existing table
neizan
(113 rep)
Nov 12, 2015, 02:04 PM
• Last activity: Apr 7, 2025, 12:46 PM
0
votes
1
answers
3553
views
How to create multiple temp tables using records from a CTE that I need to call multiple times in Postgres plpgsql Procedure?
I am already using a CTE expression within a plpgsql Procedure to grab some Foreign Keys from (1) specific table, we can call it master_table. I created a brand new table, we can call this table table_with_fks, in my DDL statements, so this table holds the FKs I am fetching and saving.
I later take these FKs from my table_with_fks and JOIN on my other tables in my database to get the entire original record (the full record with all columns from its corresponding table) and insert it into an archive table.
I have an awesome lucid chart I drew that might make what I say down below make much more sense.
My CTE example:
LOOP
EXIT WHEN some_condition;
WITH fk_list_cte AS (
SELECT mt.fk1, mt.fk2, mt.fk3, mt.fk4
FROM master_table mt
WHERE mt.created_date = 1 year old
LIMIT 10000
)
INSERT INTO table_with_fks (SELECT * FROM fk_list_cte);
commit;
END LOOP;
Now, I have (4) other Procedures that JOIN on each FK in this table_with_fks with its **parent** table that it references. I do this because, as I said, I only got the FK at first, and I don't have all the original columns for the record. So I will do something like:
LOOP
EXIT WHEN some_condition;
WITH full_record_cte AS (
SELECT *
FROM table_with_fks fks
JOIN parent_table1 pt1
ON fks.fk1 = pt1.id
LIMIT 10000),
INSERT INTO (select * from full_record_cte);
commit;
END LOOP;
***NOW***, what I want to do is: instead of having to RE-JOIN 4 times later on these FKs that are found in my table_with_fks, I want to use the first CTE fk_list_cte to JOIN on the parent tables right away, grab the full record from each of the (4) tables, and put it in some TEMP postgres table. I think I will need (4) unique TEMP tables, as I don't know how it would work if I combined all their data into one BIG table, because each table has different data/different columns.
Is there a way to use the original CTE fk_list_cte and call it multiple times in succession, and CREATE 4 TEMP tables right after that all use the original CTE? Example:
LOOP
EXIT WHEN some_condition;
WITH fk_list_cte AS (
SELECT mt.fk1, mt.fk2, mt.fk3, mt.fk4
FROM master_table mt
WHERE mt.created_date = 1 year old
LIMIT 10000
),
WITH fetch_fk1_original_record_from_parent AS (
SELECT *
FROM fk_list_cte cte
JOIN parent_table1 pt1
ON cte.fk1 = pt1.id
),
WITH fetch_fk2_original_record_from_parent AS (
SELECT *
FROM fk_list_cte cte
JOIN parent_table2 pt2
ON cte.fk2 = pt2.id
),
WITH fetch_fk3_original_record_from_parent AS (
SELECT *
FROM fk_list_cte cte
JOIN parent_table3 pt3
ON cte.fk3 = pt3.id
),
WITH fetch_fk4_original_record_from_parent AS (
SELECT *
FROM fk_list_cte cte
JOIN parent_table4 pt4
ON cte.fk4 = pt4.id
),
CREATE TEMPORARY TABLE fk1_tmp_tbl AS (
SELECT *
FROM fetch_fk1_original_record_from_parent
)
CREATE TEMPORARY TABLE fk2_tmp_tbl AS (
SELECT *
FROM fetch_fk2_original_record_from_parent
)
CREATE TEMPORARY TABLE fk3_tmp_tbl AS (
SELECT *
FROM fetch_fk3_original_record_from_parent
)
CREATE TEMPORARY TABLE fk4_tmp_tbl AS (
SELECT *
FROM fetch_fk4_original_record_from_parent
);
END LOOP;
I know the 4 CREATE TEMPORARY TABLE statements definitely won't work (can I create 4 temp tables simultaneously/at once?). Does anyone see the logic of what I am trying to do here and can help me?
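One shape this could take, purely as a sketch (assuming materializing the FK list once per iteration is acceptable): a CTE only lives for the single statement it heads, so instead of repeating it, stage the FK batch in its own temp table and derive each per-parent temp table from that with one statement apiece:
-- stage the FK batch once per loop iteration (cleanup strategy left to the loop)
DROP TABLE IF EXISTS fk_batch;
CREATE TEMPORARY TABLE fk_batch AS
SELECT mt.fk1, mt.fk2, mt.fk3, mt.fk4
FROM master_table mt
WHERE mt.created_date < now() - interval '1 year'   -- stand-in for "1 year old"
LIMIT 10000;
-- each per-parent temp table is then a single statement over the staged batch
DROP TABLE IF EXISTS fk1_tmp_tbl;
CREATE TEMPORARY TABLE fk1_tmp_tbl AS
SELECT pt1.*
FROM fk_batch b
JOIN parent_table1 pt1 ON b.fk1 = pt1.id;
-- ...and likewise for fk2 / fk3 / fk4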
PainIsAMaster
(131 rep)
Oct 9, 2021, 04:01 AM
• Last activity: Apr 7, 2025, 12:15 AM
1
votes
1
answers
80
views
Is pg_attribute bloat from temp table creation still a potential issue in modern versions of PostgreSQL?
I am not an expert in PostgreSQL, however I recently ran across some PostgreSQL code during a code review which creates and drops temporary tables in a manner that seems consistent with the typical way we do that in SQL Server. i.e. in SQL Server we'd do:
DROP TABLE IF EXISTS #temp_data;
CREATE TABLE #temp_data
(
i int NOT NULL
);
The code review showed this code:
BEGIN;
DROP TABLE IF EXISTS temp_data;
COMMIT;
BEGIN;
CREATE TEMP TABLE IF NOT EXISTS temp_data(
i int NOT NULL
);
COMMIT;
[This answer on Stack Overflow](https://stackoverflow.com/a/55580279/1595565) claims you shouldn't drop and recreate temp tables frequently because of pg_attribute bloat.
ChatGPT (gah!) has this to say about pg_attribute bloat:
> In PostgreSQL, creating temporary tables frequently can cause bloat in pg_attribute, as each new temporary table adds metadata that persists in the system catalogs even after the table is dropped. To avoid excessive bloat, consider these best practices:
>
> 1. Use ON COMMIT DELETE ROWS instead of dropping tables:
>
> > CREATE TEMP TABLE temp_data (
> >     i int NOT NULL
> > ) ON COMMIT DELETE ROWS;
>
> 2. Use the pg_temp schema for session-level temporary tables:
>
> > CREATE TEMP TABLE pg_temp.temp_data (
> >     id SERIAL PRIMARY KEY,
> >     value TEXT
> > );
>
Since I am very doubtful about the veracity of any claims made by any large language model, which approach should I choose? Indeed, is either approach even valid?
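As one way to ground-truth the claim with standard statistics views: drop-and-recreate churn shows up as dead tuples in pg_attribute, which autovacuum normally reclaims, so the counters are worth watching before choosing either pattern:
-- rough dead-tuple and size picture for the catalog in question
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum
FROM pg_stat_sys_tables
WHERE relname = 'pg_attribute';
SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_attribute'));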
Hannah Vernon
(70988 rep)
Mar 13, 2025, 10:21 PM
• Last activity: Mar 14, 2025, 07:58 AM
14
votes
1
answers
31827
views
Index usage on a temporary table
I have two rather simple queries. The first query:
UPDATE mp_physical SET periodic_number = '' WHERE periodic_number is NULL;
and its plan:
duration: 0.125 ms plan:
Query Text: UPDATE mp_physical SET periodic_number = '' WHERE periodic_number is NULL;
Update on mp_physical (cost=0.42..7.34 rows=1 width=801)
-> Index Scan using "_I_periodic_number" on mp_physical (cost=0.42..7.34 rows=1 width=801)
Index Cond: (periodic_number IS NULL)
And the second one:
UPDATE observations_optical_temp SET designation = '' WHERE periodic_number is NULL;
and its plan:
duration: 2817.375 ms plan:
Query Text: UPDATE observations_optical_temp SET periodic_number = '' WHERE periodic_number is NULL;
Update on observations_optical_temp (cost=103.55..9223.01 rows=5049 width=212)
-> Bitmap Heap Scan on observations_optical_temp (cost=103.55..9223.01 rows=5049 width=212)
Recheck Cond: (periodic_number IS NULL)
-> Bitmap Index Scan on "_I_per_num_temp" (cost=0.00..102.29 rows=5049 width=0)
Index Cond: (periodic_number IS NULL)
I expect the second plan should be the same as the first one. But it's not. Why?
Here are dumps of the tables.
CREATE TABLE public.mp_physical(
id_mpp serial NOT NULL,
id_comet_parts integer,
"SPK_id" public.nonnegative_int,
designation varchar(30),
name varchar(100),
prefix varchar,
"is_NEO" bool,
"H" double precision,
"G" double precision,
diameter public.nonnegative_double,
extent varchar(30),
extent_error public.nonnegative_double,
geometric_albedo public.nonnegative_double,
rot_per public.nonnegative_double,
"GM" public.nonnegative_double,
"BV" public.nonnegative_double,
"UB" public.nonnegative_double,
"spec_B" varchar(30),
"spec_T" varchar(30),
lca double precision,
multiplicity public.nonnegative_int,
polar_ang double precision,
polar_slope_ang double precision,
a double precision,
b double precision,
mass public.nonnegative_double,
mp_type public.mp_type NOT NULL,
periodic_number varchar(5),
diameter_method_def varchar(200),
discovery_info text,
"H_sigma" public.nonnegative_double,
"G_sigma" public.nonnegative_double,
diameter_sigma public.nonnegative_double,
geometric_albedo_sigma public.nonnegative_double,
rot_per_sigma public.nonnegative_double,
"GM_sigma" public.nonnegative_double,
"BV_sigma" public.nonnegative_double,
"UB_sigma" public.nonnegative_double,
lca_sigma public.nonnegative_double,
a_sigma public.nonnegative_double,
b_sigma public.nonnegative_double,
polar_ang_sigma public.nonnegative_double,
mass_sigma public.nonnegative_double,
CONSTRAINT "_C_id_ap" PRIMARY KEY (id_mpp)
);
CREATE INDEX "_I_name" ON mp_physical USING btree (name);
CREATE INDEX "_I_designation" ON mp_physical USING btree(mpp_designation);
CREATE INDEX "_I_periodic_number" ON mp_physical USING btree(periodic_number);
CREATE INDEX "_I_mp_type" ON mp_physical USING btree(mp_type);
And
CREATE TEMPORARY TABLE "observations_optical_temp"(note_1,date,"RA","Dec",magnitude,band,id_observatory,id_mpp,"Dec_degree",observatory_code,periodic_number,mpp_designation,mp_type)
AS SELECT note_1,date,"RA","Dec",magnitude,band,id_observatory,id_mpp,"Dec_degree",'1'::varchar(3),'1'::varchar(8),'1'::varchar(30),'A'::public.mp_type FROM observations_optical;
CREATE TABLE observations_optical(
id_obs_o bigint,
note_1 varchar,
date timestamp NOT NULL,
"RA" time NOT NULL,
"Dec_degree" integer NOT NULL,
"Dec" time NOT NULL,
magnitude double precision,
band varchar,
id_observatory integer,
id_mpp integer,
CONSTRAINT "_PK_id_obs_o" PRIMARY KEY (id_obs_o)
);
CREATE INDEX "_I_temp_1" ON observations_optical_temp USING btree(mpp_designation);
CREATE INDEX "_I_temp_2" ON observations_optical_temp USING btree(periodic_number);
CREATE INDEX "_I_temp_3" ON observations_optical_temp USING btree(mp_type);
Artem Zefirov
(253 rep)
Oct 26, 2016, 05:44 PM
• Last activity: Feb 27, 2025, 02:13 AM
0
votes
0
answers
30
views
Is there a limit to how many temporary tables MySQL can handle?
I manage a web service. The service has multiple endpoints, and most of these endpoints simply return some data representing an entity (such as a track, genre or composer). Each endpoint usually supports sorting and filtering through a number of parameters.
Let's assume an endpoint which returns a list of tracks. A track can be associated with one or more genres and one or more composers.
The endpoint will first issue a query such as:
1. select {fields} from track t where t.album_id = {album_id} order by t.number limit {offset}, {limit}
The service also needs to load associated genres and composers. For this, the application extracts the track ids from the first query and executes:
2. select {fields} from genre_track gt inner join genre g on gt.genre_id = g.id where gt.track_id in {track_ids}
3. select {fields} from composer_track ct inner join composer c on ct.composer_id = c.id where ct.track_id in {track_ids}
This has worked well. The initial query takes care of filtering, sorting and range - and the subsequent queries loads additional data.
However, I'm considering using temporary tables in the following way :
1. insert into temptable (id) select id from track t where t.album_id = {album_id} order by t.number limit {offset}, {limit}
2. select {fields} from temptable tmp left join track t on tmp.id = t.id
3. select {fields} from genre_track gt inner join genre g on gt.genre_id = g.id inner join temptable tmp on tmp.id = gt.track_id
4. select {fields} from composer_track ct inner join composer c on ct.composer_id = c.id inner join temptable tmp on tmp.id = ct.track_id
I do know that this approach will result in one additional query (the query which does the initial select and inserts the resulting ids into a temporary table). I don't expect that to give a lot of overhead.
The underlying assumption is, however, that a join on a temporary table with an index will be faster than doing where id in {track_ids} on e.g. 2000 track ids.
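For that assumption to hold, the temp table needs its index up front; a minimal sketch of the staging step (table and column names follow the question, the literals are placeholders):
CREATE TEMPORARY TABLE temptable (
  id INT UNSIGNED NOT NULL,
  PRIMARY KEY (id)            -- the join target, so lookups stay indexed
) ENGINE=InnoDB;
INSERT INTO temptable (id)
SELECT t.id
FROM track t
WHERE t.album_id = 123        -- placeholder for {album_id}
ORDER BY t.number
LIMIT 0, 100;                 -- placeholder for {offset}, {limit}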
A typical query will load 100 tracks, but often the number of tracks to be loaded is somewhere between 500 and 5000.
Question: The number of temporary tables in use may become large. Could this potentially become a problem?
sbrattla
(193 rep)
Feb 25, 2025, 02:57 PM
• Last activity: Feb 25, 2025, 03:02 PM
15
votes
1
answers
746
views
What could cause a SQL Server non-cached temporary table to NOT trigger a recompile when a large amount of rows have changed?
I have observed (and reproduced) the following scenario with SQL Server 2022.
### The pattern in use
* code is executed via sp_executesql (no stored procedure is involved)
* The first query selects data into a temporary table
* A DDL statement then creates a clustered index on the temporary table. The temporary table is definitely NOT cacheable-- first of all this isn't a module (sproc or function), but also we're creating an index after the temp table is populated. So I would not expect statistics left behind on a cached temporary object to be involved at all here.
* A query selects data from the temporary table. This query gets FULL optimization each time (not a TRIVIAL plan); a schematic sketch of the whole pattern follows this list.
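A schematic of that pattern, for orientation only (the linked gists hold the actual repro; every name below is a placeholder):
EXEC sys.sp_executesql N'
    SELECT id, payload
    INTO #work
    FROM dbo.SourceTable              -- placeholder table
    WHERE batch_id = @b;
    CREATE CLUSTERED INDEX cx_work ON #work (id);  -- DDL after load => not cacheable
    SELECT w.id, w.payload
    FROM #work AS w
    JOIN dbo.OtherTable AS o ON o.id = w.id;       -- gets FULL optimization
', N'@b int', @b = 1;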
This batch can run for both small and larger datasets, so that temp table can have 1 row in it or many thousands of rows.
This behavior normally occurs on a readable secondary. There is no writable query store and no automatic plan forcing as a factor.
I have verified that I can reproduce the behavior against the primary replica as well. (Automatic plan correction was told to ignore the query and I confirmed no plan forcing on primary when reproduced.)
### Repro script
* [Setup script](https://gist.github.com/LitKnd/83130120683e669669c048a5e74c8145#file-setuprepro-sql) - I ran this on SQL Server 2022 CU15. This turns off query store and uses compat level 130.
* [Repro query](https://gist.github.com/LitKnd/83130120683e669669c048a5e74c8145#file-repro-sql) - I've been running this via SQL Query Stress so I can easily run it concurrently on one or more threads
* [Plan Generation Num and temp tables](https://gist.github.com/LitKnd/83130120683e669669c048a5e74c8145#file-plan_generation_num_and_temp_tables-sql) - A very simple query to observe the plan_generation_num in sys query stats (" A sequence number that can be used to distinguish between instances of plans after a recompile.") and the current list of temp tables
### What normally happens-- and the behavior I expect
Normally, changing large amounts of rows in the temporary table between query executions automatically cause recompiles, and I'll see that the query selecting the data from the temporary table has a row estimate matching the rows in the temp table.
When this works as expected, performance is fine.
*With the repro query*: If I clear the plan cache, then run the repro query 40 iterations on a single thread in SQL Query Stress, plan_generation_number ends up being 82. When sampling query plans with sp_WhoIsActive, row estimates querying the temp table match the number of rows in the temp table, as expected.
### What sometimes happens -- and looks like a bug to me
On rare occasions, I see that a plan is in use where there is a 1 row estimate plan for the temp table, but a very large amount of rows are actually in the temp table. A LOT of rows have changed, but it didn't automatically recompile:
This leads to very slow performance because the low estimate plan decides to use a nested loop without prefetching, which makes it a CPU burner.
*With the repro query*: If I clear the plan cache, then run the repro query 20 iterations on 2 threads in SQL Query Stress, plan_generation_number ends up less than 82-- it varies by run, but might be 72 or 59, indicating fewer recompiles. While this is running, I can also sample occasions with sp_WhoIsActive where there is a single rowcount estimated but many more rows in the temporary table. Screenshot:
### I can only reproduce this when running repro code on multiple concurrent sessions
I have not been able to reproduce this behavior with a single session in SQL Server. The only way I can reproduce this is to set up a block of code that:
* Executes at least 1 iteration of the sp_executesql statement that have 1 row in the temp table
* Then executes 1 iteration of the sp_executesql statement that has a lot more rows in the temp table
If I run this in a single session, I have not been able to reproduce the problems. But if I run this concurrently in four or five sessions, I'll be able to occasionally get the "THAT DIDN'T RECOMPILE LIKE IT SHOULD HAVE" issue to pop up. (Note: using SQL Query Stress, I can repro this with only 2 sessions/iterations.)
This feels like a bug to me, I'm curious if anyone else has seen it. Recompile and stats behavior with temp tables is super complex tho, so there may be some nuance that I'm missing with how this works with non-cachable temp tables.
PS: I do think cachable temp tables are generally better. I'm just trying to figure out why this behavior would happen in a non-cacheable temp table scenario at this point.
### Workarounds
After adding an option (recompile) to the query, I can no longer reproduce the reuse of the 1 row plan querying the temp table. This is sufficient, but I'm puzzled why it is necessary.
Kendra Little
(938 rep)
Dec 6, 2024, 03:32 PM
• Last activity: Dec 10, 2024, 09:00 AM
-1
votes
1
answers
117
views
OBJECT_ID fails to find existing temp table
I'm using temp tables to stage data before inserting into production. As a couple of things use them, before anything else I do a:
IF OBJECT_ID(N'##Projects_TEMP') IS NOT NULL
TRUNCATE TABLE ##Projects_TEMP
ELSE
SELECT * INTO ##Projects_TEMP FROM Projects WHERE 1=0
The idea is to clear the table if it's there, and make the table if it's not. I believe that T-SQL should do that?
However:
select OBJECT_ID(N'##Projects_TEMP')
always returns NULL. This is surprising, because the table *does* exist in that connection (I did so manually in SSMS to be sure), and:
select * from ##Projects_TEMP
Works fine. What am I missing here?
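One thing worth checking (a well-known detail, offered as a sketch rather than a confirmed diagnosis of this case): temp tables live in tempdb, and OBJECT_ID resolves names in the current database unless told otherwise:
-- resolves, because the lookup is pointed at tempdb
select OBJECT_ID(N'tempdb..##Projects_TEMP')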
Maury Markowitz
(230 rep)
Dec 6, 2024, 05:52 PM
• Last activity: Dec 6, 2024, 05:59 PM
0
votes
1
answers
527
views
MariaDB tmpdir & table copy operations
I have a Linux server with MariaDB v10.0.33, where tmpdir is tmpfs (RAM) for fast temporary table access; tmpfs is obviously limited in size.
MariaDB seems to use the same tmpdir location for ALTER TABLE operations that require a table copy; these can require large amounts of space if tables are many GB.
Old MySQL versions configured the 'table copy' and 'temp table' locations separately: the table copy was written to the tablespace and temporary tables to tmpdir.
I am aware I can set 'innodb_tmpdir' per command.
Can MariaDB be configured so that table copies & temporary tables use different locations?
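For reference, a sketch of the per-command option mentioned above (innodb_tmpdir is a real session variable for online ALTER intermediate files; the path and table are placeholders):
-- send this session's ALTER TABLE rebuild files to disk instead of tmpfs
SET SESSION innodb_tmpdir = '/var/lib/mysql-alter-tmp';  -- placeholder path
ALTER TABLE big_table ENGINE=InnoDB;                     -- placeholder rebuild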
user120053
(9 rep)
Jan 24, 2018, 11:12 AM
• Last activity: Sep 30, 2024, 06:08 PM
0
votes
0
answers
43
views
Temporary relation files not removed after sessions ends - Single PostgreSQL instance PostgreSQL 13
We are facing a disk-space-full issue in the default temporary tablespace due to lots of temporary relation files like the ones below, which are created but not removed after the session ends:
1.1G t756_1536213694.409
1.1G t756_1536213694.410
1.1G t756_1536213694.411
1.1G t756_1536213694.412
1.1G t756_1536213694.413
1.1G t756_1536213694.414
Please note: these files are not in pgsql_tmp (which is used for temporary files from sorting etc.); these files are created in the temp_tablespaces location, which is currently the default tablespace.
spin=# show temp_tablespaces ;
temp_tablespaces
------------------
(1 row)
I tried to get the backend ID from the query below, but no luck, as the process ID changes every time I execute the query.
spin=# SELECT bid, pg_stat_get_backend_pid(bid) AS pid FROM pg_stat_get_backend_idset() bid where bid='756';
bid | pid
-----+-------
756 | 41533
(1 row)
spin=# SELECT bid, pg_stat_get_backend_pid(bid) AS pid FROM pg_stat_get_backend_idset() bid where bid='756';
bid | pid
-----+-------
756 | 86639
(1 row)
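As a cross-check, a sketch using standard catalog columns (the t756_* prefix encodes backend ID 756 plus a file node): temporary relations still visible in the catalog can be listed via relpersistence:
-- temporary relations known to the catalog, with their on-disk paths
SELECT c.oid::regclass AS temp_relation,
       pg_relation_filepath(c.oid) AS file_path
FROM pg_class c
WHERE c.relpersistence = 't';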
Questions:
1. Can we remove these files directly?
2. Are these files removed once the DB restarts?
3. How can we find which session created these temp relations that are not removed once the session ends?
thanks for your help.
Adam Mulla
(143 rep)
Sep 2, 2024, 12:32 PM
• Last activity: Sep 3, 2024, 11:50 AM
6
votes
3
answers
30599
views
Copying CSV file to temp table with dynamic number of columns?
I am wondering if there is a way to copy a csv file into a temp table where the number of columns is unknown in the csv file. The DB software I'm using is PgAdmin III. I figured out that if I do know the number of columns then I can create a temp table with that amount of columns and then copy the csv file in like so:
CREATE TEMPORARY TABLE temp
(
col1 VARCHAR(80),
col2 VARCHAR(80),
....
coln VARCHAR(80)
);
COPY temp FROM 'C:/Users/postgres/Boost.txt' CSV HEADER DELIMITER E' '
However if I try to just simply copy the csv file to a temp table with no columns in the temp table, Postgresql (version 8.4) complains that I'm working with a table that has fewer columns than the csv file. I've been researching and can't seem to find anything in the Postgresql docs about this. Does anyone know if copying a csv file into a temp table with an arbitrary number of columns decided at run time is possible in Postgresql? Once the temp table is loaded with the csv file I plan on doing some comparisons to other tables with the temp table before it is destroyed. Also the first row in the csv file contains headers.
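A workaround sketch under one assumption (that post-processing the lines in SQL is acceptable, and that the file contains no tabs or backslash escapes): load the whole file into a single text column, then split on the delimiter at query time, so the column count never has to be known at COPY time:
-- one wide text column; the CSV structure is recovered afterwards
CREATE TEMPORARY TABLE raw_lines (line text);
COPY raw_lines FROM 'C:/Users/postgres/Boost.txt';
-- split each line on the space delimiter into an array of fields
SELECT string_to_array(line, ' ') AS fields
FROM raw_lines;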
Bmoe
(161 rep)
Jun 30, 2015, 09:38 PM
• Last activity: Aug 27, 2024, 07:10 PM
2
votes
1
answers
4019
views
Automatically materialize CTE? (instead of temporary table CREATE and INSERT)
I’m a novice trying to learn about query optimization and temporary tables in Oracle.
The examples I’ve seen for Oracle temporary tables involve CREATE TABLE and INSERT INTO statements.
> CREATE PRIVATE TEMPORARY TABLE ora$ppt_temp1(
> id INT,
> description VARCHAR2(100)
> ) ON COMMIT DROP DEFINITION;
>
> INSERT INTO ora$ppt_temp1(id,description)
> VALUES(1,'Transaction-specific private temp table');
In my case (IBM Maximo), my queries are pure SELECT statements (views, etc.), so I can’t tack on other statements like CREATE TABLE or INSERT INTO.
But I came across a blog about SQL Performance that describes some related functionality in PostgreSQL that sounds useful:
> PostgreSQL 12+ automatically materializes (allocates the output into memory) CTEs, which are called more than once.
To my untrained eye, the idea of automatic materialization seems appealing. It could help me avoid running CTEs multiple times unnecessarily.
Is there a way to do that sort of thing in Oracle? (avoiding the need for CREATE TABLE and INSERT INTO statements)
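For what it's worth, a sketch of the closest Oracle-side equivalents I'm aware of (hedged: MATERIALIZE is a widely used but undocumented hint): Oracle's optimizer may materialize a CTE on its own when it is referenced more than once, and the hint can nudge it from inside a pure SELECT:
WITH expensive AS (
    SELECT /*+ MATERIALIZE */ deptno, COUNT(*) AS cnt   -- hint requests temp-table materialization
    FROM emp                                            -- emp is the classic demo table
    GROUP BY deptno
)
SELECT * FROM expensive e1
JOIN expensive e2 ON e1.deptno = e2.deptno;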
User1974
(1527 rep)
Sep 22, 2021, 04:58 PM
• Last activity: Aug 2, 2024, 01:47 PM
Showing page 1 of 20 total questions