
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
2 answers
706 views
Why can we FLASHBACK a dropped table when DROP is a DDL command and hence autocommitted?
DROP and TRUNCATE are both DDL commands, and both are autocommitted. So why can we recover a dropped table using FLASHBACK (unless it was purged), but not recover truncated data (leaving backups aside)?
Paresh (81 rep)
Aug 18, 2015, 05:19 AM • Last activity: Apr 7, 2025, 03:03 PM
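The asymmetry comes from how Oracle implements the two commands: DROP TABLE without PURGE merely renames the segment into the recycle bin, while TRUNCATE deallocates the extents immediately, so there is nothing left to flash back to. A rough sketch of the flashback-drop workflow (the table name is illustrative only):
```sql
-- DROP without PURGE keeps the segment, renamed, in the recycle bin
DROP TABLE demo_orders;

-- The renamed object is still listed there
SELECT object_name, original_name, droptime FROM recyclebin;

-- ...so it can be restored under its original name
FLASHBACK TABLE demo_orders TO BEFORE DROP;

-- TRUNCATE deallocates the extents right away: nothing is kept to restore
TRUNCATE TABLE demo_orders;

-- DROP ... PURGE likewise bypasses the recycle bin, which is why a purged
-- drop cannot be flashed back either
-- DROP TABLE demo_orders PURGE;
```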
1 vote
1 answer
384 views
Allow user to drop only tables / views created by that user in MySQL
Is it possible to allow a user to *only* drop tables and views that he/she created in MySQL? How is this achieved?
Giovanni Colitti (131 rep)
Jan 20, 2021, 04:54 PM • Last activity: Apr 3, 2025, 08:14 PM
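MySQL privileges are granted per schema or per object, not per owner, so there is no built-in "only what you created" rule; a common workaround is to confine each user to a schema of their own. A hedged sketch (user, password, and schema names are hypothetical):
```sql
-- MySQL does not record table ownership, so scope DROP to a per-user schema
CREATE DATABASE giovanni_sandbox;
CREATE USER 'giovanni'@'%' IDENTIFIED BY 'change_me';

-- Full rights (including DROP, which covers tables and views) only here
GRANT CREATE, CREATE VIEW, DROP, SELECT, INSERT, UPDATE, DELETE
    ON giovanni_sandbox.* TO 'giovanni'@'%';

-- Read-only access to shared data, if required
GRANT SELECT ON shared_db.* TO 'giovanni'@'%';
```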
1 vote
3 answers
1638 views
Replacing a MySQL table with data from another table - no downtime
I have two tables with identical structures, Table1 and Table2: id INT PRIMARY KEY, a TINYINT, b TINYINT, c TINYINT. Table1 is actively being used by queries from other sessions; Table2 is not referenced/accessed by anything else. My goal is to end up with just Table1, but containing the data from Table2. (Note that Table2 and Table1 do not have the same set of id primary key values.) What is the best way to do this without risking Table1 not existing momentarily for other sessions? I know I can do something like: DROP TABLE Table1; ALTER TABLE Table2 RENAME Table1; but I am not sure how to ensure nothing tries to query Table1 while it doesn't exist or is empty. Do I use transactions, lock the tables, or something else? (Ideally, I'd want any other sessions' queries to return either old Table1 or new Table1 data rather than failing.) I am using CodeIgniter in PHP to make the calls. **UPDATE:** It looks like this post answers it using a method from the MySQL RENAME TABLE documentation: RENAME TABLE Table1 TO Table1_old, Table2 TO Table1; and then I can DROP Table1_old at my leisure.
Phil W. (11 rep)
Apr 4, 2017, 06:53 PM • Last activity: Feb 7, 2025, 07:06 AM
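For reference, the RENAME TABLE swap mentioned in the update is atomic, so other sessions always see either the old or the new Table1, never a missing one. A minimal sketch:
```sql
-- All renames in one RENAME TABLE happen atomically: no session ever
-- observes a moment in which Table1 does not exist.
RENAME TABLE Table1 TO Table1_old,
             Table2 TO Table1;

-- Clean up whenever convenient
DROP TABLE Table1_old;
```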
1 vote
1 answer
3575 views
Reclaim disk space after dropping a table (innodb_file_per_table = ON), strange scenario
I know this question gets asked often, but I can't find an answer for my specific scenario. We use **MySQL 5.5** and have always (I am sure) had innodb_file_per_table = ON in our MySQL config. So I dropped a **big** table, confident that the file representing this table would be deleted and the disk space freed immediately. The file does seem to have been deleted (it no longer exists in the database directory under datadir, while the files for all other tables do), but in the munin disk-usage graph I don't see the disk space being freed (my fault for not noting disk usage before dropping the table). Also, **ibdata1** seems quite big for this situation (~200 GB). 1. Is there a way to get some additional info about the old table that was dropped? 2. Is there a way to reduce disk usage without dumping and then restoring the DB: maybe create a new table with the old name (that was dropped) and try ALTERing it? **UPDATE:** 1) ~100-110 GB, or 20% of total disk space (SSD). 2) files_ | 124151382016. 3) 10 tables, but only one of them is big. 4) Prehistory: we had a big table (busy with inserts/updates/reads) that kept growing, but some data in it became old and had to be deleted, so we ran delete queries in an infinite loop (DELETE FROM files WHERE condition LIMIT 1000-100000; I don't know how many rows were deleted with this method, but Innodb_rows_deleted was approximately 600 million, and Innodb_rows_inserted nearly 1 billion or so). We deleted at night when load is very low. After we calculated that deleting would take too much time, we used Percona pt-archiver to move only the data we need from the old table to a new table (files_), and then dropped the old table. Yes, the old table had foreign key constraints to a separate table ("ON DELETE CASCADE ON UPDATE CASCADE"; the new table has no constraints), and both the old and the current table have a UNIQUE KEY on one field. I am also now taking a backup of the current DB with xtrabackup to check what takes up so much space in the ibdata1 file using the innodb_space tool. It seems the only possible way is to make a logical backup and then import it; the SQL file is 40 GB, so it would not be fast despite the server being pretty powerful (128 GB RAM). Thank you!
user2746626 (19 rep)
Jun 1, 2015, 01:21 AM • Last activity: Dec 21, 2024, 02:03 AM
0 votes
1 answer
41 views
How can I apply replicated transactions one by one in MySQL?
I have one source server and one replica server, with SOURCE_DELAY set to 30 minutes; replication is GTID-based and the MySQL version is 8.0.24. 1. If I accidentally drop table A, can I apply transactions up to just before the drop transaction? 2. What is the best solution when I accidentally drop a table and want to restore it with minimum data loss? I tried restoring the table from the replica's copy, which only has the data from 30 minutes earlier, so the most recent 30 minutes of data are missing.
제임스으 (1 rep)
Dec 4, 2024, 12:55 AM • Last activity: Dec 4, 2024, 10:16 PM
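With GTID-based replication, one way to stop the delayed replica's applier just short of the bad transaction is START REPLICA ... UNTIL SQL_BEFORE_GTIDS. A hedged sketch (the GTID shown is a placeholder; the real one has to be read from the source's binary log first):
```sql
-- On the delayed replica: halt the applier, then run it only up to,
-- but not including, the transaction that dropped the table.
STOP REPLICA SQL_THREAD;

-- Placeholder GTID: substitute the GTID of the DROP TABLE transaction,
-- found with mysqlbinlog / SHOW BINLOG EVENTS on the source.
START REPLICA SQL_THREAD
    UNTIL SQL_BEFORE_GTIDS = '3e11fa47-71ca-11e1-9e33-c80aa9429562:199';

-- Once the applier stops at that point, the table can be copied back to
-- the source before replication is resumed normally.
```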
1 vote
2 answers
107 views
Can a dropped table be recovered from Always On availability group?
Hi, I have SQL Server 2016 with Always On Availability Groups. I dropped a table on the "active" node. Can I recover the table from the other node, or is the drop synchronized automatically, so the table is dropped on the other node as well?
user547 (353 rep)
Sep 16, 2024, 01:58 PM • Last activity: Sep 16, 2024, 04:17 PM
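The DDL is part of the database's log stream, so the drop is redone on every secondary almost immediately and the other node will not have the table either. Recovery normally means restoring a copy of the database from backups to a point just before the drop; a sketch along those lines (file names, logical names, and times are illustrative):
```sql
-- Restore a side-by-side copy up to just before the accidental DROP,
-- then copy the table back into the production database.
RESTORE DATABASE MyDb_Recovery
    FROM DISK = N'C:\Backups\MyDb_full.bak'
    WITH MOVE N'MyDb'     TO N'C:\Data\MyDb_Recovery.mdf',
         MOVE N'MyDb_log' TO N'C:\Data\MyDb_Recovery_log.ldf',
         NORECOVERY;

RESTORE LOG MyDb_Recovery
    FROM DISK = N'C:\Backups\MyDb_log.trn'
    WITH STOPAT = '2024-09-16 13:50:00', RECOVERY;
```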
1 vote
1 answer
5824 views
Dropping PostgreSQL partition is blocked as long as a client is reading data from any partition
I have a simple partitioned table in PostgreSQL 14, like this: create table entity ( dataset_id integer not null references dataset (id), ... ) partition by list (dataset_id); create table entity_1 ( like entity including constraints including defaults, constraint entity_1_pkey primary key (entity_id), ); alter table entity attach partition entity_1 for values in (1); The reason for creating a partition "detached" first and then attaching it in a separate statement is to avoid taking an exclusive lock on the parent table (entity), which would block creating a partition as long as a client was reading from any other partition. This was the solution suggested by **Laurenz Albe** in https://stackoverflow.com/a/67016755/1536933 and it works great for creating partitions. Unfortunately, I have the same problem when *dropping* partitions: as long as a client is reading data from any partition I cannot drop any other partition: not only is drop table entity_1 blocked, but even alter table entity detach partition entity_1 concurrently is blocked! I cannot see any relevant lock in pg_locks for the "detach concurrently" statement (nothing with granted=false), but the pgAdmin dashboard shows Wait event: Lock: virtualxid and Blocking PIDs: (pid of the reading process) What can I do to drop partitions while data is being read from other partitions? Surely this should be possible?
EM0 (250 rep)
May 31, 2022, 02:08 PM • Last activity: Jun 25, 2024, 02:37 PM
0 votes
1 answer
134 views
Postgres drop table (which has no rows) has taken > 20 minutes and isn't finished yet
Before executing the `drop table MY_TABLE`, I checked `count(*) from MY_TABLE` and verified that the row count is 0. There exist no foreign key constraints on other tables which reference MY_TABLE. Yet the drop table query has been running for 23 minutes now. Can someone suggest possible causes?
Ben R. (101 rep)
Jun 6, 2024, 09:15 AM • Last activity: Jun 6, 2024, 12:06 PM
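DROP TABLE has to take an ACCESS EXCLUSIVE lock, so even a single session idling in a transaction that once read MY_TABLE will make it wait indefinitely; the row count is irrelevant. A quick way to see who is in the way, sketched with the standard catalog views:
```sql
-- Find the session running the DROP and the sessions blocking it
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       wait_event_type,
       query
FROM   pg_stat_activity
WHERE  query ILIKE 'drop table%';

-- Inspect the blockers' state/query by pid, and if appropriate:
-- SELECT pg_terminate_backend(<blocking_pid>);
```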
-1 votes
1 answer
19859 views
What is the correct way to check for and delete a temp table?
Approach 1: IF OBJECT_ID('tempdb..#MyTempTbl') IS NOT NULL DROP TABLE #MyTempTbl; Approach 2: IF EXISTS (SELECT * FROM [tempdb].[sys].[objects] WHERE [name] = N'#MyTempTbl') DROP TABLE [#MyTempTbl]; What is the correct way to check for and delete a temp table? The context is a stored procedure invoked by an agent job. I have tried querying [tempdb].[sys].[objects] and noticed that a global temp table keeps the same name, whereas a local temp table gets a name padded with underscores at the end, like MyTempTbl______. So I was wondering whether there is a standard way to check if a temp table exists and, if so, to drop it; I am looking for syntax that works for both local and global temp tables.
variable (3590 rep)
Jan 31, 2023, 03:27 PM • Last activity: Mar 15, 2024, 09:25 AM
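For what it's worth, OBJECT_ID('tempdb..#name') resolves the session's own local temp table despite the underscore-padded name stored in tempdb.sys.objects, so it already covers both local and global temp tables; on SQL Server 2016 and later there is an even shorter form. A small sketch:
```sql
-- SQL Server 2016+: shortest form, valid for local and global temp tables
DROP TABLE IF EXISTS #MyTempTbl;
DROP TABLE IF EXISTS ##MyGlobalTempTbl;

-- Earlier versions: OBJECT_ID handles the padded internal name for local
-- temp tables, so the same pattern works for both kinds
IF OBJECT_ID('tempdb..#MyTempTbl') IS NOT NULL
    DROP TABLE #MyTempTbl;
```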
5 votes
3 answers
13788 views
Speed difference between Drop table and Truncate table in Postgres
I currently have a program that inserts into a database by creating temp tables, filling the tables, then merging that data into the main tables, and then dropping the tables and doing it all again. I'm wondering what the speed difference would be if, instead of dropping and re-creating, I just truncated.
Francis Lewis (150 rep)
Mar 25, 2019, 09:51 PM • Last activity: Nov 13, 2023, 04:12 AM
20 votes
7 answers
119101 views
How to drop multiple tables with common prefix in one query?
I'm using Microsoft SQL Server 2008. My question is: how can I drop multiple tables with a common prefix in one query? The table names look something like: LG_001_01_STLINE, LG_001_02_STFICHE.
Cell-o (1106 rep)
Sep 8, 2011, 02:18 PM • Last activity: Jul 5, 2023, 10:56 AM
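One commonly used approach on SQL Server is to build a single DROP TABLE statement from sys.tables and execute it dynamically. A hedged sketch (print and review the generated statement before letting it run):
```sql
-- Concatenate every matching table into one "DROP TABLE a, b, ..." statement.
-- The ESCAPE clause keeps the underscores in the prefix from acting as
-- single-character wildcards.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql + N', ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
FROM   sys.tables  AS t
JOIN   sys.schemas AS s ON s.schema_id = t.schema_id
WHERE  t.name LIKE N'LG!_001!_%' ESCAPE '!';

IF @sql <> N''
BEGIN
    SET @sql = N'DROP TABLE ' + STUFF(@sql, 1, 2, N'');
    PRINT @sql;                  -- review first
    EXEC sys.sp_executesql @sql; -- then execute
END
```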
1 vote
1 answer
584 views
How to easily drop a table referencing a very busy table?
Let's say I have these two tables:
```sql
CREATE TABLE users (
	id_user serial primary key,
	name text NOT NULL
);
```
```sql
CREATE TABLE useless_table (
	id serial primary key,
	id_user int4 NOT NULL,
	CONSTRAINT useless_table_id_user_fk FOREIGN KEY (id_user) REFERENCES users(id_user)
);
```
users is the most congested table in the whole database, with queries running non-stop, including long and important ones, so I just cannot kill them all. I want to drop useless_table, which, as you can see, has a foreign key referencing users, and every time I issue the DROP command it ends up in a lock conflict with even the simplest SELECT on users (there aren't any queries running against useless_table). I even tried dropping just the foreign key on useless_table, but it takes the same locks as dropping the table. Maybe I'm a dreamer, but is there a way to drop useless_table without it conflicting with everything? I'm thinking about NOT VALID foreign keys, disabling triggers, deferrable constraints, or things of that type, but I'm not very savvy about them and don't want to make a mess using them. Any suggestions?
Sotis (328 rep)
Apr 17, 2023, 01:17 PM • Last activity: Apr 17, 2023, 02:47 PM
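Dropping the table (or just its foreign key) still has to take a short-lived lock that conflicts with queries on users; the real problem is that the DROP queues behind long-running statements and then blocks everything that arrives after it. A common mitigation is to bound the wait with lock_timeout and retry until the drop slips in between queries, roughly like this:
```sql
-- Give up quickly instead of queueing behind long queries on users;
-- retry the DROP until it acquires its locks in a quiet moment.
SET lock_timeout = '200ms';

DROP TABLE useless_table;  -- on "canceling statement due to lock timeout", retry

RESET lock_timeout;
```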
2 votes
0 answers
790 views
MariaDB: Terminate long-running cleanup process?
This is a follow up to my [previous question](https://dba.stackexchange.com/questions/325259/mariadb-large-insert-select-query-failed-without-generating-an-error) about a long-running INSERT SELECT query that may have exceeded the max_statement_time and was Killed. It has been in the Reset for next command state for 3.5 days. I think it is rolling back changes on the table I was inserting into. However, that table was empty to begin with, and I would rather terminate this cleanup process, drop the table, and start fresh than continue waiting for it to finish. But I do not know if that would be safe for the general health of the database. The table that I was copying into had no keys, and thus no foreign_key constraints. It was a bare table that I was planning to index once populated. What is the quickest way for me to get back to work without doing more harm?
Id   User         Host           db Command    Time                   State
10237   root    localhost   singlecell  Killed  300819  Reset for next command
Chris Cox (43 rep)
Mar 28, 2023, 02:30 PM
1 vote
1 answer
816 views
Need PL/SQL code to drop many partitions in one single SQL statement
I have a requirement to drop historical partitions that have not been dropped yet. The requirement is to drop multiple partitions in one step instead of one by one, like:

ALTER TABLE sales DROP PARTITION sales_q1_2008, sales_q2_2008, sales_q3_2008, sales_q4_2008 UPDATE GLOBAL INDEXES;

instead of:

ALTER TABLE sales DROP PARTITION sales_q1_2008 UPDATE GLOBAL INDEXES;
ALTER TABLE sales DROP PARTITION sales_q2_2008 UPDATE GLOBAL INDEXES;
ALTER TABLE sales DROP PARTITION sales_q3_2008 UPDATE GLOBAL INDEXES;

and so on. The app team has this code:

CREATE OR REPLACE PROCEDURE Test_user.DROP_PARTITIONS (p_table_name in varchar2, p_partition_key in number) is
  pragma autonomous_transaction;
begin
  for cur in (
    select partition_name
    from user_tab_partitions
    where table_name = p_table_name
      and substr(partition_name,1,8) <= 'P_'||trim(to_char(p_partition_key,'000000'))
    order by partition_position desc
  ) loop
    -- wait for processes which might have a lock on the partition to drop (max 10 min)
    execute immediate 'lock table '||p_table_name||' partition ('||cur.partition_name||') in exclusive mode wait 600';
    execute immediate 'alter table '||p_table_name||' drop partition '||cur.partition_name||' UPDATE GLOBAL INDEXES';
  end loop;
end;
/

But I guess this does a serial drop of each partition, issuing many ALTER statements instead of one statement in which all the necessary partitions are dropped. How can we adapt this?
Dhiren Singh (13 rep)
Mar 23, 2023, 03:08 PM • Last activity: Mar 23, 2023, 09:04 PM
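Oracle 12.2 and later accept a list of partitions in a single ALTER TABLE ... DROP PARTITION, so the loop can be collapsed into one statement built with LISTAGG. A hedged sketch adapted from the procedure above (the LOCK TABLE wait step is omitted for brevity):
```sql
CREATE OR REPLACE PROCEDURE Test_user.DROP_PARTITIONS (
    p_table_name    IN VARCHAR2,
    p_partition_key IN NUMBER
) IS
    PRAGMA AUTONOMOUS_TRANSACTION;
    v_list VARCHAR2(32767);
BEGIN
    -- Build "p1, p2, p3, ..." from all partitions below the cut-off
    SELECT LISTAGG(partition_name, ', ')
             WITHIN GROUP (ORDER BY partition_position)
      INTO v_list
      FROM user_tab_partitions
     WHERE table_name = p_table_name
       AND SUBSTR(partition_name, 1, 8) <= 'P_' || TRIM(TO_CHAR(p_partition_key, '000000'));

    IF v_list IS NOT NULL THEN
        -- One multi-partition drop (requires Oracle 12.2 or later)
        EXECUTE IMMEDIATE 'alter table ' || p_table_name ||
                          ' drop partition ' || v_list ||
                          ' update global indexes';
    END IF;
END;
/
```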
0 votes
1 answer
95 views
MySQL: Dropping tables with a specific prefix from the database using the terminal
I am trying to delete database tables from phpMyAdmin using a terminal (Guake Terminal) based on a specific table prefix. I am trying to delete tables whose names begin with the prefix **wordpress2_**. This is the format of the command I used, but it does not seem to be working. Could you possibly help me with this? mysql -B DatabaseName -u DatabaseUser -pDatabasePassword -e "SELECT CONCAT('DROP TABLE ', TABLE_SCHEMA, '.', TABLE_NAME, ';') FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME LIKE 'wordpress2_%' AND TABLE_SCHEMA = 'DatabaseName';" Thank you very much; your help is highly appreciated.
SkyCross12 (1 rep)
Feb 8, 2023, 04:06 AM • Last activity: Feb 8, 2023, 05:13 AM
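The SELECT in that command only prints the DROP statements; nothing executes them. One way to keep it entirely inside MySQL is to build a single statement with GROUP_CONCAT and run it through a prepared statement, roughly like this (assumes at least one matching table exists):
```sql
-- Raise the GROUP_CONCAT limit so a long list of tables is not truncated
SET SESSION group_concat_max_len = 1024 * 1024;

-- Build one "DROP TABLE `db`.`t1`, `db`.`t2`, ..." statement
SELECT CONCAT('DROP TABLE ',
              GROUP_CONCAT(CONCAT('`', TABLE_SCHEMA, '`.`', TABLE_NAME, '`')))
  INTO @drop_sql
  FROM INFORMATION_SCHEMA.TABLES
 WHERE TABLE_SCHEMA = 'DatabaseName'
   AND TABLE_NAME LIKE 'wordpress2\_%';

-- Execute it (this step fails with a NULL statement if nothing matched)
PREPARE stmt FROM @drop_sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```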
60 votes
4 answers
155943 views
How to check foreign keys related to a table
How can I see the foreign keys related to a table in MySQL? **Background**: I wanted to drop a table in MySQL which has a foreign key constraint. When I do it I get this: Error Code: 1217. Cannot delete or update a parent row: a foreign key constraint fails. How can I drop the foreign keys related to the table while leaving the others?
Mr.X (921 rep)
May 24, 2015, 03:48 PM • Last activity: Dec 8, 2022, 10:13 AM
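The referencing constraints are visible in information_schema, which makes it possible to list them and drop them one by one before dropping the parent table. A small sketch (schema and table names are placeholders):
```sql
-- Every foreign key in other tables that points at `mydb`.`mytable`
SELECT table_schema,
       table_name,
       column_name,
       constraint_name
  FROM information_schema.key_column_usage
 WHERE referenced_table_schema = 'mydb'
   AND referenced_table_name   = 'mytable';

-- Each listed constraint can then be removed individually:
-- ALTER TABLE `mydb`.`child_table` DROP FOREIGN KEY `constraint_name`;
```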
3 votes
1 answer
2714 views
Can't run a simple query or drop a table due to an out of memory error, but pg_locks returns no locks
I received this out of memory error when trying to drop a large table (70 GB including indexes) but couldn't identify the root cause from the pg_locks view:
DROP TABLE big_table;

ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
I had this error several times when updating this table and have increased max_locks_per_transaction from 64 to 100. The update went smoothly and I could run some queries on the table with no issue for a while. But a day later, I couldn't do anything with this table anymore. For some reason, now I have to drop this table but can't do it because of this error, even after I increased max_locks_per_transaction to 150 (the current max_connections is 20). The strange thing is that when I tried to find out whether there were any locks, I didn't notice any unusual queries.
SELECT relation::regclass, * FROM pg_locks WHERE not granted;
(0 rows)
I could try increasing max_locks_per_transaction again, but it seems there's another issue that I need to address. Also, this might not be relevant, but just in case it's helpful: the script that updates the table in question (big_table) had a query against another table which is updated hourly. Anyway, the transaction to update big_table only ran once and had ended by the time I tried to drop it. Both tables are hypertables created with TimescaleDB. Below is the full view from pg_locks in case it's helpful.
SELECT * FROM pg_locks;

-[ RECORD 1 ]------+----------------
locktype           | relation
database           | 16384
relation           | 12143
page               |
tuple              |
virtualxid         |
transactionid      |
classid            |
objid              |
objsubid           |
virtualtransaction | 4/33
pid                | 344280
mode               | AccessShareLock
granted            | t
fastpath           | t
-[ RECORD 2 ]------+----------------
locktype           | virtualxid
database           |
relation           |
page               |
tuple              |
virtualxid         | 4/33
transactionid      |
classid            |
objid              |
objsubid           |
virtualtransaction | 4/33
pid                | 344280
mode               | ExclusiveLock
granted            | t
fastpath           | t
Hannah (31 rep)
Aug 4, 2021, 07:47 AM • Last activity: Nov 25, 2022, 07:01 PM
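One detail worth noting for TimescaleDB hypertables: a single DROP TABLE must lock every chunk (and every chunk index) in one transaction, so the shared lock table can overflow even though pg_locks shows nothing waiting beforehand. Counting the chunks gives a rough idea of how many lock slots the drop will need; a hedged sketch using the plain inheritance catalog:
```sql
-- Chunks are child tables of the hypertable, so pg_inherits counts them
SELECT count(*) AS chunk_count
FROM   pg_inherits
WHERE  inhparent = 'big_table'::regclass;

-- The shared lock table holds roughly
--   max_locks_per_transaction * (max_connections + max_prepared_transactions)
-- entries, so either raise max_locks_per_transaction accordingly or shrink
-- the table first by dropping chunks in smaller batches, e.g. (TimescaleDB 2.x):
-- SELECT drop_chunks('big_table', older_than => now());
```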
11 votes
2 answers
44079 views
DROP TABLE ... CASCADE does not drop any dependent table
I am still relatively new to Postgres, and I was just finishing an assignment when I decided to try something: dropping a table which other tables depend on through a foreign key relationship. My expectation was that any table with a foreign key reference to the table I am dropping would have (at least) all of its rows deleted and then be dropped as well. This was not the case. The tables are still there, and even though I specified CASCADE, nothing happened to the rows of those tables. However, using TRUNCATE ... CASCADE on the referenced table does in fact remove the rows in the other tables. ***EDIT:*** OK, I understand it does this because this is the way it is specified in the documentation, but my question now is: when is this ever the expected behaviour? - DROPping a domain with CASCADE removes every column in any table that uses that domain. - DROPping a table with a view attached to it will delete the view. **Why does it make sense that dropping a table which is referenced by other tables does not delete the tables that reference it as well? Convenience?**
smac89 (215 rep)
Oct 7, 2015, 07:44 AM • Last activity: Oct 5, 2022, 06:26 AM
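For what it's worth, CASCADE removes the objects that cannot exist without the dropped table (views built on it and the foreign key *constraints* pointing at it), but never the referencing tables or their rows. A short illustration:
```sql
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child  (id int PRIMARY KEY,
                     parent_id int REFERENCES parent (id));
CREATE VIEW parent_view AS SELECT * FROM parent;

-- Cascades to the dependent view and to child's foreign key constraint,
-- but child itself (and its rows) survives
DROP TABLE parent CASCADE;

SELECT count(*) FROM child;   -- still works: the table was not dropped
```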
1 vote
2 answers
1364 views
Speed up DROP of a huge replicated MySQL table
I have a **huge InnoDB** table customer_users_old (**150 GB**) that is no longer in use in the database; it was renamed from customer_users. My setup is a **master-slave replication** of MySQL 5.5 with innodb_file_per_table enabled, so the table has its own files. As the DROP command takes a lot of time, I'm afraid it could have some impact on the production database. So I'm thinking of **disabling unique and foreign key checks for the session**, as suggested in DROP TABLE on a huge InnoDB table: SET FOREIGN_KEY_CHECKS=0; SET UNIQUE_CHECKS=0; And **disabling the binary log** to drop the table *first on the master* and *then on the slave*, as suggested in Deleting data from a large table with replication: SET sql_log_bin = 0; Will this speed up the process and minimize the impact on the servers' load?
pconcepcion (137 rep)
May 3, 2017, 10:46 AM • Last activity: Sep 19, 2022, 06:21 AM
1 vote
1 answer
1329 views
Postgres materialized views keep getting dropped every time the underlying view is updated
I am using Postgres as my DB and I have a bunch of materialized views that are created from a view. The MVs are in schema B and the views are in schema A. They are created like create SchemaB.mv1 as select * from SchemaA.vw1. However, every time the underlying view is updated, my materialized view is dropped. The underlying view is created through dbt using the run command. According to this documentation, dbt drops and re-creates the views it manages. Because of this drop-and-recreate process, I am wondering if this is what causes my MVs to be dropped. If the underlying table/view of an MV is dropped, is the MV also dropped?
user16573033 (11 rep)
May 30, 2022, 06:25 PM • Last activity: May 30, 2022, 08:27 PM
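That is the expected behaviour: a materialized view records a dependency on every relation in its defining query, so dropping the underlying view (which requires CASCADE when an MV depends on it) removes the MV along with it. If dbt's run replaces SchemaA.vw1 by dropping it, that would explain the missing MVs. A small reproduction with placeholder names:
```sql
CREATE TABLE base_data (id int);
CREATE VIEW vw1 AS SELECT * FROM base_data;
CREATE MATERIALIZED VIEW mv1 AS SELECT * FROM vw1;

-- A plain DROP VIEW is refused because mv1 depends on vw1;
-- with CASCADE the drop succeeds and mv1 disappears with it.
DROP VIEW vw1 CASCADE;
```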
Showing page 1 of 20 total questions