Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
1420
views
MySQLdump Exclude master-data
Is it possible to exclude the master data from a mysql dump? From the manual it seems like the only options are commented out or present. I don't need the data at all because it is going to a static system. Commented out will work but just wondering if `0` or some other value would make it not present?
>Use this option to dump a master replication server to produce a dump file that can be used to set up another server as a slave of the master. It causes the dump output to include a CHANGE MASTER TO statement that indicates the binary log coordinates (file name and position) of the dumped server. These are the master server coordinates from which the slave should start replicating after you load the dump file into the slave.
>If the option value is 2, the CHANGE MASTER TO statement is written as an SQL comment, and thus is informative only; it has no effect when the dump file is reloaded. If the option value is 1, the statement is not written as a comment and takes effect when the dump file is reloaded. If no option value is specified, the default value is 1.
-https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html#option_mysqldump_master-data
My plan was to use:
mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=0 ...
but from the above entry that would put the CHANGE MASTER TO as an uncommented command.
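For what it's worth, the difference shows up in the dump header itself; a minimal check (a sketch, assuming binary logging is enabled and credentials come from ~/.my.cnf; the coordinates shown in the comments are illustrative placeholders):
# --master-data=2 writes the coordinates as an SQL comment only:
mysqldump --single-transaction --master-data=2 mydb | grep 'CHANGE MASTER TO'
#   -- CHANGE MASTER TO MASTER_LOG_FILE='binlog.000042', MASTER_LOG_POS=12345;
# --master-data=1 writes it as an executable statement:
mysqldump --single-transaction --master-data=1 mydb | grep 'CHANGE MASTER TO'
#   CHANGE MASTER TO MASTER_LOG_FILE='binlog.000042', MASTER_LOG_POS=12345;
# Leaving --master-data off entirely produces no CHANGE MASTER TO line at all.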
user3783243
(157 rep)
Feb 28, 2020, 04:59 PM
• Last activity: Aug 6, 2025, 08:08 AM
1
votes
1
answers
819
views
MySQLdump leads to exceeding max connections. Skip-Quick as a solution?
Every night I do a full mysqldump of a 17G DB in MySQL 5.7.32. This worked for years; now I am doing some heavy insert load during the night, which caused the connections to rise to max_connections at exactly the backup time, which led to connection errors.
As the server has enough RAM, 64G (30G free), I increased max_connections from 150 to 300 as a first reaction.
However, looking at the dump command I found the option --quick (also enabled by default), which tells me that it is exporting row by row.
--single-transaction --routines --quick --compact
I am thinking of changing this to --skip-quick, but I don't dare to change it since I would need to check the restore again and that is very time consuming.
Looking at the connections over time I also noticed that there are some interruptions around that time period. So maybe connections stack up because something blocks during the mysqldump?
The MySQL error log shows a large amount of the following error, although not at these points in time but continuously throughout the day:
Aborted connection 63182018 to db: 'mydb' user: 'test' host: 'myhost' (Got an error reading communication packets)
How would you approach this problem?
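As a first step it may help to confirm whether threads really pile up while the dump runs; a small sampling sketch (the monitor credentials and the 10-second interval are placeholders):
# Sample connection/thread counters every 10 seconds around backup time.
while true; do
  date
  mysql -u monitor -p'secret' -e "SHOW GLOBAL STATUS WHERE Variable_name IN ('Threads_connected','Threads_running','Max_used_connections');"
  sleep 10
done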



merlin
(323 rep)
Dec 27, 2020, 08:08 AM
• Last activity: Aug 5, 2025, 02:05 AM
0
votes
1
answers
9964
views
mysqldump Unknown table 'COLUMN_STATISTICS' in information_schema (1109)
I am having some difficulty with my MySQL database.
It appears the server suffered from a power outage during the night. As a result, MySQL wouldn't start when it reboots.
I viewed the error log and saw:
> [ERROR] InnoDB: Ignoring the redo log due to missing MLOG_CHECKPOINT
> between the checkpoint 322393393 and the end 322394369.
> [ERROR] InnoDB: Plugin initialization aborted with error Generic error
> [ERROR] Plugin 'InnoDB' init function returned error.
> [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
> [ERROR] Failed to initialize builtin plugins.
> [ERROR] Aborting

MySQL will only start with
innodb_force_recovery=6
At this point I am trying to save the data by running from MySQL Workbench:
mysqldump.exe --defaults-file=#### --user=#### --host=localhost --protocol=tcp --port=59452 --default-character-set=utf8 --single-transaction=TRUE --skip-triggers "BusinessManager"
Workbench shows me this error:
>mysqldump: Couldn't execute 'SELECT COLUMN_NAME, JSON_EXTRACT(HISTOGRAM, '$."number-of-buckets-specified"') FROM information_schema.COLUMN_STATISTICS WHERE SCHEMA_NAME = 'BusinessManager' AND TABLE_NAME = 'Activities';': Unknown table 'COLUMN_STATISTICS' in information_schema (1109)
>
>Operation failed with exitcode 2
However, the MySQL log shows:
> [ERROR] InnoDB: Failed to find tablespace for table BusinessManager.Activities in the cache. Attempting to load the tablespace with space id 195
The log error is repeated for each table and the SQL dump fails.
Is there any way to save the data in the database and bypass these errors?
Thanks!
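As a side note, the COLUMN_STATISTICS query is issued by the mysqldump 8.0 client itself and fails against any pre-8.0 server; a sketch of the commonly used workaround, reusing the options from the command above (the output file name is an assumption):
mysqldump.exe --defaults-file=#### --user=#### --host=localhost --protocol=tcp --port=59452 --default-character-set=utf8 --column-statistics=0 --single-transaction=TRUE --skip-triggers "BusinessManager" > BusinessManager.sql
This only silences the client-side statistics query; the missing-tablespace errors are a separate InnoDB problem that the flag does not address.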
Dave B
(113 rep)
Jul 19, 2018, 01:20 PM
• Last activity: Aug 1, 2025, 02:00 PM
19
votes
7
answers
71951
views
Error while restoring a Database from an SQL dump
I am extremely new to MySQL and am running it on Windows. I am trying to restore a Database from a dumpfile in MySQL. I tried:
mysql -u root -p -h localhost -D database -o < dump.sql
and also
mysql -u root -p -h localhost -D database --binary-mode -o < dump.sql
but this gave me the following:
ERROR at line 1: Unknown command '\☻'.
It is a 500 Mb dump file, and when I view its contents using gVIM, all I can see is expressions and data which is not comprehensible. Also when I try to copy contents from the file to post here, all I can copy is: SQLite format 3
This kind of seems strange.
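That SQLite format 3 string is a strong hint that the file may not be mysqldump output at all; a quick check of the file header (a sketch, assuming a Unix-like shell or Git Bash is available):
# mysqldump output starts with a plain-text header ("-- MySQL dump ..."),
# while "SQLite format 3" is the magic string at byte 0 of an SQLite database file.
head -c 100 dump.sql
file dump.sql   # if the 'file' utility is installed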
user1434997
(291 rep)
Jun 18, 2013, 12:05 AM
• Last activity: Jul 30, 2025, 02:05 PM
0
votes
1
answers
433
views
How speed up mysql dump without data?
There are a lot of questions (1, 2, 3) about speeding up MySQL restores. But all of them speed up **data** import. Is there a way to speed up schema table creation? For example, dump independent tables in parallel and restore them in parallel too.
Btw, SET FOREIGN_KEY_CHECKS=0 is not a solution, because the standard mysqldump already creates the dump file with that statement.
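Whether this actually helps depends on version and storage, but a minimal sketch of the parallel-DDL idea (the database names mydb/newdb and the 8-way parallelism are assumptions):
# Dump only the table definitions, one file per table.
for t in $(mysql -N -e "SHOW TABLES" mydb); do
  mysqldump --no-data mydb "$t" > "schema_$t.sql"
done
# Re-create the tables with up to 8 parallel clients; creation order ignores
# foreign keys, so FK checks are disabled per session.
ls schema_*.sql | xargs -P 8 -I{} \
  sh -c 'mysql --init-command="SET FOREIGN_KEY_CHECKS=0" newdb < "{}"'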
Cherry
(129 rep)
Mar 18, 2016, 04:25 AM
• Last activity: Jul 23, 2025, 08:04 PM
1
votes
1
answers
56
views
mariadb docker fails to Initialize database contents from dump file
I have a Nextcloud server set up that uses docker image mariadb:10.11. I use mysqldump to back up the database. Where I am having trouble is trying to restore that to a test system as described here https://hub.docker.com/_/mariadb#initializing-the-database-contents .
For the restore I added the extra service "adminer" so I could check it out. I also added a new volume to mount the dump file:
- /data/blue/nextcloud/db_backup/db_dump_nextcloud.sql:/docker-entrypoint-initdb.d/db_dump_nextcloud.sql
I then started it with docker-compose up -d adminer. On the first try I saw that there were 52 (I think) tables in the nextcloud database so I thought we were good, but when starting nextcloud it was giving me errors about missing tables.
A few days later I got around to trying it again and now it only gets part of one table loaded. The last time or two I tested by deleting the database volume and then just starting the database with docker-compose up db. I think in the logs I am seeing the data being dumped into one of the tables and then the Lost connection error:
...
db-1 | ('3507453_0','3507453_0',1,3507453,'�\0\0\0\0\0\0\0\0;\0\0\0 c�ZB�J@K�4�T@ �c�J@�]K��T@�>W[��J@�H�}�T@��k ��J@�A�f�T@33333�J@?�ܵ��T@�1��%�J@z�):��T@���<,�J@;pΈ��T@1�*▒��J@�,C��T@/�$��J@��b��T@����J@����M�T@��ڊ��J@b��4��T@+�٦J@�ZӼ��T@|a2U�J@�Q�@��{���J@�9#J{�T@�HP��J@��g���T@4��7��J@�x�&1�T@�����J@��k ��T@�C�l��J@�c�]K�T@gDio��J@2U0*��T@-C���J@_�Q�T@d�]KȯJ@��|г�T@ףp=\n�J@M��St�T@�3���J@�a2�T@-����J@�N@a�T@Ԛ���J@��C�l�T@Gr���J@K�=�U@P�▒sײJ@���oU@\'�W�J@�HPU@6<�R��J@�;NU@���{��J@�E��U@������J@k+���U@�2ı.�J@ޓ��ZU@�A�f�J@�p=\n�U@���ZӬJ@ףp=\nU@�rh���J@��C�lU@ݵ�|ЫJ@�c]�FU@��C��J@�ܵ�|U@�$���J@$(~��U@� ��J@Gr��U@���~��J@▒&S�U@�d�`T�J@t���U@�o_ΩJ@\\���(U@�&S�J@8��d�U@▒&S��J@=\nףpU@�W�2ĩJ@���QIU@S��:�J@���<,U@ףp=\n�J@S�!�uU@���H�J@S�!�uU@�٬�\\�J@?�ܵU@�e�c]�J@
+�U�Zd�J@��|?5U@�z�G�J@A�c�]�T@�HP��J@:#J{��T@�����J@F�����T@����J@?5^�I�T@�����J@�K7�A�T@��DؠJ@$�����T@x
$(�J@-!�l�T@ c�ZB�J@K�4�T@')
db-1 | --------------
db-1 |
db-1 | ERROR 2013 (HY000) at line 316271: Lost connection to server during query
db-1 | /usr/local/bin/docker-entrypoint.sh: line 298: 88 Segmentation fault (core dumped) "$@" --skip-networking --default-time-zone=SYSTEM --socket="${SOCKET}" --wsrep_on=OFF --expire-logs-days=0 --skip-slave-start --loose-innodb_buffer_pool_load_at_startup=0
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Last binlog file './binlog.000001', position 272159596
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: 128 rollback segments are active.
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Starting in background the rollback of recovered transactions
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Rollback of non-prepared transactions completed
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: log sequence number 206149883; transaction id 298
db-1 | 2025-07-22 2:31:34 0 [Note] Plugin 'FEEDBACK' is disabled.
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Cannot open '/var/lib/mysql/ib_buffer_pool' for reading: No such file or directory
db-1 | 2025-07-22 2:31:34 0 [Note] Recovering after a crash using binlog
db-1 | 2025-07-22 2:31:34 0 [Note] Starting table crash recovery...
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Starting recovery for XA transactions...
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Transaction 295 in prepared state after recovery
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Transaction contains changes to 3086 rows
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: 1 transactions in prepared state after recovery
db-1 | 2025-07-22 2:31:34 0 [Note] Found 1 prepared transaction(s) in InnoDB
db-1 | 2025-07-22 2:31:34 0 [Note] Crash table recovery finished.
db-1 | 2025-07-22 2:31:34 0 [Note] Server socket created on IP: '0.0.0.0'.
db-1 | 2025-07-22 2:31:34 0 [Note] Server socket created on IP: '::'.
db-1 | 2025-07-22 2:31:34 0 [Note] mariadbd: ready for connections.
db-1 | Version: '10.11.13-MariaDB-ubu2204-log' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
The relevant part of my compose file is as follows. The db service is set up the same as on the source container except the new volume.
services:
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
    depends_on:
      - db
  db:
    image: mariadb:10.11
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW --log_bin_trust_function_creators=1
    restart: always
    volumes:
      - db:/var/lib/mysql
      # For mysqldump conf file for backups.
      - /home/monterey/confg/restic/db-env.cnf:/app/db-env.cnf:ro
      - /data/blue/nextcloud/db_backup/db_dump_nextcloud.sql:/docker-entrypoint-initdb.d/db_dump_nextcloud.sql
    environment:
      - MARIADB_AUTO_UPGRADE=1
      - MARIADB_DISABLE_UPGRADE_BACKUP=1
    env_file:
      - db.env
  redis:
    image: redis:alpine
    restart: always
  app:
    #image: nextcloud:apache
    build: ./nextcloud
    restart: always
    volumes:
      ...
Edit - I did a --no-data dump and was able to restore that. It gave me 195 tables and took a little over a minute. I did not see it dumping out every line of SQL, so I wonder if, when it was doing that before, it was trying to say there was an error.
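One way to separate "the dump file is bad" from "the init script is dying" (a sketch; the database name nextcloud and the MARIADB_ROOT_PASSWORD variable coming from db.env are assumptions) is to replay the dump into the already-running db service instead of through docker-entrypoint-initdb.d:
# Stream the dump straight into the running MariaDB container.
docker-compose exec -T db sh -c 'exec mariadb -uroot -p"$MARIADB_ROOT_PASSWORD" nextcloud' \
  < /data/blue/nextcloud/db_backup/db_dump_nextcloud.sql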
JohnT
(11 rep)
Jul 22, 2025, 02:45 AM
• Last activity: Jul 22, 2025, 03:46 PM
0
votes
2
answers
156
views
Possibility on huge MySQL database replication
I have a MySQL database of size 2.4TB which has lots of reads and writes happening continuously. I want to replicate this database as a backup.
But it's almost impossible to get a dump from this database given its size. Is it possible to create a slave replica without a dump of the master?
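One common dump-free approach is a physical copy with Percona XtraBackup (a sketch only; paths, the transfer step and the replication setup are placeholders, and the xtrabackup version must match the MySQL version):
# On the master: take a non-blocking physical backup while reads/writes continue.
xtrabackup --backup --target-dir=/backups/full
# Apply the redo log so the copy is consistent.
xtrabackup --prepare --target-dir=/backups/full
# Copy it into the new replica's empty datadir (MySQL stopped), fix ownership, start MySQL.
rsync -a /backups/full/ replica:/var/lib/mysql/
# The binlog coordinates for CHANGE MASTER TO are recorded in
# /backups/full/xtrabackup_binlog_info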
Hasitha
(101 rep)
Jun 12, 2020, 04:18 AM
• Last activity: Jul 20, 2025, 12:08 PM
0
votes
1
answers
159
views
Table data is not getting dumped
This is just for test purposes. I'm taking a backup (InnoDB DB) with --single-transaction, and from another terminal I'm running ALTER TABLE on one of the tables of the same DB on which the dump command is running. The altered table is getting dumped with the altered structure but the DATA is missing. I repeated this test 3 times with the same result. Just wanted to know, is this the way it works? Please give some explanation. Thanks in advance.
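For anyone who wants to reproduce the interaction, a sketch of the test as described (database and table names are placeholders); the key point is that the ALTER TABLE runs while the dump's consistent-snapshot transaction is still open:
# Terminal 1: consistent-snapshot dump of the whole database.
mysqldump --single-transaction testdb > testdb.sql
# Terminal 2, while the dump is still running: concurrent DDL on one of the tables.
mysql -e "ALTER TABLE testdb.t1 ADD COLUMN extra INT;"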
user89830
(73 rep)
May 5, 2016, 07:38 AM
• Last activity: Jul 17, 2025, 11:08 PM
0
votes
2
answers
3041
views
how to dump via mysqldump hugh table into chunks
I have a very huge table, around 100M records and 100 GB in a dump file. When I try to restore it to a different DB I get "sql query lost connection". I want to try to dump this table into chunks (something like 10 chunks of 10 GB) where each chunk will be in a separate table.
What I managed to optimize so far is this:
mysqldump --skip-triggers --compact --no-create-info --single-transaction --quick --max_allowed_packet 1G -h {host} -u {user} -P 3306 -p{password} {my_schema} {table_name} > /mnt/datadir/{table_name}.sql
and now the output is that I have 1 file {table_name}.sql with a size of 100 GB; I want to get 10 files of 10 GB each.
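One way to split on the primary key is mysqldump's --where option (a sketch, assuming a numeric auto-increment key named id; adjust the range size so each slice covers roughly 10M rows):
# Dump 10 chunks of about 10M rows each, selected by id range.
for i in $(seq 0 9); do
  start=$(( i * 10000000 + 1 ))
  end=$(( (i + 1) * 10000000 ))
  mysqldump --skip-triggers --compact --no-create-info --single-transaction --quick \
    -h {host} -u {user} -P 3306 -p{password} {my_schema} {table_name} \
    --where="id BETWEEN $start AND $end" > /mnt/datadir/{table_name}_part$i.sql
done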
Omer Anisfeld
(161 rep)
Jul 7, 2019, 06:29 AM
• Last activity: Jul 7, 2025, 09:07 PM
0
votes
1
answers
170
views
MySQL import silently failing
I'm trying to import a large (30GB) SQL file (generated using mysqldump) into an RDS instance MySQL 5.7 database, from an EC2 instance on the same VPC.
It's importing most without a problem, but missing around 5 tables from the end of the dump.
Looking around for related logs in the AWS console I can't really find much of use, all I'm seeing is this:
2021-02-17T12:16:20.168722Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 10360ms. The settings might not be optimal. (flushed=599 and evicted=179, during the time.)
2021-02-17T12:16:36.461195Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 5988ms. The settings might not be optimal. (flushed=490 and evicted=124, during the time.)
2021-02-17T12:16:46.848239Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 6766ms. The settings might not be optimal. (flushed=447 and evicted=0, during the time.)
2021-02-17T12:17:16.371538Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 10380ms. The settings might not be optimal. (flushed=370 and evicted=240, during the time.)
2021-02-17T12:19:22.153786Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 27942ms. The settings might not be optimal. (flushed=299 and evicted=637, during the time.)
2021-02-17T12:21:19.368279Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 16509ms. The settings might not be optimal. (flushed=361 and evicted=136, during the time.)
2021-02-17T12:23:01.420623Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 5097ms. The settings might not be optimal. (flushed=349 and evicted=125, during the time.)
2021-02-17T12:23:16.836488Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 8285ms. The settings might not be optimal. (flushed=307 and evicted=121, during the time.)
2021-02-17T12:23:52.965471Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 19104ms. The settings might not be optimal. (flushed=299 and evicted=607, during the time.)
----------------------- END OF LOG ----------------------
I'm not really sure where to start looking without any logs. Only thing I can think of is maybe lack of memory, but looking at the stats it still had "some" memory left on the RDS instance while it was importing.
When I check the file the CREATE TABLE statements are definitely there for the missing tables.
Can anyone give any tips on how I can figure out what's going wrong?
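One way to make the failure visible (a sketch; the endpoint, credentials and file names are placeholders): by default the client aborts at the first failed statement and anything written to stderr is easy to lose, so keep it going and capture everything:
# --force continues past individual statement errors; stderr is captured in the log.
mysql --force -h my-rds-endpoint -u admin -p mydb < dump.sql > import.log 2>&1
echo "mysql exit code: $?"
grep -i "error" import.log | head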
BT643
(229 rep)
Feb 17, 2021, 12:33 PM
• Last activity: Jul 5, 2025, 06:05 AM
0
votes
1
answers
194
views
Importing MySQL dumps to multiple MySQL instances in one machine
In case of title is not clear, I would like to explain myself further.
I have 6 MySQL instances in my server.
Let's call them, M1, M2, .., M6.
The reason I have 6 MySQL instances here is that I also have 6 other servers which are hosting other websites. This server (according to my plan) will be acting as a Disaster Recovery Server.
What my plan is;
- Create replication from each other server to instances in DR Server.
- Each instance will be slaves of other servers MySQL replication.
- Phpmyadmin as GUI for my manager.
What I couldn't figure out is how to keep the instances separate from each other.
Anytime I import a DB like this;
mysql -u root -p xxxx -S /var/run/mysqld/xxxx.sock --host xxx.xxx.xx.xxx /var/lib/M1
- M2 > /var/lib/M2
edit for clarification:
My question is, how can I set up multiple MySQL instances on one machine and let each of those MySQL instances be a replica of another server's master? While doing this, when accessing those databases via phpmyadmin, no instance should be able to access another instance's data.
M1 replication can access > M1 master's replication
M2 can access > M2 master's.
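On the import side, the separation can be enforced by always addressing exactly one instance's socket (a sketch; the socket paths and dump file names are placeholders), so a given dump can only ever land in that one instance:
# Seed instance M1 only; M2..M6 are untouched because only M1's socket is used.
mysql -u root -p -S /var/run/mysqld/M1.sock < m1_master_dump.sql
# Same pattern for the next instance.
mysql -u root -p -S /var/run/mysqld/M2.sock < m2_master_dump.sql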

emir ko
(1 rep)
Oct 1, 2021, 09:24 AM
• Last activity: Jul 3, 2025, 12:06 PM
2
votes
1
answers
173
views
MySQL replication backup
How to take MySQL Master-Master and Master-Slave replication backup? I have set up Master-Master replication with a separate slave for each master on Ubuntu.
What happens if,
1. I issue or schedule mysqldump on the master servers.
2. I allow developers to make changes directly, connecting to the master servers with GUI tools (Workbench).
Any help is appreciated. Thank you!
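One common pattern (a sketch only; credentials, database list and retention are placeholders) is to schedule the logical backup on one of the slaves rather than on a master, so developer traffic and the backup never compete on the masters:
# Run on a slave: consistent InnoDB snapshot; --dump-slave=2 records the master's
# binlog coordinates as a comment (note: this option pauses the slave SQL thread
# for the duration of the dump and restarts it afterwards).
mysqldump --single-transaction --routines --triggers --events \
  --dump-slave=2 --all-databases | gzip > /backups/backup_$(date +%F).sql.gz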
user53864
(165 rep)
Sep 9, 2012, 01:41 AM
• Last activity: Jul 2, 2025, 12:08 AM
2
votes
2
answers
2438
views
Importing a large mysql dump while replication is running
So we have a simple master/slave mysql replication running between two CentOS Servers.
The master has multiple databases, e.g.
Database1
Database2
Database3
The issue is we have a mysql dumpfile of a new 60GB database (Database4).
What's the best way to import Database4 without breaking replication?
I was thinking we could stop replication, and import the mysqldump onto both master and slave. Then restart replication, but was hoping there was an alternate way that would minimize downtime.
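One low-downtime variant (a sketch; the file and database names are placeholders, and the empty database must exist on both servers first) is to load the dump on master and slave independently while keeping the master's import out of its binary log, so the 60GB never has to travel through replication:
# On the master: skip binary logging for this import session only (requires SUPER).
( echo "SET sql_log_bin = 0;"; cat Database4.sql ) | mysql -u root -p Database4
# On the slave: import the same file directly.
mysql -u root -p Database4 < Database4.sql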
user125340
(21 rep)
May 22, 2017, 09:58 PM
• Last activity: Jun 25, 2025, 06:08 PM
4
votes
2
answers
1952
views
Restore MySQL dump to a container fails with ERROR 1114 "The table is full"
I am trying to restore a MySQL dump of size around 18GB to another MySQL server, which is running inside a container using this command:
mysql -h example.com -u user -p matomo
It fails with:
ERROR 1114 (HY000) at line 7238: The table 'piwik_log_link_visit_action' is full
Many other small tables are copied successfully, but while copying this table it fails with the above error. The size of this table is more than 2GB.
Based on different suggestions available on Stack Overflow, I tried each one but nothing worked.
I tried adding 'autoextend' to the my.cnf file:
innodb_data_file_path=ibdata1:10M:autoextend
I also tried to increase the tmp_table_size and max_heap_table_size by adding the following parameters to the my.cnf file:
tmp_table_size=2G
max_heap_table_size=2G
Also, I made sure that the server (from where I am running the dump restore command) has enough space (more than 20GB of storage available). But nothing worked.
I tried debugging this more and found that the docker container where MySQL is running has an *overlay* filesystem of size 5GB which starts getting filled, and as soon as it fills up to 100%, I get the above error.
The volume mounted on the container is of more than 30GB size. I am not sure where this *overlay* filesystem comes from in docker. Overlay is something coming from docker I guess, but I'm not sure where I can increase its size.
I can't even go inside the overlay directory and keep deleting or freeing the space. Can anyone please help me here.
### my.cnf file

[mysqladmin]
user=user1
[mysqld]
skip_name_resolve
explicit_defaults_for_timestamp
basedir=/opt/bitnami/mariadb
port=3306
tmpdir=/opt/bitnami/mariadb/tmp
socket=/opt/bitnami/mariadb/tmp/mysql.sock
pid_file=/opt/bitnami/mariadb/tmp/mysqld.pid
max_allowed_packet=256MB
bind_address=0.0.0.0
log_error=/opt/bitnami/mariadb/logs/mysqld.log
character_set_server=utf8
collation_server=utf8_general_ci
plugin_dir=/opt/bitnami/mariadb/plugin
innodb_data_file_path=ibdata1:10M:autoextend:max:10G
max_heap_table_size=2G
tmp_table_size=2G
[client]
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
default_character_set=UTF8
plugin_dir=/opt/bitnami/mariadb/plugin
[manager]
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
pid_file=/opt/bitnami/mariadb/tmp/mysqld.pid
!include /opt/bitnami/mariadb/conf/bitnami/my_custom.cnf
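To confirm what is actually filling the 5GB overlay layer during the restore, it can help to watch disk usage from inside the container (a diagnostic sketch; the container name is a placeholder); anything MariaDB writes outside a mounted volume, such as the tmpdir configured above, lands on the overlay layer:
# Overlay usage vs. the MariaDB tmpdir while the import is running.
docker exec -it mariadb_container df -h / /opt/bitnami/mariadb/tmp
# Largest directories on the container's writable layer.
docker exec -it mariadb_container du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -n 15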
undefined
(151 rep)
Jul 18, 2020, 11:39 AM
• Last activity: Jun 9, 2025, 11:09 AM
1
votes
0
answers
31
views
Cloud SQL: gcloud sql export sql does not include procedures/triggers — alternatives?
### How to export stored procedures and triggers from Cloud SQL without hitting production?
I'm using Google Cloud SQL (MySQL) and want to automate a **nightly/weekly clone** of the production database into staging. The main goal is to keep staging up to date **without impacting prod performance**.
I’ve been using:
gcloud sql export sql my-instance gs://my-bucket/clone.sql.gz \
--database=mydb \
--offload \
--async
This uses a temporary worker VM (serverless export) and avoids load on the main instance — which is great. However, I found that **stored procedures, triggers, and events are missing** in the dump.
Attempts to add --routines, --triggers, or --events fail with:
ERROR: (gcloud.sql.export.sql) unrecognized arguments: --routines
Apparently, gcloud sql export sql **doesn't support exporting routines or triggers at all**, and there's no documented way to include them. Yet the export is still billed at $0.01/GB.
---
### Goal:
Clone a Cloud SQL instance into staging, including:
- Tables and schema
- Data
- Stored procedures / functions
- Triggers
- Events
...without putting load on production.
---
### Options I’ve found:
1. **gcloud sql export sql**
- ✅ Offloads work (zero prod impact)
- ❌ Skips procedures/triggers/events
- ❌ No way to include them
2. **Direct mysqldump --routines --events --triggers on prod**
- ✅ Complete dump
- ❌ Impacts prod (not acceptable)
3. **Run mysqldump on a read-replica**
- ✅ Complete + safe
- ❌ Slightly more setup / cost
---
### Question:
Is using a **read-replica + mysqldump** the only way to do a full logical export (schema + routines + triggers) **without touching prod**?
Any better alternatives or official GCP-supported approaches?
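For the read-replica route, the full logical export is just a regular mysqldump against the replica (a sketch; host, user, database and bucket names are placeholders, and --set-gtid-purged=OFF is often wanted when the file will be replayed into another Cloud SQL instance):
# Dump schema, data, routines, triggers and events from the replica, then upload.
mysqldump --host=REPLICA_IP --user=dumper -p \
  --single-transaction --routines --triggers --events \
  --set-gtid-purged=OFF mydb | gzip > clone.sql.gz
gsutil cp clone.sql.gz gs://my-bucket/clone.sql.gz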
scr4bble
(111 rep)
Jun 2, 2025, 10:02 PM
1
votes
2
answers
45
views
Unexpected occasional DEADLOCK when re-recreating database from split MySQL dump
To replace the content of a test database from a production database mysqldump output, the following command was used:
cat mysqldump-db-thisdate.sql | mysql -p ... mydb
There has never been any issue with this command.
However the DB grew a lot and this command takes several minutes.
In order to reduce this time, a Perl script was written that
- takes the mysqldump output as input
- creates a single file having all DROP TABLE ... CREATE TABLE for each table
- runs this drop-creation file in a single thread, before doing the table feeding below
- creates as many files (see below) as there are tables (about 100 tables)
- makes a fork() for each table file that is injected into the DB (all tables are dropped and created + fed in "parallel", table1..100)
The DROP-CREATION file is something like
DROP TABLE IF EXISTS mytable1;
CREATE TABLE mytable1 (
  someid1 int NOT NULL,
  ...
  PRIMARY KEY (someid1)
) ENGINE=InnoDB AUTO_INCREMENT=111 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci;
DROP TABLE IF EXISTS mytable2;
CREATE TABLE mytable2 (
  someid2 int NOT NULL,
  ...
  PRIMARY KEY (someid2)
) ENGINE=InnoDB AUTO_INCREMENT=222 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci;
...
Each table file mytableI.sql is like this, for instance for mytable1.sql:
/*!40103 SET TIME_ZONE='+00:00' */;
SET foreign_key_checks = 0;
SET unique_checks = 0;
SET autocommit = 0;
START TRANSACTION;
LOCK TABLES mytable1 WRITE;
INSERT INTO mytable1 VALUES (...),(...),...;
UNLOCK TABLES;
COMMIT;
It's like doing, in parallel (pseudo code)
for each table 1 to 100 do
cat mytableI.sql | mysql -p ... mydb /* I is 1 ... 100 */
end for
This method works very well, and saves from 50% to 75% of the time compared to the usual simple cat whole-dump | mysql method.
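For reference, a shell equivalent of the pseudo code above (a sketch; the credentials file and the 8-way parallelism are assumptions):
# Feed all per-table files with up to 8 mysql clients running in parallel.
ls mytable*.sql | xargs -P 8 -I{} sh -c 'mysql --defaults-extra-file=creds.cnf mydb < "{}"'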
However, from time to time (maybe 1 / 10), doing this parallel method, mysql throws an error:
> Deadlock found when trying to get lock; try restarting transaction
It happens rarely, so just restarting the command is not a big deal.
But why? Each table is processed at once, foreign keys are not checked... Doesn't MySQL, thanks to "LOCK TABLES" (and other mechanisms) protect itself against deadlocks in this case?
*Addendum*: The mydb test database is not being accessed otherwise.
***edit testing other methods***
Trying to perform the DROP / CREATE operations in parallel, (each DROP / CREATE in the same thread, for each table), not even filling the tables with data, plenty of Deadlocks occur...
Could it be that MySQL does not handle DROP/CREATE operations performed simultaneously very well? (Should it be done by a single DB admin?)
Note:
"*simultaneously*" and "*in parallel*" meaning each thread has its own MySQL connection.
Déjà vu
(555 rep)
May 29, 2025, 07:29 AM
• Last activity: May 31, 2025, 12:15 PM
0
votes
1
answers
403
views
Powershell script while executing mysqldump script
I'm getting error 1045: Access denied for user 'mysqlsuperuser'@'test.internal.cloudapp.net' (using password: YES) when trying to connect.
Here is the mysqldump command on my ps1 file.
$command = [string]::format("`"{0}`" -u {1} -p{2} -h {3} --default-character-set=utf8mb4 --quick --master-data=2 --single-transaction --routines --databases test1 --add-drop-database --result-file=`"{5}`" ",
$mysqlDumpLocation,
$databaseUsername,
$databasePassword,
$databaseIp,
$database.DatabaseName,
$saveFilePath);
But when I remove the {2} (which is the DB password) and then execute the ps1 file, it prompts me for the password and it goes through.
JRA
(137 rep)
Jan 27, 2021, 10:57 AM
• Last activity: May 31, 2025, 10:03 AM
0
votes
1
answers
248
views
All recent changes in MySQL have been automatically removed. Recently INSERTed or UPDATEd data is not available
I have been using MySQL for the last two months. I'm regularly INSERTing new records and UPDATEing old ones. But when I opened phpMyAdmin today, all of the changes which I've made in the last 10 days have vanished. INSERTed data for the last 10 days is unavailable, and the UPDATEs I have made to other records have also reverted to previous versions.
The AUTO_INCREMENT field is still incrementing to the next number like nothing is DELETEd.
I need my recently INSERTed data and my UPDATEs restored.
More importantly perhaps, **why** is this happening?
This problem happened between the 5th and 7th of January. This is the error file from those days:
Error Log
Rakibul Islam
(101 rep)
Jan 8, 2021, 03:00 AM
• Last activity: May 22, 2025, 07:05 AM
0
votes
1
answers
260
views
How to backup a replica Using mysqldump?
From MySQL Documents, and This answer, they said:
> When using mysqldump, you should stop replication on the replica before starting the dump process to ensure that the dump contains a consistent set of data:
Since my DBs are big, I thought to use --single-transaction --quick to take a snapshot of my DBs and continue to receive data from the master.
My question is, what will happen if I do not stop the replica and it receives new data during backup time?
**Note**: I know --single-transaction takes a snapshot, but I want to know if my approach has any disadvantages.
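For comparison, the documented approach only needs the applier paused while the dump runs (a sketch; credentials and the database list are placeholders), so the IO thread keeps pulling new events from the master the whole time:
# Pause only the SQL (applier) thread, take the dump, then resume applying.
mysql -e "STOP SLAVE SQL_THREAD;"
mysqldump --single-transaction --quick --routines --all-databases | gzip > replica_backup.sql.gz
mysql -e "START SLAVE SQL_THREAD;"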
N_Z
(248 rep)
Feb 20, 2023, 09:56 AM
• Last activity: May 20, 2025, 01:01 PM
1
votes
2
answers
252
views
Grabbing SQL Dump of Master DB Without Downtime
I'm curious whether downtime will be necessary to grab a SQL dump of my master database.
Right now, I'm in the process of rebuilding my one slave. There is actually only one database from master that is being replicated onto slave. All tables in that database are InnoDB. This is the command I want to run:
mysqldump --master-data --single-transaction --hex-blob dbname | gzip > dbname.sql.gz
I'm running MySQL 5.1 and here is a redacted version of my my.cnf file:
[mysqld]
default-storage-engine=InnoDB
character-set-server=UTF8
lower_case_table_names=1
transaction_isolation=READ-COMMITTED
wait_timeout=86400
interactive_timeout=3600
delayed_insert_timeout=10000
connect_timeout=100000
max_connections=750
max_connect_errors=1000
back_log=50
max_allowed_packet=1G
max_heap_table_size=64M
tmp_table_size=64M
bulk_insert_buffer_size=128M
innodb_buffer_pool_size=10000M
innodb_data_file_path=ibdata1:256M:autoextend
innodb_file_per_table=1
innodb_additional_mem_pool_size=32M
innodb_log_file_size=1G
innodb_log_buffer_size=8M
innodb_flush_method=O_DIRECT
innodb_lock_wait_timeout=240
innodb_flush_log_at_trx_commit=2
innodb_open_files=8192
innodb_support_xa=ON
thread_cache_size=500
expire_logs_days=2
server-id=1
log_bin=1
binlog_format=MIXED
sync_binlog=0
[mysqldump]
max_allowed_packet=128M
Am I good without downtime or not? I'm concerned about a possible read lock being placed on tables.
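If you want to watch for lock pressure while that command runs (a sketch; credentials are placeholders): with --single-transaction plus --master-data, mysqldump holds FLUSH TABLES WITH READ LOCK only briefly to read the binlog coordinates, and any session stuck behind it shows up in the processlist:
# Refresh every 5 seconds; look for states such as "Waiting for global read lock".
watch -n 5 "mysql -u root -p'***' -e 'SHOW FULL PROCESSLIST' | grep -i 'waiting for'"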
Jordan Parra
(19 rep)
Jul 14, 2016, 01:12 AM
• Last activity: May 20, 2025, 12:06 AM