
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
528 views
Insert into table select - Replication lag - Percona Server 5.6
I have two MySQL instances (Percona Server 5.6.31) in a Master-Slave replication setup, with the following configuration: 1. ROW-based replication. 2. Transaction isolation set to READ-COMMITTED. Today there was an insert running on my Master, of the form INSERT INTO table1 SELECT * FROM table2. Table 2 has 200 million rows. Although only about 5000 records were inserted, the operation lasted 30 minutes, and I observed replication lag during the insert. I have LOAD DATA INFILE disabled due to security concerns, so I can't use that route either. I went through an article from Percona which says this is resolved in versions above 5.1 when ROW-based replication is used. 1. How can I keep my slave in sync with the Master under such conditions? 2. Why does the slave lag here?
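With row-based replication the whole INSERT ... SELECT reaches the binlog only when it commits, so the slave sits idle for the full 30 minutes and then has one large transaction to apply, which is exactly what shows up as lag. The usual workaround is to batch the copy into many small transactions. A minimal sketch, assuming table2 has a numeric auto-increment primary key `id` (host, credentials and schema names are placeholders, not from the question):

```
#!/bin/bash
# Copy table2 into table1 in 10k-row chunks so each chunk is its own
# small binlog transaction. Assumes a numeric PK `id` on table2.
MYSQL="mysql -h master-host -u app -psecret -N -D mydb"

MAX_ID=$($MYSQL -e "SELECT COALESCE(MAX(id), 0) FROM table2")
STEP=10000

for ((start = 0; start < MAX_ID; start += STEP)); do
  $MYSQL -e "INSERT INTO table1
             SELECT * FROM table2
             WHERE id > $start AND id <= $start + $STEP"
done
```

Each batch commits (and replicates) on its own, so Seconds_Behind_Master stays bounded instead of jumping by the runtime of one monster statement.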
tesla747 (1910 rep)
Dec 28, 2016, 04:08 PM • Last activity: Aug 6, 2025, 12:02 AM
0 votes
1 answer
1355 views
InnoDB Buffer causing a lot of deadlocks and connection/client timeouts
I have a PXC cluster (5.7) with three nodes running on CentOS 7 VMs, each with the following system info: mysql_db# free total used free shared buff/cache available Mem: 8009248 2591576 358504 146040 5059168 4875716 Swap: 5242876 3592 5239284 mysql_db# free -g total used free shared buff/cache available Mem: 7 2 0 0 4 4 Swap: 4 0 4 mysql_db# nproc 8 mysql_db# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/cl-root 44G 20G 25G 46% / devtmpfs 3.9G 0 3.9G 0% /dev tmpfs 3.9G 0 3.9G 0% /dev/shm tmpfs 3.9G 146M 3.7G 4% /run tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup /dev/sda1 1014M 272M 743M 27% /boot tmpfs 783M 0 783M 0% /run/user/1318214225 tmpfs 783M 0 783M 0% /run/user/0 mysql_db# top top - 07:39:38 up 50 days, 4:54, 2 users, load average: 0.46, 0.61, 0.61 Tasks: 227 total, 1 running, 226 sleeping, 0 stopped, 0 zombie %Cpu(s): 20.3 us, 4.0 sy, 0.0 ni, 75.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem : 8009248 total, 357176 free, 2592348 used, 5059724 buff/cache KiB Swap: 5242876 total, 5239284 free, 3592 used. 4874700 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1574 mysql 20 0 9241472 2.3g 238932 S 104.8 30.1 6343:02 mysqld 3988 root 20 0 172268 2348 1580 R 19.0 0.0 0:00.12 top 13164 root 20 0 177264 94680 4676 S 19.0 1.2 2693:42 mysqld_exporter 1 root 20 0 191404 4420 2612 S 0.0 0.1 187:50.00 systemd mysql_db# iostat Linux 3.10.0-862.14.4.el7.x86_64 (dev_mysql_02.pd.local) 11/20/2018 _x86_64_ (8 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 2.35 0.00 0.36 0.05 0.00 97.24 Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn sda 3.83 153.19 153.17 664744979 664666279 dm-0 3.74 139.26 153.17 604320443 664647691 dm-1 0.00 0.00 0.00 2236 2460 mysql_db#lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 8 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 21 Model: 1 Model name: AMD Opteron(TM) Processor 6276 Stepping: 2 CPU MHz: 2300.028 BogoMIPS: 4600.05 Hypervisor vendor: VMware Virtualization type: full L1d cache: 16K L1i cache: 64K L2 cache: 2048K L3 cache: 12288K NUMA node0 CPU(s): 0-7 Below is some important InnoDB variables in the my.cnf config file: ... 
back_log = 65535 binlog_format = ROW character_set_server = utf8 collation_server = utf8_general_ci default_storage_engine = InnoDB enforce-gtid-consistency = 1 expand_fast_index_creation = 1 gtid_mode = ON innodb_autoinc_lock_mode = 2 innodb_buffer_pool_instances = 3 innodb_buffer_pool_size = 3G innodb_data_file_path = ibdata1:64M;ibdata2:64M:autoextend innodb_file_format = Barracuda innodb_file_per_table innodb_flush_log_at_trx_commit = 2 innodb_flush_method = O_DIRECT innodb_io_capacity = 1600 innodb_large_prefix innodb_log_file_size = 256M innodb_print_all_deadlocks = 1 innodb_read_io_threads = 64 innodb_stats_on_metadata = FALSE innodb_write_io_threads = 64 long_query_time = 1 log_bin_trust_function_creators = 1 master_info_repository = TABLE max_allowed_packet = 64M max_connect_errors = 4294967295 max_connections = 2600 max_user_connections = 2500 min_examined_row_limit = 1000 relay_log_info_repository = TABLE relay-log-recovery = TRUE skip-name-resolve slave_parallel_workers = 8 slow_query_log = 1 #slow_query_log_timestamp_always = 1 #thread_cache = 1024 tmpdir = /srv/tmp transaction_isolation = READ-COMMITTED updatable_views_with_limit = 0 user = mysql wait_timeout = 60 userstat table_open_cache = 4096 innodb_open_files = 10240 open_files_limit = 10240 connect_timeout=60 thread_cache_size = 4096 sql_mode = NO_ENGINE_SUBSTITUTION query_cache_size = 0 slave_pending_jobs_size_max=32M range_optimizer_max_mem_size=8M log_timestamps=SYSTEM server-id= 172162 #use IP userstat=1 ... As seen above, i have set innodb_buffer_pool_size to 3GB, but not sure if this is a good value given the system info provided above, especially that Total RAM is 7.6GB, with about 5GB allocated to SWAP. We have noticed in the mysql error logs a lot of timeouts and deadlocks frequently happening, and running SHOW ENGINE INNODB STATUS shows this: Status: ===================================== 2018-11-20 07:46:14 0x7f6acc0f9700 INNODB MONITOR OUTPUT ===================================== Per second averages calculated from the last 8 seconds ----------------- BACKGROUND THREAD ----------------- srv_master_thread loops: 201141 srv_active, 0 srv_shutdown, 2836117 srv_idle srv_master_thread log flush and writes: 3037214 ---------- SEMAPHORES ---------- OS WAIT ARRAY INFO: reservation count 32658 OS WAIT ARRAY INFO: signal count 33393 RW-shared spins 0, rounds 156414, OS waits 27395 RW-excl spins 0, rounds 222081, OS waits 1171 RW-sx spins 11313, rounds 167795, OS waits 1583 Spin rounds per wait: 156414.00 RW-shared, 222081.00 RW-excl, 14.83 RW-sx ------------------------ LATEST FOREIGN KEY ERROR ------------------------ 2018-11-20 07:06:51 0x7f6aad2eb700 Transaction: TRANSACTION 25842563, ACTIVE 1 sec updating or deleting mysql tables in use 1, locked 1 12 lock struct(s), heap size 1136, 6 row lock(s), undo log entries 3 MySQL thread id 577793, OS thread handle 140096148780800, query id 3550710 10.168.103.11 slashire_dev update INSERT INTO member_bank_acct (ba_member_id,ba_create_user,ba_update_user,ba_country,ba_name,ba_currency,ba_bank_name,ba_bank_acct_num,ba_swift_code,ba_city,ba_branch,ba_iban,ba_beneficiary_bank) VALUES ('00000000000000000252','SYSTEM','SYSTEM','','','','','','','','','','') ON DUPLICATE KEY UPDATE ba_country = '',ba_name = '',ba_currency = '',ba_bank_name = '',ba_bank_acct_num = '',ba_swift_code = '',ba_city = '',ba_branch = '',ba_iban = '',ba_beneficiary_bank = '' Foreign key constraint fails for table slashire_dev.member_bank_acct: , CONSTRAINT member_bank_acct_fk02 FOREIGN KEY (ba_country) 
REFERENCES country (c_code) Trying to add in child table, in index member_bank_acct_fk02 tuple: DATA TUPLE: 2 fields; 0: len 0; hex ; asc ;; 1: len 20; hex 3030303030303030303030303030303030323532; asc 00000000000000000252;; But in parent table slashire_dev.country, in index PRIMARY, the closest match we can find is record: PHYSICAL RECORD: n_fields 9; compact format; info bits 0 0: len 2; hex 3030; asc 00;; 1: len 6; hex 00000000566c; asc Vl;; 2: len 7; hex b0000001eb0110; asc ;; 3: len 3; hex 416c6c; asc All;; 4: len 2; hex 8000; asc ;; 5: len 5; hex 999f0c0000; asc ;; 6: len 6; hex 53595354454d; asc SYSTEM;; 7: len 5; hex 999f0c0000; asc ;; 8: len 6; hex 53595354454d; asc SYSTEM;; ------------ TRANSACTIONS ------------ Trx id counter 25989367 Purge done for trx's n:o < 25989079 undo n:o < 0 state: running but idle History list length 27 LIST OF TRANSACTIONS FOR EACH SESSION: ---TRANSACTION 421576625901616, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625897008, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625900464, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625899312, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625895856, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625898160, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625890096, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625893552, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625888944, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625867056, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625894704, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625892400, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625891248, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625886640, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625887792, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625885488, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625884336, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625883184, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625882032, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625880880, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625879728, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625878576, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625877424, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625876272, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625875120, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625873968, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625872816, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625871664, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625870512, not started 0 lock struct(s), heap size 1136, 0 row 
lock(s) ---TRANSACTION 421576625869360, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 421576625868208, not started 0 lock struct(s), heap size 1136, 0 row lock(s) -------- FILE I/O -------- I/O thread 0 state: waiting for completed aio requests (insert buffer thread) I/O thread 1 state: waiting for completed aio requests (log thread) I/O thread 2 state: waiting for completed aio requests (read thread) I/O thread 3 state: waiting for completed aio requests (read thread) I/O thread 4 state: waiting for completed aio requests (read thread) I/O thread 5 state: waiting for completed aio requests (read thread) I/O thread 6 state: waiting for completed aio requests (read thread) I/O thread 7 state: waiting for completed aio requests (read thread) I/O thread 8 state: waiting for completed aio requests (read thread) I/O thread 9 state: waiting for completed aio requests (read thread) I/O thread 10 state: waiting for completed aio requests (read thread) I/O thread 11 state: waiting for completed aio requests (read thread) I/O thread 12 state: waiting for completed aio requests (read thread) I/O thread 13 state: waiting for completed aio requests (read thread) I/O thread 14 state: waiting for completed aio requests (read thread) I/O thread 15 state: waiting for completed aio requests (read thread) I/O thread 16 state: waiting for completed aio requests (read thread) I/O thread 17 state: waiting for completed aio requests (read thread) I/O thread 18 state: waiting for completed aio requests (read thread) I/O thread 19 state: waiting for completed aio requests (read thread) I/O thread 20 state: waiting for completed aio requests (read thread) I/O thread 21 state: waiting for completed aio requests (read thread) I/O thread 22 state: waiting for completed aio requests (read thread) I/O thread 23 state: waiting for completed aio requests (read thread) I/O thread 24 state: waiting for completed aio requests (read thread) I/O thread 25 state: waiting for completed aio requests (read thread) I/O thread 26 state: waiting for completed aio requests (read thread) I/O thread 27 state: waiting for completed aio requests (read thread) I/O thread 28 state: waiting for completed aio requests (read thread) I/O thread 29 state: waiting for completed aio requests (read thread) I/O thread 30 state: waiting for completed aio requests (read thread) I/O thread 31 state: waiting for completed aio requests (read thread) I/O thread 32 state: waiting for completed aio requests (read thread) I/O thread 33 state: waiting for completed aio requests (read thread) I/O thread 34 state: waiting for completed aio requests (read thread) I/O thread 35 state: waiting for completed aio requests (read thread) I/O thread 36 state: waiting for completed aio requests (read thread) I/O thread 37 state: waiting for completed aio requests (read thread) I/O thread 38 state: waiting for completed aio requests (read thread) I/O thread 39 state: waiting for completed aio requests (read thread) I/O thread 40 state: waiting for completed aio requests (read thread) I/O thread 41 state: waiting for completed aio requests (read thread) I/O thread 42 state: waiting for completed aio requests (read thread) I/O thread 43 state: waiting for completed aio requests (read thread) I/O thread 44 state: waiting for completed aio requests (read thread) I/O thread 45 state: waiting for completed aio requests (read thread) I/O thread 46 state: waiting for completed aio requests (read thread) I/O thread 47 state: waiting for completed aio 
requests (read thread) I/O thread 48 state: waiting for completed aio requests (read thread) I/O thread 49 state: waiting for completed aio requests (read thread) I/O thread 50 state: waiting for completed aio requests (read thread) I/O thread 51 state: waiting for completed aio requests (read thread) I/O thread 52 state: waiting for completed aio requests (read thread) I/O thread 53 state: waiting for completed aio requests (read thread) I/O thread 54 state: waiting for completed aio requests (read thread) I/O thread 55 state: waiting for completed aio requests (read thread) I/O thread 56 state: waiting for completed aio requests (read thread) I/O thread 57 state: waiting for completed aio requests (read thread) I/O thread 58 state: waiting for completed aio requests (read thread) I/O thread 59 state: waiting for completed aio requests (read thread) I/O thread 60 state: waiting for completed aio requests (read thread) I/O thread 61 state: waiting for completed aio requests (read thread) I/O thread 62 state: waiting for completed aio requests (read thread) I/O thread 63 state: waiting for completed aio requests (read thread) I/O thread 64 state: waiting for completed aio requests (read thread) I/O thread 65 state: waiting for completed aio requests (read thread) I/O thread 66 state: waiting for completed aio requests (write thread) I/O thread 67 state: waiting for completed aio requests (write thread) I/O thread 68 state: waiting for completed aio requests (write thread) I/O thread 69 state: waiting for completed aio requests (write thread) I/O thread 70 state: waiting for completed aio requests (write thread) I/O thread 71 state: waiting for completed aio requests (write thread) I/O thread 72 state: waiting for completed aio requests (write thread) I/O thread 73 state: waiting for completed aio requests (write thread) I/O thread 74 state: waiting for completed aio requests (write thread) I/O thread 75 state: waiting for completed aio requests (write thread) I/O thread 76 state: waiting for completed aio requests (write thread) I/O thread 77 state: waiting for completed aio requests (write thread) I/O thread 78 state: waiting for completed aio requests (write thread) I/O thread 79 state: waiting for completed aio requests (write thread) I/O thread 80 state: waiting for completed aio requests (write thread) I/O thread 81 state: waiting for completed aio requests (write thread) I/O thread 82 state: waiting for completed aio requests (write thread) I/O thread 83 state: waiting for completed aio requests (write thread) I/O thread 84 state: waiting for completed aio requests (write thread) I/O thread 85 state: waiting for completed aio requests (write thread) I/O thread 86 state: waiting for completed aio requests (write thread) I/O thread 87 state: waiting for completed aio requests (write thread) I/O thread 88 state: waiting for completed aio requests (write thread) I/O thread 89 state: waiting for completed aio requests (write thread) I/O thread 90 state: waiting for completed aio requests (write thread) I/O thread 91 state: waiting for completed aio requests (write thread) I/O thread 92 state: waiting for completed aio requests (write thread) I/O thread 93 state: waiting for completed aio requests (write thread) I/O thread 94 state: waiting for completed aio requests (write thread) I/O thread 95 state: waiting for completed aio requests (write thread) I/O thread 96 state: waiting for completed aio requests (write thread) I/O thread 97 state: waiting for completed aio requests (write thread) 
I/O thread 98 state: waiting for completed aio requests (write thread) I/O thread 99 state: waiting for completed aio requests (write thread) I/O thread 100 state: waiting for completed aio requests (write thread) I/O thread 101 state: waiting for completed aio requests (write thread) I/O thread 102 state: waiting for completed aio requests (write thread) I/O thread 103 state: waiting for completed aio requests (write thread) I/O thread 104 state: waiting for completed aio requests (write thread) I/O thread 105 state: waiting for completed aio requests (write thread) I/O thread 106 state: waiting for completed aio requests (write thread) I/O thread 107 state: waiting for completed aio requests (write thread) I/O thread 108 state: waiting for completed aio requests (write thread) I/O thread 109 state: waiting for completed aio requests (write thread) I/O thread 110 state: waiting for completed aio requests (write thread) I/O thread 111 state: waiting for completed aio requests (write thread) I/O thread 112 state: waiting for completed aio requests (write thread) I/O thread 113 state: waiting for completed aio requests (write thread) I/O thread 114 state: waiting for completed aio requests (write thread) I/O thread 115 state: waiting for completed aio requests (write thread) I/O thread 116 state: waiting for completed aio requests (write thread) I/O thread 117 state: waiting for completed aio requests (write thread) I/O thread 118 state: waiting for completed aio requests (write thread) I/O thread 119 state: waiting for completed aio requests (write thread) I/O thread 120 state: waiting for completed aio requests (write thread) I/O thread 121 state: waiting for completed aio requests (write thread) I/O thread 122 state: waiting for completed aio requests (write thread) I/O thread 123 state: waiting for completed aio requests (write thread) I/O thread 124 state: waiting for completed aio requests (write thread) I/O thread 125 state: waiting for completed aio requests (write thread) I/O thread 126 state: waiting for completed aio requests (write thread) I/O thread 127 state: waiting for completed aio requests (write thread) I/O thread 128 state: waiting for completed aio requests (write thread) I/O thread 129 state: waiting for completed aio requests (write thread) Pending normal aio reads: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] , aio writes: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] , ibuf aio reads:, log i/o's:, sync i/o's: Pending flushes (fsync) log: 0; buffer pool: 0 8353 OS file reads, 2256885 OS file writes, 649203 OS fsyncs 0.00 reads/s, 0 avg bytes/read, 6.37 writes/s, 1.75 fsyncs/s ------------------------------------- INSERT BUFFER AND ADAPTIVE HASH INDEX ------------------------------------- Ibuf: size 1, free list len 0, seg size 2, 108 merges merged operations: insert 1, delete mark 1, delete 0 discarded operations: insert 0, delete mark 0, delete 0 Hash table size 796871, node heap has 46 buffer(s) Hash table size 796871, node heap has 8 buffer(s) Hash table size 796871, node heap has 7 buffer(s) Hash table size 796871, node heap has 14 buffer(s) Hash table size 796871, node heap has 23 buffer(s) Hash table size 796871, node heap has 16 buffer(s) Hash table size 796871, node heap 
has 15 buffer(s) Hash table size 796871, node heap has 94 buffer(s) 234.22 hash searches/s, 8.87 non-hash searches/s --- LOG --- Log sequence number 1427669147 Log flushed up to 1427669147 Pages flushed up to 1427669147 Last checkpoint at 1427669138 Max checkpoint age 434154333 Checkpoint age target 420587011 Modified age 0 Checkpoint age 9 0 pending log flushes, 0 pending chkp writes 452361 log i/o's done, 1.00 log i/o's/second ---------------------- BUFFER POOL AND MEMORY ---------------------- Total large memory allocated 3353346048 Dictionary memory allocated 11634313 Internal hash tables (constant factor + variable factor) Adaptive hash index 54687040 (50999744 + 3687296) Page hash 1107112 (buffer pool 0 only) Dictionary cache 24384249 (12749936 + 11634313) File system 1676640 (812272 + 864368) Lock system 8004712 (7969496 + 35216) Recovery system 0 (0 + 0) Buffer pool size 196584 Buffer pool size, bytes 3220832256 Free buffers 146566 Database pages 49795 Old database pages 18321 Modified db pages 0 Pending reads 0 Pending writes: LRU 0, flush list 0, single page 0 Pages made young 10000, not young 35358 0.00 youngs/s, 0.00 non-youngs/s Pages read 7697, created 42280, written 1645380 0.00 reads/s, 0.00 creates/s, 5.00 writes/s Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000 Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s LRU len: 49795, unzip_LRU len: 0 I/O sum:cur, unzip sum:cur ---------------------- INDIVIDUAL BUFFER POOL INFO ---------------------- ---BUFFER POOL 0 Buffer pool size 65528 Buffer pool size, bytes 1073610752 Free buffers 48863 Database pages 16588 Old database pages 6103 Modified db pages 0 Pending reads 0 Pending writes: LRU 0, flush list 0, single page 0 Pages made young 3655, not young 13239 0.00 youngs/s, 0.00 non-youngs/s Pages read 2611, created 14049, written 334207 0.00 reads/s, 0.00 creates/s, 0.50 writes/s Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000 Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s LRU len: 16588, unzip_LRU len: 0 I/O sum:cur, unzip sum:cur ---BUFFER POOL 1 Buffer pool size 65528 Buffer pool size, bytes 1073610752 Free buffers 48627 Database pages 16823 Old database pages 6190 Modified db pages 0 Pending reads 0 Pending writes: LRU 0, flush list 0, single page 0 Pages made young 3253, not young 12647 0.00 youngs/s, 0.00 non-youngs/s Pages read 2476, created 14408, written 1147163 0.00 reads/s, 0.00 creates/s, 3.87 writes/s Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000 Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s LRU len: 16823, unzip_LRU len: 0 I/O sum:cur, unzip sum:cur ---BUFFER POOL 2 Buffer pool size 65528 Buffer pool size, bytes 1073610752 Free buffers 49076 Database pages 16384 Old database pages 6028 Modified db pages 0 Pending reads 0 Pending writes: LRU 0, flush list 0, single page 0 Pages made young 3092, not young 9472 0.00 youngs/s, 0.00 non-youngs/s Pages read 2610, created 13823, written 164010 0.00 reads/s, 0.00 creates/s, 0.62 writes/s Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000 Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s LRU len: 16384, unzip_LRU len: 0 I/O sum:cur, unzip sum:cur -------------- ROW OPERATIONS -------------- 0 queries inside InnoDB, 0 queries in queue 0 read views open inside InnoDB 0 RW transactions active inside InnoDB Process ID=1574, Main thread 
ID=140097892615936, state: sleeping Number of rows inserted 25413524, updated 63914, deleted 6172, read 1035018804 65.87 inserts/s, 0.12 updates/s, 0.00 deletes/s, 97.36 reads/s ---------------------------- END OF INNODB MONITOR OUTPUT Are there any telltale signs in the above info that explain these timeouts and deadlocks? How can I find out the percentage of utilized InnoDB buffer pool against total RAM? # Ping Results 64 bytes from 10.1.5.100: icmp_seq=1 ttl=64 time=0.958 ms 64 bytes from 10.1.5.100: icmp_seq=2 ttl=64 time=1.09 ms 64 bytes from 10.1.5.100: icmp_seq=4 ttl=64 time=1.22 ms 64 bytes from 10.1.5.100: icmp_seq=5 ttl=64 time=2.24 ms DEADLOCK INFO *** Priority TRANSACTION: TRANSACTION 165015546, ACTIVE 0 sec starting index read mysql tables in use 1, locked 1 MySQL thread id 14, OS thread handle 140101363758848, query id 11358891 System lock *** Victim TRANSACTION: TRANSACTION 165015100, ACTIVE 1 sec , undo log entries 1 MySQL thread id 1027074, OS thread handle 140096133596928, query id 11358888 10.168.103.11 membership wsrep: initiating replication for write set (-1) COMMIT *** WAITING FOR THIS LOCK TO BE GRANTED: RECORD LOCKS space id 2481 page no 17 n bits 80 index PRIMARY of table membership.member_token trx id 165015100 lock_mode X locks rec but not gap 2018-12-06T03:12:23.732058-00:00 14 [Note] WSREP: --------- CONFLICT DETECTED -------- 2018-12-06T03:12:23.732119-00:00 14 [Note] WSREP: cluster conflict due to high priority abort for threads: 2018-12-06T03:12:23.732331-00:00 14 [Note] WSREP: Winning thread: THD: 14, mode: applier, state: executing, conflict: no conflict, seqno: 152121 SQL: (null) 2018-12-06T03:12:23.732392-00:00 14 [Note] WSREP: Victim thread: THD: 1027074, mode: local, state: committing, conflict: no conflict, seqno: -1 SQL: COMMIT
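For the buffer-pool question, a rough sketch of the arithmetic using standard counters (credentials are placeholders): Innodb_buffer_pool_pages_data times the page size gives the bytes actually holding data, which you can compare against the configured pool and against system RAM.

```
#!/bin/bash
# Rough buffer-pool utilization math; user/password are placeholders.
MYSQL="mysql -u root -psecret -N -e"

DATA_PAGES=$($MYSQL "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_data'" | awk '{print $2}')
PAGE_SIZE=$($MYSQL "SELECT @@innodb_page_size")
POOL_SIZE=$($MYSQL "SELECT @@innodb_buffer_pool_size")
RAM_BYTES=$(awk '/MemTotal/ {print $2 * 1024}' /proc/meminfo)

echo "pool data:   $(( DATA_PAGES * PAGE_SIZE / 1024 / 1024 )) MB"
echo "pool size:   $(( POOL_SIZE / 1024 / 1024 )) MB"
echo "pool vs RAM: $(( 100 * POOL_SIZE / RAM_BYTES ))%"
echo "pool fill:   $(( 100 * DATA_PAGES * PAGE_SIZE / POOL_SIZE ))%"
```

By that arithmetic, the STATUS output above (49795 data pages of 16 KB against a 3 GB pool, hit rate 1000/1000) suggests the pool is only about a quarter full, so the aborts look more like Galera certification conflicts (the "high priority abort" lines) than memory pressure.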
The Georgia (343 rep)
Nov 20, 2018, 07:49 AM • Last activity: Aug 4, 2025, 01:00 PM
5 votes
1 answer
1521 views
Full database backup using xtrabackup stream
I'm new to Percona XtraBackup. I've been trying to stream a full backup from my local machine ( *a test server of around 600GB* ) to a remote server. I just have some questions and need guidance, and I think this is the best place. I have this command, which I executed locally: innobackupex --user=user --password=password --stream=tar /which/directory/ | pigz | ssh user@10.11.12.13 "cat - > /mybackup/backup.tar.gz" My questions are: - **My log scan is not changing / increasing** >> log scanned up to (270477048535) >> log scanned up to (270477048535) >> log scanned up to (270477048535) >> log scanned up to (270477048535) >> log scanned up to (270477048535) I've read a comment before where someone said the log scan will not increase if no one is using the database. (Yes, no one is using the database.)  - **It's been running for a while.** I've used xtrabackup against a local test server of around 1.7TB and it finished in just a few hours. Is it slow because I'm streaming? What is the purpose of "/which/directory/" in my command? Is it going to store the file in /which/directory/ first and then transfer it to my remote server? Why do I have to specify a directory? - **No file is created on my local server in /which/directory/ or on my remote server in /mybackup/.** Am I doing something wrong? Is there a much easier way to perform this? My only goal is to back up my local database to a remote server; I'm streaming because I don't have enough disk space to store the backup locally. I'm using MariaDB 5.5 and Percona XtraBackup 2.2
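On the directory question: with --stream, the positional directory is only innobackupex's working directory (it briefly writes metadata there); the actual backup goes to stdout, so nothing should appear in /which/directory/ and the only output file is the one written on the remote side. A sketch of the same stream using the xbstream format instead of tar (host and paths are examples, not from the question):

```
# Stream the backup through xbstream; /tmp here is only a scratch
# directory, not where the backup lands.
innobackupex --user=user --password=password --stream=xbstream /tmp \
  | pigz \
  | ssh user@10.11.12.13 "cat - > /mybackup/backup.xbstream.gz"

# On the remote side, unpack with:
#   gunzip -c /mybackup/backup.xbstream.gz | xbstream -x -C /restore/dir
```

If you stay with --stream=tar, note that the resulting archive must be extracted with tar's -i flag (tar -ixf), since the stream contains zero blocks between files.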
Thantanenorn (51 rep)
Mar 27, 2018, 01:45 AM • Last activity: Jul 31, 2025, 03:05 PM
2 votes
1 answers
2284 views
Percona mysql xtradb cluster doesn't start properly and node restarts don't work
**tl;dr** When starting a fresh Percona cluster of 3 Kubernetes pods, the grastate.dat seq_no is set at -1 and doesn't change. On deleting one pod and watching it restart, expecting it to rejoin the cluster, it sets its initial position to 00000000-0000-0000-0000-000000000000:-1 and tries to connect to itself (its former IP), maybe because it had been the first pod in the cluster? It then times out on its erroneous connection to itself: 2017-03-26T08:38:05.374058Z 0 [Note] WSREP: (b7571ff8, 'tcp://0.0.0.0:4567') connection to peer 00000000 with addr tcp://10.52.0.26:4567 timed out, no messages seen in PT3S **The cluster doesn't get started properly and I'm unable to successfully restart pods in the cluster.** **Full** When I start the cluster from scratch, with blank data directories and a fresh etcd cluster, everything seems to come up. However, when I look at grastate.dat I find that the seq_no for each pod is -1: root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-0/grastate.dat # GALERA saved state version: 2.1 uuid: a91f70f2-11f8-11e7-8f3d-86c2e58790ac seqno: -1 safe_to_bootstrap: 0 root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-1/grastate.dat # GALERA saved state version: 2.1 uuid: a91f70f2-11f8-11e7-8f3d-86c2e58790ac seqno: -1 safe_to_bootstrap: 0 root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-2/grastate.dat # GALERA saved state version: 2.1 uuid: a91f70f2-11f8-11e7-8f3d-86c2e58790ac seqno: -1 safe_to_bootstrap: 0 At this point I can do mysql -h percona -u wordpress -p and connect, and wordpress works too. Scenario: I have 3 percona pods / # jonathan@ubuntu:~/Projects/k8wp$ kubectl get pods NAME READY STATUS RESTARTS AGE etcd-0 1/1 Running 1 12h etcd-1 1/1 Running 0 12h etcd-2 1/1 Running 3 12h etcd-3 1/1 Running 1 12h percona-0 1/1 Running 0 8m percona-1 1/1 Running 0 57m percona-2 1/1 Running 0 57m When I try to restart percona-0 it gets kicked out of the cluster on restarting; percona-0's gvwstate.dat file shows root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-0/gvwstate.dat my_uuid: b7571ff8-11f8-11e7-bd2d-8b50487e1523 #vwbeg view_id: 3 b7571ff8-11f8-11e7-bd2d-8b50487e1523 3 bootstrap: 0 member: b7571ff8-11f8-11e7-bd2d-8b50487e1523 0 member: bd05a643-11f8-11e7-9dab-1b4fc20eaf6a 0 member: c33d6a73-11f8-11e7-9e86-fe1cf3d3367a 0 #vwend The other 2 pods in the cluster show: root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-1/gvwstate.dat my_uuid: bd05a643-11f8-11e7-9dab-1b4fc20eaf6a #vwbeg view_id: 3 bd05a643-11f8-11e7-9dab-1b4fc20eaf6a 4 bootstrap: 0 member: bd05a643-11f8-11e7-9dab-1b4fc20eaf6a 0 member: c33d6a73-11f8-11e7-9e86-fe1cf3d3367a 0 #vwend root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-2/gvwstate.dat my_uuid: c33d6a73-11f8-11e7-9e86-fe1cf3d3367a #vwbeg view_id: 3 bd05a643-11f8-11e7-9dab-1b4fc20eaf6a 4 bootstrap: 0 member: bd05a643-11f8-11e7-9dab-1b4fc20eaf6a 0 member: c33d6a73-11f8-11e7-9e86-fe1cf3d3367a 0 #vwend Here are what I think are the relevant errors from percona-0's startup: 2017-03-26T08:37:58.370605Z 0 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1 2017-03-26T08:37:58.372537Z 0 [Note] WSREP: gcomm: connecting to group 'wordpress-001', peer '10.52.0.26:' 2017-03-26T08:38:01.373345Z 0 [Note] WSREP: (b7571ff8, 'tcp://0.0.0.0:4567') connection to peer 00000000 with addr tcp://10.52.0.26:4567 timed out, no messages seen in PT3S 2017-03-26T08:38:01.373682Z 0 [Warning] WSREP: no nodes coming from prim view, prim not possible 2017-03-26T08:38:01.373750Z 0 [Note] WSREP:
view(view_id(NON_PRIM,b7571ff8,5) memb { b7571ff8,0 } joined { } left { } partitioned { }) 2017-03-26T08:38:01.373838Z 0 [Note] WSREP: gcomm: connected 2017-03-26T08:38:01.373872Z 0 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636 2017-03-26T08:38:01.373987Z 0 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0) 2017-03-26T08:38:01.374012Z 0 [Note] WSREP: Opened channel 'wordpress-001' 2017-03-26T08:38:01.374108Z 0 [Note] WSREP: Waiting for SST to complete. 2017-03-26T08:38:01.374417Z 0 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1 2017-03-26T08:38:01.374469Z 0 [Note] WSREP: Flow-control interval: [16, 16] 2017-03-26T08:38:01.374491Z 0 [Note] WSREP: Received NON-PRIMARY. 2017-03-26T08:38:01.374560Z 1 [Note] WSREP: New cluster view: global state: :-1, view# -1: non-Primary, number of nodes: 1, my index: 0, protocol version -1 The ip it's trying to connect to 10.52.0.26 in 2017-03-26T08:37:58.372537Z 0 [Note] WSREP: gcomm: connecting to group 'wordpress-001', peer '10.52.0.26:' is actually that pods previous ip, here's the listing of keys in etcd I did before deleting percona-0 / # etcdctl ls --recursive /pxc-cluster /pxc-cluster/wordpress /pxc-cluster/queue /pxc-cluster/queue/wordpress /pxc-cluster/queue/wordpress-001 /pxc-cluster/wordpress-001 /pxc-cluster/wordpress-001/10.52.1.46 /pxc-cluster/wordpress-001/10.52.1.46/ipaddr /pxc-cluster/wordpress-001/10.52.1.46/hostname /pxc-cluster/wordpress-001/10.52.2.33 /pxc-cluster/wordpress-001/10.52.2.33/ipaddr /pxc-cluster/wordpress-001/10.52.2.33/hostname /pxc-cluster/wordpress-001/10.52.0.26 /pxc-cluster/wordpress-001/10.52.0.26/hostname /pxc-cluster/wordpress-001/10.52.0.26/ipaddr After kubectl delete pods/percona-0: / # etcdctl ls --recursive /pxc-cluster /pxc-cluster/queue /pxc-cluster/queue/wordpress /pxc-cluster/queue/wordpress-001 /pxc-cluster/wordpress-001 /pxc-cluster/wordpress-001/10.52.1.46 /pxc-cluster/wordpress-001/10.52.1.46/ipaddr /pxc-cluster/wordpress-001/10.52.1.46/hostname /pxc-cluster/wordpress-001/10.52.2.33 /pxc-cluster/wordpress-001/10.52.2.33/ipaddr /pxc-cluster/wordpress-001/10.52.2.33/hostname /pxc-cluster/wordpress Also during the restart percona-0 tried to register to etcd with: {"action":"create","node":{"key":"/pxc-cluster/queue/wordpress-001/00000000000000009886","value":"10.52.0.27","expiration":"2017-03-26T08:38:57.980325718Z","ttl":60,"modifiedIndex":9886,"createdIndex":9886}} {"action":"set","node":{"key":"/pxc-cluster/wordpress-001/10.52.0.27/ipaddr","value":"10.52.0.27","expiration":"2017-03-26T08:38:28.01814818Z","ttl":30,"modifiedIndex":9887,"createdIndex":9887}} {"action":"set","node":{"key":"/pxc-cluster/wordpress-001/10.52.0.27/hostname","value":"percona-0","expiration":"2017-03-26T08:38:28.037188157Z","ttl":30,"modifiedIndex":9888,"createdIndex":9888}} {"action":"update","node":{"key":"/pxc-cluster/wordpress-001/10.52.0.27","dir":true,"expiration":"2017-03-26T08:38:28.054726795Z","ttl":30,"modifiedIndex":9889,"createdIndex":9887},"prevNode":{"key":"/pxc-cluster/wordpress-001/10.52.0.27","dir":true,"modifiedIndex":9887,"createdIndex":9887}} which doesn't work. 
From the second member of the cluster percona-1: 2017-03-26T08:37:44.069583Z 0 [Note] WSREP: (bd05a643, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.52.0.26:4567 2017-03-26T08:37:45.069756Z 0 [Note] WSREP: (bd05a643, 'tcp://0.0.0.0:4567') reconnecting to b7571ff8 (tcp://10.52.0.26:4567), attempt 0 2017-03-26T08:37:48.570332Z 0 [Note] WSREP: (bd05a643, 'tcp://0.0.0.0:4567') connection to peer 00000000 with addr tcp://10.52.0.26:4567 timed out, no messages seen in PT3S 2017-03-26T08:37:49.605089Z 0 [Note] WSREP: evs::proto(bd05a643, GATHER, view_id(REG,b7571ff8,3)) suspecting node: b7571ff8 2017-03-26T08:37:49.605276Z 0 [Note] WSREP: evs::proto(bd05a643, GATHER, view_id(REG,b7571ff8,3)) suspected node without join message, declaring inactive 2017-03-26T08:37:50.104676Z 0 [Note] WSREP: declaring c33d6a73 at tcp://10.52.2.33:4567 stable **New Info:** I restarted percona-0 again, and this time it somehow came up! After a few tries I realised the pod needs to restarted twice to come up i.e. after deleting it the first time, it comes up with the above errors, after deleting it the second time it comes up okay and syncs with the other members. Could this be because it was the first pod in the cluster? I've tested deleting the other pods but they all come back up okay. The issue only lies with percona-0. Also; Taking down all the pods at once, if my node was to crash, that's the situation where the pods don't come back up at all! I suspect it's because no state is saved to grastate.dat , i.e. seq_no remains -1 even though the global id may change, the pods exit with mysqld shutdown, and the following errors: jonathan@ubuntu:~/Projects/k8wp$ kubectl logs percona-2 | grep ERROR 2017-03-26T11:20:25.795085Z 0 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out) 2017-03-26T11:20:25.795276Z 0 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -110 (Connection timed out) 2017-03-26T11:20:25.795544Z 0 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1437: Failed to open channel 'wordpress-001' at 'gcomm://10.52.2.36': -110 (Connection timed out) 2017-03-26T11:20:25.795618Z 0 [ERROR] WSREP: gcs connect failed: Connection timed out 2017-03-26T11:20:25.795645Z 0 [ERROR] WSREP: wsrep::connect(gcomm://10.52.2.36) failed: 7 2017-03-26T11:20:25.795693Z 0 [ERROR] Aborting jonathan@ubuntu:~/Projects/k8wp$ kubectl logs percona-1 | grep ERROR 2017-03-26T11:20:27.093780Z 0 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out) 2017-03-26T11:20:27.093977Z 0 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -110 (Connection timed out) 2017-03-26T11:20:27.094145Z 0 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1437: Failed to open channel 'wordpress-001' at 'gcomm://10.52.1.49': -110 (Connection timed out) 2017-03-26T11:20:27.094200Z 0 [ERROR] WSREP: gcs connect failed: Connection timed out 2017-03-26T11:20:27.094227Z 0 [ERROR] WSREP: wsrep::connect(gcomm://10.52.1.49) failed: 7 2017-03-26T11:20:27.094247Z 0 [ERROR] Aborting jonathan@ubuntu:~/Projects/k8wp$ kubectl logs percona-0 | grep ERROR 2017-03-26T11:20:52.040214Z 0 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out) 2017-03-26T11:20:52.040279Z 0 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -110 (Connection timed out) 
2017-03-26T11:20:52.040385Z 0 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1437: Failed to open channel 'wordpress-001' at 'gcomm://10.52.2.36': -110 (Connection timed out) 2017-03-26T11:20:52.040437Z 0 [ERROR] WSREP: gcs connect failed: Connection timed out 2017-03-26T11:20:52.040471Z 0 [ERROR] WSREP: wsrep::connect(gcomm://10.52.2.36) failed: 7 2017-03-26T11:20:52.040508Z 0 [ERROR] Aborting The grastate.dat files after deleting all the pods: root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-0/grastate.dat # GALERA saved state version: 2.1 uuid: a91f70f2-11f8-11e7-8f3d-86c2e58790ac seqno: -1 safe_to_bootstrap: 0 root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-1/grastate.dat # GALERA saved state version: 2.1 uuid: a91f70f2-11f8-11e7-8f3d-86c2e58790ac seqno: -1 safe_to_bootstrap: 0 root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-2/grastate.dat # GALERA saved state version: 2.1 uuid: a91f70f2-11f8-11e7-8f3d-86c2e58790ac seqno: -1 safe_to_bootstrap: 0 No gvwstate.dat file is present.
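Two hedged notes that may help untangle this: a running node normally shows seqno: -1 in grastate.dat (the sequence number is only persisted on clean shutdown), so that by itself isn't the fault; and the connect-to-its-own-old-IP loop comes from the stale discovery entry in etcd. A sketch of recovering from the all-pods-down state and clearing the stale key (paths and IPs taken from the question):

```
# Pick the node you believe has the freshest data, mark it bootstrappable,
# and start it as a new cluster; the other pods then SST from it.
sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
mysqld --wsrep-new-cluster    # however your pod entrypoint starts mysqld

# Separately, remove the dead pod's stale discovery entry so a restarting
# pod doesn't try to join its own former IP:
etcdctl rm --recursive /pxc-cluster/wordpress-001/10.52.0.26
```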
Jonathan (121 rep)
Mar 26, 2017, 09:18 AM • Last activity: Jul 23, 2025, 03:01 AM
0 votes
1 answer
21 views
mysqlbinlog not applying updates for PITR
I'm running Percona MySQL 8.0.42-33 on RHEL 9.6. I have a problem when attempting point-in-time recovery: I can restore the database from the mysqldump backup, but applying the binlog doesn't update anything in the database. There are no errors, and verbose output shows the statements being executed, but there are no changes in the database. At first I thought this could be specific to the database I was attempting to restore, but running similar tests in our sandbox instance produced the same result. This is using a replica server that has replication stopped, and I reset the replica and master data before starting this process, if that makes any difference. Example steps:

1. create database test03
2. mysqldump --all-databases --log-error BACKUP_LOG --max-allowed-packet=1G --single-transaction --verbose --flush-logs --source-data=2 | gzip > BACKUP_DIR/BACKUP_FILE
3. `CREATE TABLE Persons ( PersonID int, LastName varchar(255), FirstName varchar(255), Address varchar(255), City varchar(255) );`
4. drop database test03
5. mysql < BACKUP_DIR/BACKUP_FILE
6. mysqlbinlog -vv log_bin.000002 --start-position=197 --stop-position=518 | mysql -v
7. use test03
8. show tables - Empty set (0.00 sec)

In this case the test03 database is restored from backup, but the Persons table is not added when processing the binlog. I've verified that I'm not read-only or anything like that, and I can see the transaction in the binlog:
#250717  9:32:22 server id 5  end_log_pos 518 CRC32 0x798e87b9  Query   thread_id=125856        exec_time=0     error_code=0    Xid = 112497
use test03/*!*/;
SET TIMESTAMP=1752759142/*!*/;
SET @@session.pseudo_thread_id=125856/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=1168113696/*!*/;
SET @@session.auto_increment_increment=2, @@session.auto_increment_offset=5/*!*/;
/*!\C utf8mb4 *//*!*/;
SET @@session.character_set_client=255,@@session.collation_connection=255,@@session.collation_server=255/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
/*!80011 SET @@session.default_collation_for_utf8mb4=255*//*!*/;
/*!80013 SET @@session.sql_require_primary_key=0*//*!*/;
CREATE TABLE Persons (
    PersonID int,
    LastName varchar(255),
    FirstName varchar(255),
    Address varchar(255),
    City varchar(255)
)
/*!*/;
along with the following in the verbose output from mysql: -------------- CREATE TABLE Persons ( PersonID int, LastName varchar(255), FirstName varchar(255), Address varchar(255), City varchar(255) ) -------------- If someone can point me in the right direction it would be greatly appreciated. I'm at a loss for what to check next, and searches have produced no results thus far.
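One common cause of exactly these symptoms, offered as a hedged guess: with GTIDs enabled, the restored dump sets gtid_purged/gtid_executed, and when the replayed binlog events carry GTIDs the server already considers executed, it silently skips them while mysql -v still echoes the statements. Worth checking:

```
# Is the CREATE TABLE's GTID already recorded as executed?
mysql -e "SELECT @@gtid_mode, @@global.gtid_executed\G"

# Replay the range with GTID metadata stripped so the statements run
# as plain SQL (file and positions from the question):
mysqlbinlog --skip-gtids -vv log_bin.000002 \
  --start-position=197 --stop-position=518 | mysql -v
```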
emaN_yalpsiD (1 rep)
Jul 17, 2025, 02:31 PM • Last activity: Jul 18, 2025, 04:23 PM
1 vote
1 answer
154 views
Updating gcache.size on a Three Node Cluster
Is it possible to update the gcache.size value on Percona 5.7 without restarting a node? Also, do I need to bring the entire cluster down to update the setting, or can I update one node at a time?
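A sketch of the usual procedure, under the assumption (true as far as I know) that gcache.size is read from wsrep_provider_options only at startup and cannot be changed at runtime; the cluster stays up, and each restarted node just needs to catch up via IST/SST before the next one goes down:

```
# Current value on a node:
mysql -e "SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'\G" \
  | tr ';' '\n' | grep gcache.size

# In my.cnf on that node, set e.g.:
#   wsrep_provider_options = "gcache.size=2G"

# Restart the node, then wait until it reports Synced before moving on:
systemctl restart mysql     # service name varies by install
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
```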
Ndeb (11 rep)
Feb 14, 2023, 06:36 PM • Last activity: Jul 11, 2025, 10:10 AM
0 votes
1 answer
170 views
Unusually low insert rate on MySQL Slaves
I have 4 MySQL nodes replicating like this: M1 - S1 | M2 - S2 Only the M1 master is writing, the hardware is similar (the slaves are a bit beefier), and they all run Percona 5.7. The trouble is that when M1 has a lot of inserts in a small time frame, the slaves lag behind. While M1 and M2 are able to insert at a rate of thousands per second, S2 seems limited at 120 inserts/s. S1 varies between 70 and 180 but not more. Here's the slave status on S2 during this time:

mysql> show slave status\G;
Slave_IO_State: Waiting for master to send event
Master_Host: *******
Master_User: *******
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000799
Read_Master_Log_Pos: 677668480
Relay_Log_File: db2-relay-bin.000101
Relay_Log_Pos: 568744098
Relay_Master_Log_File: mysql-bin.000799
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 568743885
Relay_Log_Space: 677668945
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 1118
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 2
Master_UUID: 0b400f69-3459-16e6-a835-14feb5d6c592
Master_Info_File: /var/lib/mysql/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Reading event from the relay log
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:

And here's the general mysql config of all the nodes:

[mysqld]
sql_mode=NO_ENGINE_SUBSTITUTION
# MyISAM
key-buffer-size = 32M
# SAFETY
max-allowed-packet = 16M
max-connect-errors = 1000000
skip-name-resolve
# DATA STORAGE
datadir = /var/lib/mysql/
# BINARY LOGGING modified 4 slave replication
server-id = 3
binlog_do_db = ********
log-bin = /var/lib/mysql/mysql-bin
expire-logs-days = 14
sync-binlog = 1
binlog_format = ROW
relay_log_info_repository=TABLE
relay_log_recovery = ON
# CACHES AND LIMITS
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 2000
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 2048
# INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 100M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 90G
innodb_buffer_pool_instances = 48
# LOGGING
log-error = /var/lib/mysql/mysql-error.log
slow-query-log = 1
slow-query-log-file = /var/log/mysql/mysql-slow.log

Also, checking on the processes on S2 I only see: Waiting for master to send event Reading event from the relay log Any help or idea to get to the bottom of this would be highly appreciated. Update: here's a visual of the insert rates (image not preserved). Update 2: it's not just the inserts; it's everything except selects (image not preserved).
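Two things stand out in this config. With the 5.7 default slave_parallel_type=DATABASE, the 8 configured workers cannot parallelize a single-schema workload; and sync-binlog = 1 plus innodb-flush-log-at-trx-commit = 1 force at least one fsync per applied transaction, which by itself can cap a serial applier at low hundreds of transactions per second. A hedged sketch for the replicas only:

```
# On the lagging replica (leave the master's durability settings alone).
mysql -e "
  STOP SLAVE SQL_THREAD;
  SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';
  SET GLOBAL slave_parallel_workers = 8;
  START SLAVE SQL_THREAD;
"

# Optionally trade some crash-safety on the replica for apply speed:
mysql -e "
  SET GLOBAL sync_binlog = 0;
  SET GLOBAL innodb_flush_log_at_trx_commit = 2;
"
```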
Bobby Tables (101 rep)
Feb 25, 2020, 08:47 AM • Last activity: Jul 5, 2025, 10:03 AM
1 vote
1 answer
185 views
What is the best way to migrate a multi-TB MySQL database from on-prem to gcloud Cloud SQL?
I have several multi-TB on-prem MySQL databases I need to migrate into Google Cloud's managed MySQL offering, Cloud SQL. I have migrated two of ~1TB so far with mysqldump, but this method is far too slow for the bigger databases. Ideally I would like to use Percona XtraBackup, but I don't know if that is possible. Has anyone completed such a migration? What tools did you use? Thanks in advance.
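For what it's worth: Cloud SQL's import paths have historically been logical (SQL dump or CSV via GCS, or continuous migration via the Database Migration Service), and a raw XtraBackup physical restore into a managed instance has generally not been possible, so the usual speed-up is parallelizing the logical dump and load. A hedged sketch with mydumper/myloader (hosts, credentials and thread counts are placeholders):

```
# mysqldump is single-threaded; mydumper/myloader dump and reload
# tables in parallel, which often cuts multi-TB transfer times a lot.
mydumper --host=onprem-db --user=dump --password=secret \
         --threads=8 --compress --outputdir=/backup/dump

myloader --host=CLOUDSQL_IP --user=admin --password=secret \
         --threads=8 --directory=/backup/dump
```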
CClarke (133 rep)
Oct 26, 2022, 11:47 AM • Last activity: Jun 26, 2025, 03:03 PM
5 votes
1 answer
9812 views
How to prevent "ibdata files do not match the log sequence number"?
I am dealing with a very large set of databases that are all innodb. I've enountered this on mysql restart too many times for my comfort: ibdata files do not match the log sequence number But I've clearly watched mysql shutdown properly just before the restart when that message happens. Then it "repairs" right up to the original sequence number with nothing lost. What is the best approach to deal with and fix this permanently? Using Percona with innodb_file_per_table=1 Example log: InnoDB: Initializing buffer pool, size = 80.0G InnoDB: Completed initialization of buffer pool InnoDB: Highest supported file format is Barracuda. InnoDB: The log sequence numbers 475575972691 and 475575972691 in ibdata files do not match the log sequence number 925369860131 in the ib_logfiles! InnoDB: Database was not shutdown normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages InnoDB: from the doublewrite buffer... InnoDB: 128 rollback segment(s) are active. InnoDB: Waiting for purge to start InnoDB: Percona XtraDB started; log sequence number 925369860131 Note how the final log sequence number now matches what it thought was wrong in the first place, so there was 100% recovery? So why is the log sequence not being properly written to ibd? Is it possible shutdown is incomplete somehow? Thank you for any advice. ps. I always wonder if I should be asking this on serverfault or here? Is it okay I asked here?
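A hedged sketch of two things worth checking for this symptom: innodb_fast_shutdown=2 deliberately skips the final flush and behaves like a crash (producing exactly this recovery message), and an init script that force-kills mysqld before the shutdown completes has the same effect.

```
# 2 = crash-like shutdown, which yields this message by design.
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_fast_shutdown'"

# 0 = slow, fully clean shutdown (full purge + change buffer merge):
mysql -e "SET GLOBAL innodb_fast_shutdown = 0"

# Also confirm the init system waits long enough; with an 80 GB buffer
# pool a clean shutdown can take minutes before it is safe to kill.
```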
ck_ (271 rep)
Aug 30, 2013, 05:04 PM • Last activity: Jun 26, 2025, 01:05 AM
1 vote
1 answer
213 views
DB locks being created on temp tables. Can percona ignore temp tables for replication
I have a stored procedure that creates a temp table. Recently I've noticed the logs are filling up with lock conflicts. I think the temp table is being replicated, but I am unsure. *** Victim TRANSACTION: TRANSACTION 0, ACTIVE 1 sec fetching rows mysql tables in use 3, locked 3 MySQL thread id 1875027, OS thread handle 140436968662784, query id 41228278 dbserver 192.168.5.5 web_east Sending data INSERT IGNORE into temp_table select ... from db.table as f inner join data_temp temp on temp.id =f.id where f.date1 >= "2018-07-01 00:00:00" and f.date2 < "2019-08-01 00:00:00" *** WAITING FOR THIS LOCK TO BE GRANTED: RECORD LOCKS space id 2272 page no 5986916 n bits 288 index db_name_date_idx of table db.table trx id 421941145506800 lock mode S locks gap before rec 2019-12-02T10:25:38.389502Z 10 [Note] WSREP: --------- CONFLICT DETECTED -------- 2019-12-02T10:25:38.389506Z 10 [Note] WSREP: cluster conflict due to high priority abort for threads: [Note] WSREP: Winning thread: THD: 10, mode: applier, state: executing, conflict: no conflict, seqno: 7653754 SQL: (null)
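A hedged reading of that log, with a sketch: CREATE TEMPORARY TABLE itself is not replicated by Galera, so the conflict more likely comes from the INSERT ... SELECT, which under REPEATABLE READ takes shared gap locks on the *source* table ("lock mode S locks gap before rec" above) that collide with the high-priority applier. Running the procedure's session in READ COMMITTED makes the SELECT a non-locking consistent read (row-based replication is already in use here).

```
mysql -e "
  SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
  CALL my_schema.my_procedure();  -- hypothetical procedure name
"
```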
newdeveloper (133 rep)
Dec 2, 2019, 02:40 PM • Last activity: Jun 14, 2025, 05:04 PM
1 vote
1 answer
268 views
mysql client auto reconnect in case of ERROR 2013
I'm searching for a way to instruct the mysql client to reconnect in case of ERROR 2013. I'm testing PXC with ProxySQL and want to keep SQL statements flowing from the client when the writer node gets killed and a new one is promoted. Is it possible for the mysql client to reconnect when the server goes down during a query? Can the mysql client rerun the SQL query (insert, update, ...)? With Sysbench it is possible by setting the --mysql-ignore-errors=all parameter. (from Comment) I'll be calling that SP from a custom lua script, where I'll test for the 'error 2013' condition and in that case rerun the query. Does this make sense, or can the value of @err set by the error handler not be passed to the script, because the session will just die when mysqld gets killed? DELIMITER // CREATE PROCEDURE sbtest.InsertIntoTable ( IN trnid INT, IN unixtime INT, OUT err INT) BEGIN DECLARE CONTINUE HANDLER FOR 2013 SET @err = 1; INSERT INTO sbtest.failover_test ( node, trn_id, unix_time ) VALUES (@@hostname, trnid, unixtime); END// My table: failover_test | CREATE TABLE failover_test ( id int NOT NULL AUTO_INCREMENT, node varchar(255) DEFAULT NULL, trn_id int DEFAULT NULL, unix_time int DEFAULT NULL, created_at timestamp NULL DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY (id) ) ENGINE=InnoDB AUTO_INCREMENT=116837 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci | -- CALL sbtest.InsertIntoTable(1, 1599207191, @err);SELECT @err
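On the client side: the mysql client's auto-reconnect (MYSQL_OPT_RECONNECT in the C API) only re-establishes the session; it never re-executes the statement that was in flight, and session state such as @err dies with the old connection. So the retry has to live in the caller. A minimal bash sketch (ProxySQL host, port and credentials are placeholders):

```
#!/bin/bash
# Retry the CALL when the connection is dropped (ERROR 2013 makes the
# client exit nonzero), waiting for ProxySQL to promote a new writer.
insert_with_retry() {
  local trn_id=$1 unix_time=$2 tries=0
  until mysql -h proxysql-host -P6033 -u sbtest -psecret \
        -e "CALL sbtest.InsertIntoTable($trn_id, $unix_time, @err)"; do
    (( ++tries >= 5 )) && return 1   # give up after 5 attempts
    sleep 1
  done
}

insert_with_retry 1 "$(date +%s)"
```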
Sevak (11 rep)
Aug 27, 2020, 03:43 PM • Last activity: May 17, 2025, 11:04 PM
1 vote
1 answer
274 views
Does wsrep_cluster_address only accept the intranet IPs?
I was installing Percona XtraDB Cluster with 3 nodes. Each node has a unique Internet IP address. Following the docs Quick Start Guide for Percona XtraDB Cluster , I got the first node running but failed to get the other two nodes working. So I suspect that wsrep_cluster_address only accepts intranet IPs. Is that true?
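For what it's worth, wsrep_cluster_address accepts any routable address, public Internet IPs included; what usually breaks the second and third node is filtered Galera ports rather than the address family. A quick connectivity check, with placeholder addresses:

```
# From each node, verify the others are reachable on all Galera ports:
# 3306 (MySQL), 4567 (group communication), 4568 (IST), 4444 (SST).
for port in 3306 4567 4568 4444; do
  nc -zv OTHER_NODE_PUBLIC_IP "$port"
done

# my.cnf on every node (example public IPs):
#   wsrep_cluster_address = gcomm://203.0.113.1,203.0.113.2,203.0.113.3
#   wsrep_node_address    = <this node's public IP>
```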
jungle (11 rep)
Jul 26, 2018, 11:56 AM • Last activity: May 16, 2025, 10:04 AM
0 votes
1 answer
296 views
Can't bring up slave from ec2-consistent-snapshot due to uncommitted prepared transaction
I'm struggling to bring up a slave instance from a snapshot created by ec2-consistent-snapshot. The log reports an unresolved prepared transaction, but isn't that exactly what ec2-consistent-snapshot is supposed to prevent? My command for creating snapshots is as follows... _(forgive the ansible variable placeholders)_

/usr/local/bin/ec2-consistent-snapshot-master/ec2-consistent-snapshot -q --aws-access-key-id {{ aws.access_key }} --aws-secret-access-key {{ aws.secret_key }} --region {{ aws.region }} --tag "Name={{ inventory_hostname }};Role={{ mysql_repl_role }}" --description "Database backup snapshot - {{ inventory_hostname_short }}" --freeze-filesystem /mnt/perconadata --percona --mysql-host localhost --mysql-socket /mnt/perconadata/mysql.sock --mysql-username root --mysql-password {{ mysql_root_password }} $VOLUME_ID

And the log resulting from the failed attempt to bring it up on the slave is as follows...

InnoDB: Doing recovery: scanned up to log sequence number 64107621643
InnoDB: Transaction 1057322289 was in the XA prepared state.
InnoDB: 1 transaction(s) which must be rolled back or cleaned up
InnoDB: in total 0 row operations to undo
InnoDB: Trx id counter is 1057322752
2017-01-27 14:33:44 11313 [Note] InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percent: 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: Last MySQL binlog file position 0 33422772, file name mysql-bin.000011
2017-01-27 14:33:46 11313 [Note] InnoDB: 128 rollback segment(s) are active.
InnoDB: Starting in background the rollback of uncommitted transactions
2017-01-27 14:33:46 7f3a90c75700 InnoDB: Rollback of non-prepared transactions completed
2017-01-27 14:33:46 11313 [Note] InnoDB: Waiting for purge to start
2017-01-27 14:33:46 11313 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.34-79.1 started; log sequence number 64107621643
CONFIG: num_threads=8
CONFIG: nonblocking=1(default)
CONFIG: use_epoll=1
CONFIG: readsize=0
CONFIG: conn_per_thread=1024(default)
CONFIG: for_write=0(default)
CONFIG: plain_secret=(default)
CONFIG: timeout=300
CONFIG: listen_backlog=32768
CONFIG: host=(default)
CONFIG: port=9998
CONFIG: sndbuf=0
CONFIG: rcvbuf=0
CONFIG: stack_size=1048576(default)
CONFIG: wrlock_timeout=12
CONFIG: accept_balance=0
CONFIG: wrlock_timeout=12
CONFIG: accept_balance=0
CONFIG: wrlock_timeout=12
CONFIG: accept_balance=0
CONFIG: wrlock_timeout=12
CONFIG: accept_balance=0
CONFIG: wrlock_timeout=12
CONFIG: accept_balance=0
CONFIG: wrlock_timeout=12
CONFIG: accept_balance=0
CONFIG: wrlock_timeout=12
CONFIG: accept_balance=0
CONFIG: wrlock_timeout=12
CONFIG: accept_balance=0
CONFIG: num_threads=1
CONFIG: nonblocking=1(default)
CONFIG: use_epoll=1
CONFIG: readsize=0
CONFIG: conn_per_thread=1024(default)
CONFIG: for_write=1
CONFIG: plain_secret=
CONFIG: timeout=300
CONFIG: listen_backlog=32768
CONFIG: host=(default)
CONFIG: port=9999
CONFIG: sndbuf=0
CONFIG: rcvbuf=0
CONFIG: stack_size=1048576(default)
CONFIG: wrlock_timeout=12
CONFIG: accept_balance=0
handlersocket: initialized
2017-01-27 14:33:46 7f3dfe768840 InnoDB: Starting recovery for XA transactions...
2017-01-27 14:33:46 7f3dfe768840 InnoDB: Transaction 1057322289 in prepared state after recovery
2017-01-27 14:33:46 7f3dfe768840 InnoDB: Transaction contains changes to 1 rows
2017-01-27 14:33:46 7f3dfe768840 InnoDB: 1 transactions in prepared state after recovery
2017-01-27 14:33:46 11313 [Note] Found 1 prepared transaction(s) in InnoDB
2017-01-27 14:33:46 11313 [ERROR] Found 1 prepared transactions! It means that mysqld was not shut down properly last time and critical recovery informat$
2017-01-27 14:33:46 11313 [ERROR] Aborting

My two thoughts are that I've either missed something while creating the snapshot, or missed something when bringing up the slave from this type of snapshot. So my questions are...

**Am I missing some important parameters that force mysql/percona to commit transactions prior to freezing the file system?**

-- OR --

**Is there a parameter I should be using to bring the slave up to force it to act as if it's recovering from a crash?**
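On the second bolded question, one candidate is mysqld's --tc-heuristic-recover startup option, which exists for exactly this state: an XA transaction left prepared when the binlog needed to decide its fate is unavailable. A minimal sketch, assuming the option behaves as documented on this 5.6 build (whether to ROLLBACK or COMMIT is a data-safety judgment call):

```
# one-off start that heuristically resolves the prepared XA transaction;
# the server is expected to exit again after performing the recovery
mysqld --defaults-file=/etc/my.cnf --user=mysql --tc-heuristic-recover=ROLLBACK

# then a normal start
service mysql start
```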
oucil (516 rep)
Jan 27, 2017, 08:30 PM • Last activity: May 13, 2025, 06:03 AM
0 votes
1 answers
302 views
Cannot prepare backup with innobackupex
I taken full backup from a Master server with innobackupex. Now I am restoring backup with `innobackupex` on a Slave server. I prepare backup with the command: ``` innobackupex --defaults-file=/var/lib/mysql/backup-my.cnf --apply-log /var/lib/mysql ``` This command end up with messages: ``` xtraback...
I took a full backup from a master server with innobackupex. Now I am restoring the backup with innobackupex on a slave server. I prepare the backup with the command:
innobackupex --defaults-file=/var/lib/mysql/backup-my.cnf --apply-log /var/lib/mysql
This command ends with the following messages:
xtrabackup: cd to /var/lib/mysql/
xtrabackup: This target seems to be not prepared yet.
xtrabackup: No valid checkpoint found.
xtrabackup: Error: xtrabackup_init_temp_log() failed.
What do I need to do to get rid of these messages and restore my backup? I use innobackupex version 2.3.10 and Percona Server 5.7.35. I have seen [this](https://dba.stackexchange.com/questions/42855/percona-xtrabackup-prepare-fails) question, but its answer did not help me.
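For context, "No valid checkpoint found" means the directory handed to --apply-log contains no xtrabackup_checkpoints file, and the command above points at /var/lib/mysql, which is normally the live datadir rather than the backup copy. Note also that the 2.4 series of XtraBackup is the one that supports MySQL/Percona Server 5.7. A sketch of the usual cycle, with a hypothetical backup directory:

```
# assumes XtraBackup 2.4+; /data/backups and the timestamped dir are placeholders
innobackupex --user=root --password='***' /data/backups/        # take the backup on the master
innobackupex --apply-log /data/backups/2021-12-14_07-00-00/     # prepare the backup copy, not the datadir
innobackupex --copy-back /data/backups/2021-12-14_07-00-00/     # restore into an empty datadir
chown -R mysql:mysql /var/lib/mysql
```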
Ongu (1 rep)
Dec 14, 2021, 07:07 AM • Last activity: May 13, 2025, 03:06 AM
1 votes
1 answers
276 views
Percona-Server: slow TokuDB queries after upgrading from 5.6 to 5.7. ANALYZE TABLE doesn't resolve the problem
After upgrading from Percona-TokuDB 5.6.29-76.2 to 5.7.19-17 we see some very slow queries on some tables without primary keys, but multiple non-unique indexes. The box we migrated to is pretty well equipped (768 GB RAM, PCIe SSDs). We used mysql_upgrade after migration. After investigating https://...
After upgrading from Percona-TokuDB 5.6.29-76.2 to 5.7.19-17 we see some very slow queries on tables without primary keys but with multiple non-unique indexes. The box we migrated to is well equipped (768 GB RAM, PCIe SSDs), and we ran mysql_upgrade after the migration. After investigating https://dba.stackexchange.com/questions/135180/percona-5-7-tokudb-poor-query-performance-wrong-non-clustered-index-chosen we tried ANALYZE TABLE (even with RECOUNT_ROWS), REPAIR TABLE, and ALTER TABLE *** FORCE, without any effect. Typical table structure:

CREATE TABLE letter_archiv_12375 (
  user_id int(12) unsigned NOT NULL DEFAULT '0',
  letter_id mediumint(6) unsigned NOT NULL DEFAULT '0',
  crypt_id bigint(12) unsigned NOT NULL DEFAULT '0',
  mailerror tinyint(1) unsigned NOT NULL DEFAULT '0',
  unsubscribe tinyint(1) unsigned NOT NULL DEFAULT '0',
  send_date date NOT NULL,
  code varchar(255) NOT NULL DEFAULT '',
  KEY crypt_id (crypt_id),
  KEY letter_id (letter_id),
  KEY user_id (user_id)
) ENGINE=TokuDB

A simple query like the following takes 4 seconds on a table with 200m rows:

UPDATE hoovie_1.letter_archiv_14167 SET unsubscribe = 1 WHERE letter_id = "784547" AND user_id = "2881564";

The cardinality values are correct. EXPLAIN gives:

id  select_type  table                partitions  type   possible_keys      key        key_len  ref    rows  filtered  Extra
1   UPDATE       letter_archiv_14167  NULL        range  letter_id,user_id  letter_id  3        const  1     100.00    Using where

The only solution is to remove and re-create at least one index. After dropping and re-creating the index letter_id, the query performs well again (0.01 s) and the EXPLAIN changes to:

id  select_type  table                partitions  type   possible_keys      key      key_len  ref    rows  filtered  Extra
1   UPDATE       letter_archiv_14167  NULL        range  user_id,letter_id  user_id  4        const  99    100.00    Using where

We have some thousands of TokuDB tables in production; a performance loss by a factor of 300-500 is a problem. So we are unsure whether to migrate to 5.7, since this behaviour could recur even after re-creating all indexes. Any ideas?
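For reference, a minimal sketch of the drop-and-recreate workaround described above, done as a single ALTER so the table is never left without the index (names are taken from the example; whether the rebuilt statistics survive further writes is exactly the open question):

```
mysql hoovie_1 -e "ALTER TABLE letter_archiv_14167
  DROP KEY letter_id,
  ADD KEY letter_id (letter_id);"

# verify the optimizer now chooses the cheaper index
mysql hoovie_1 -e "EXPLAIN UPDATE letter_archiv_14167
  SET unsubscribe = 1
  WHERE letter_id = 784547 AND user_id = 2881564;"
```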
Ralf Engler (11 rep)
Dec 18, 2017, 05:55 PM • Last activity: May 12, 2025, 12:00 AM
0 votes
1 answers
294 views
Does Percona Xtrabackup support RedHat release 4
I downloaded Percona's Xtrabackup rpm file for RedHAT. I attempted to install but hit into an error. The error seems to point to missing dependencies and libraries. I have no issue with RedHat 6 OS. I could not find the binaries in Percona's website for RedHat4. Is it supported in RedHat 4? Possible...
I downloaded Percona's XtraBackup rpm for Red Hat and attempted to install it, but hit an error. The error seems to point to missing dependencies and libraries. I have no issue on Red Hat 6. I could not find binaries for Red Hat 4 on Percona's website. Is Red Hat 4 supported? Is there a possible workaround?

rpm -Uvh percona-xtrabackup-2.2.9-5067.el6.x86_64.rpm
warning: only V3 signatures can be verified, skipping V4 signature
error: Failed dependencies:
    libc.so.6(GLIBC_2.10)(64bit) is needed by percona-xtrabackup-2.2.9-5067.el6.x86_64
    libz.so.1(ZLIB_1.2.0)(64bit) is needed by percona-xtrabackup-2.2.9-5067.el6.x86_64
    perl(DBD::mysql) is needed by percona-xtrabackup-2.2.9-5067.el6.x86_64
    rpmlib(FileDigests) <= 4.6.0-1 is needed by percona-xtrabackup-2.2.9-5067.el6.x86_64
    rtld(GNU_HASH) is needed by percona-xtrabackup-2.2.9-5067.el6.x86_64
    rpmlib(PayloadIsXz) <= 5.2-1 is needed by percona-xtrabackup-2.2.9-5067.el6.x86_64
Suggested resolutions:
    /var/spool/up2date/perl-DBD-MySQL-2.9004-3.1.x86_64.rpm
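As a quick sanity check (a sketch, not a fix): the failed dependency on libc.so.6(GLIBC_2.10) alone rules this binary out on Red Hat 4, which ships glibc 2.3.4, so installing the suggested packages cannot satisfy it:

```
rpm -q glibc                 # RHEL 4 reports glibc-2.3.4-*
ldd --version | head -n1     # the runtime glibc version

# list the symbol versions the package actually demands
rpm -qp --requires percona-xtrabackup-2.2.9-5067.el6.x86_64.rpm | grep GLIBC
```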
Haans (107 rep)
Mar 9, 2015, 03:42 AM • Last activity: May 5, 2025, 11:05 PM
2 votes
1 answers
315 views
Percona Server 5.6.40 restarting with signal 11
We recently migrated our mysql to new hardware and it was running in slave mode for 15 days. We made it the master 11th June. On 13th June it restarted for the first time with signal 11. Stacktrace from 1st segfault - > 04:32:32 UTC - mysqld got signal 11 ; This could be because you hit a > bug. It...
We recently migrated our MySQL to new hardware and it ran in slave mode for 15 days. We made it the master on 11 June. On 13 June it restarted for the first time with signal 11. Stack trace from the first segfault:

04:32:32 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. Please help us make Percona Server better by reporting any bugs at http://bugs.percona.com/

key_buffer_size=33554432
read_buffer_size=131072
max_used_connections=547
max_threads=5002
thread_count=430
connection_count=430
It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 2023198 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x2a83900
Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
stack_bottom = 7f86b8c24e88 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0x8c66bc]
/usr/sbin/mysqld(handle_fatal_signal+0x469)[0x64d079]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf890)[0x7f8e9871c890]
/usr/sbin/mysqld(_Z25gtid_pre_statement_checksPK3THD+0x0)[0x848820]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x316)[0x6cb8a6]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x5d8)[0x6d15e8]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x117f)[0x6d2eaf]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x1a2)[0x69f962]
/usr/sbin/mysqld(handle_one_connection+0x40)[0x69fa00]
/usr/sbin/mysqld(pfs_spawn_thread+0x146)[0x8fbfe6]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064)[0x7f8e98715064]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f8e9675462d]

Trying to get some variables. Some pointers may be invalid and cause the dump to abort.
Query (7f85e012ee80): is an invalid pointer
Connection ID (thread ID): 247827
Status: NOT_KILLED

Again, after two days MySQL restarted three times with the following traces.

Dump 2:

02:15:36 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. Please help us make Percona Server better by reporting any bugs at http://bugs.percona.com/

key_buffer_size=33554432
read_buffer_size=131072
max_used_connections=723
max_threads=5002
thread_count=358
connection_count=358
It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 2023198 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x2b73a90
Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
stack_bottom = 7f0e12d45e88 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0x8c66bc]
/usr/sbin/mysqld(handle_fatal_signal+0x469)[0x64d079]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf890)[0x7f15f20b9890]
/usr/sbin/mysqld[0x64b000]
/usr/sbin/mysqld(vio_io_wait+0x76)[0xb77b56]
/usr/sbin/mysqld(vio_socket_io_wait+0x18)[0xb77bf8]
/usr/sbin/mysqld(vio_read+0xca)[0xb77cda]
/usr/sbin/mysqld[0x642203]
/usr/sbin/mysqld[0x6424f4]
/usr/sbin/mysqld(my_net_read+0x304)[0x6432e4]
/usr/sbin/mysqld(_Z10do_commandP3THD+0xca)[0x6d413a]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x1a2)[0x69f962]
/usr/sbin/mysqld(handle_one_connection+0x40)[0x69fa00]
/usr/sbin/mysqld(pfs_spawn_thread+0x146)[0x8fbfe6]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064)[0x7f15f20b2064]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f15f00f162d]

Trying to get some variables. Some pointers may be invalid and cause the dump to abort.
Query (0): is an invalid pointer
Connection ID (thread ID): 40900
Status: NOT_KILLED

Dump 3:

02:36:32 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. Please help us make Percona Server better by reporting any bugs at http://bugs.percona.com/

key_buffer_size=33554432
read_buffer_size=131072
max_used_connections=401
max_threads=5002
thread_count=369
connection_count=369
It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 2023198 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x32448f0
Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
stack_bottom = 7f2fb82c3e88 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0x8c66bc]
/usr/sbin/mysqld(handle_fatal_signal+0x469)[0x64d079]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf890)[0x7f3792426890]
/usr/sbin/mysqld(_ZN9PROFILING15start_new_queryEPKc+0x0)[0x6e60a0]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x47)[0x6d1d77]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x1a2)[0x69f962]
/usr/sbin/mysqld(handle_one_connection+0x40)[0x69fa00]
/usr/sbin/mysqld(pfs_spawn_thread+0x146)[0x8fbfe6]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064)[0x7f379241f064]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f379045e62d]

Trying to get some variables. Some pointers may be invalid and cause the dump to abort.
Query (0): is an invalid pointer
Connection ID (thread ID): 482
Status: NOT_KILLED

After these restarts we did a master/slave switch and took this box out of the active cluster. We ran a simple sysbench oltp_read_write test to see if it would happen again. Two days after starting the benchmark it happened again on the same machine, with this trace:

10:51:18 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. Please help us make Percona Server better by reporting any bugs at http://bugs.percona.com/

key_buffer_size=33554432
read_buffer_size=131072
max_used_connections=4
max_threads=5002
thread_count=3
connection_count=3
It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 2023198 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x22c8270
Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
stack_bottom = 7f65ec060e88 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0x8c66bc]
/usr/sbin/mysqld(handle_fatal_signal+0x469)[0x64d079]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf890)[0x7f6644371890]
/lib/x86_64-linux-gnu/libc.so.6(__poll+0x0)[0x7f66423a0ac0]
/usr/sbin/mysqld(vio_io_wait+0x86)[0xb77b66]
/usr/sbin/mysqld(vio_socket_io_wait+0x18)[0xb77bf8]
/usr/sbin/mysqld(vio_read+0xca)[0xb77cda]
/usr/sbin/mysqld[0x642203]
/usr/sbin/mysqld[0x6424f4]
/usr/sbin/mysqld(my_net_read+0x304)[0x6432e4]
/usr/sbin/mysqld(_Z10do_commandP3THD+0xca)[0x6d413a]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x1a2)[0x69f962]
/usr/sbin/mysqld(handle_one_connection+0x40)[0x69fa00]
/usr/sbin/mysqld(pfs_spawn_thread+0x146)[0x8fbfe6]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064)[0x7f664436a064]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f66423a962d]

Trying to get some variables. Some pointers may be invalid and cause the dump to abort.
Query (0): is an invalid pointer
Connection ID (thread ID): 32
Status: NOT_KILLED

Logs from sysbench:

FATAL: mysql_drv_query() returned error 2013 (Lost connection to MySQL server during query) for query 'SELECT SUM(k) FROM sbtest20 WHERE id BETWEEN 5008643 AND 5008742'
FATAL: mysql_drv_query() returned error 2013 (Lost connection to MySQL server during query) for query 'DELETE FROM sbtest4 WHERE id=5025943'
FATAL: mysql_drv_query() returned error 2013 (Lost connection to MySQL server during query) for query 'SELECT SUM(k) FROM sbtest15 WHERE id BETWEEN 5049412 AND 5049511'
FATAL: `thread_run' function failed: /usr/share/sysbench/oltp_common.lua:432: SQL error, errno = 2013, state = 'HY000': Lost connection to MySQL server during query
FATAL: `thread_run' function failed: /usr/share/sysbench/oltp_common.lua:487: SQL error, errno = 2013, state = 'HY000': Lost connection to MySQL server during query
FATAL: `thread_run' function failed: /usr/share/sysbench/oltp_common.lua:432: SQL error, errno = 2013, state = 'HY000': Lost connection to MySQL server during query
Error in my_thread_global_end(): 3 threads didn't exit

Our new master also restarted today with a similar trace. Can someone help us debug this? We cannot enable the general query log since the load is very high and it fills up the disk, and we cannot deterministically reproduce the crash.

MySQL version: 5.6.40-84.0-log
Debian version: Linux version 3.16.0-6-amd64 (debian-kernel@lists.debian.org) (gcc version 4.9.2 (Debian 4.9.2-10+deb8u1)) #1 SMP Debian 3.16.56-1+deb8u1 (2018-05-08)
Machine memory: 40 GB
InnoDB buffer pool: 30 GB
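One debugging avenue that does not need the general query log, sketched under the assumption of a Debian-style layout: let mysqld write a core file on the next signal 11 and inspect it afterwards.

```
# enable core dumps; the core lands in the datadir (exact file name
# depends on kernel.core_pattern)
cat >> /etc/mysql/my.cnf <<'EOF'
[mysqld]
core-file
EOF
ulimit -c unlimited          # must apply to the environment that starts mysqld
service mysql restart

# after the next crash, pull a full backtrace from the core
gdb -batch -ex 'bt full' /usr/sbin/mysqld /var/lib/mysql/core.*
```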
Sarthak Shrivastava (21 rep)
Jun 17, 2019, 02:36 PM • Last activity: Apr 29, 2025, 01:01 AM
0 votes
1 answers
320 views
Percona XtraDB cluster backup solution
We have a 3 node percona xtradb cluster and two slaves attached to it. One of the slaves we want to use it for taking backups. We have GTID based replication setup between the slave and the Xtradb cluster. My question is if we are adding a new node to the xtradb cluster can we use the backups from t...
We have a 3-node Percona XtraDB Cluster with two slaves attached to it. We want to use one of the slaves for taking backups. We have GTID-based replication set up between the slave and the XtraDB cluster. My question: if we add a new node to the XtraDB cluster, can we use the backups from the slave to restore onto the new node? Also, would we be able to avoid SST and do IST instead? I am not sure how that part works, since the backups come from a slave. Any help will be appreciated.
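Not an authoritative answer, but on the IST part: a joiner can only request IST if its grastate.dat carries a uuid:seqno the donor still holds in its gcache. A backup taken from an async slave carries no wsrep state, so restoring it would normally still trigger SST. Had the backup come from a cluster node taken with xtrabackup's --galera-info, seeding the state would look roughly like this sketch (paths are placeholders and the grastate.dat field set varies slightly across Galera versions):

```
state=$(cat /data/backup/xtrabackup_galera_info)   # e.g. "d38587ce-...:5234"
cat > /var/lib/mysql/grastate.dat <<EOF
# GALERA saved state
version: 2.1
uuid:    ${state%:*}
seqno:   ${state##*:}
safe_to_bootstrap: 0
EOF
chown mysql:mysql /var/lib/mysql/grastate.dat
```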
Lamp Consultants (13 rep)
Jun 13, 2017, 05:25 PM • Last activity: Apr 25, 2025, 10:03 PM
0 votes
1 answers
839 views
Is it possible to setup MySQL replication through ProxySQL?
We currently have replication working through haproxy so that if a node goes down replication continues with one of the other nodes. We're trying to completely replace haproxy with proxysql. Due to this I'm trying to replicate this functionality in proxysql. Test: 3 on-prem nodes setup as a Percona...
We currently have replication working through HAProxy, so that if a node goes down replication continues with one of the other nodes. We're trying to completely replace HAProxy with ProxySQL, so I'm trying to replicate this functionality in ProxySQL.

Test setup:

3 on-prem nodes set up as a Percona Galera cluster.
1 on-prem ProxySQL node pointing to the Percona cluster.
1 AWS EC2 node with Percona installed.

I set up replication between the EC2 node and one of the 3 on-prem nodes without issue. I can also connect to the DB through ProxySQL from the AWS node. Whenever I stop the slave and run the following:

CHANGE MASTER TO
MASTER_HOST="XX.XXX.XXX.193",
MASTER_USER="[username]",
MASTER_PASSWORD="[password]",
MASTER_AUTO_POSITION = 1,
MASTER_PORT = 6033;

I get the following error when I run SHOW SLAVE STATUS:

Slave_IO_State: Waiting to reconnect after a failed registration on master
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
Last_IO_Errno: 1597
Last_IO_Error: Master command COM_REGISTER_SLAVE failed: Lost connection to MySQL server during query (Errno: 2013)

I can then swap back to connecting directly between DB nodes and replication works fine. Is the MySQL backend of ProxySQL preventing this from working in the same way it does with HAProxy?
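For comparison, a sketch of the configuration that keeps working: replication speaks the binlog protocol (COM_REGISTER_SLAVE, COM_BINLOG_DUMP), which a query-routing proxy generally does not forward, so the slave is pointed at a backend node's own MySQL port instead of ProxySQL's listener:

```
mysql -e "STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST='XX.XXX.XXX.193',
  MASTER_USER='[username]',
  MASTER_PASSWORD='[password]',
  MASTER_AUTO_POSITION = 1,
  MASTER_PORT = 3306;   -- the node itself, not the 6033 proxy port
START SLAVE;"
```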
Chris Batchelor (1 rep)
Apr 18, 2019, 07:35 PM • Last activity: Apr 22, 2025, 12:05 AM
2 votes
1 answers
506 views
Is it safe to use pt-online-schema-change in a multimaster environment?
I have 2 MySQL servers with row-based replication between them. Both of them are masters and slaves for each other (active-active master-master setup). If I understand it correctly pt-osc creates triggers to catch any changes while running. But from what I know triggers are not fired in a row-based...
I have 2 MySQL servers with row-based replication between them. Both are masters and slaves for each other (an active-active master-master setup). If I understand it correctly, pt-osc creates triggers to catch any changes made while it runs. But from what I know, triggers are not fired when a slave applies row-based events, so I guess pt-osc is not able to catch changes made on the second master during the change. Is that right? EDIT: While doing some tests I saw that pt-osc created triggers on both masters, which would cover changes from both sides. Still, I'm quite unsure whether I can safely do online changes in this environment.
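A hedged way to feel this out before trusting it: run pt-osc against one master with a dry run and an explicit lag check on the other master. Host, database, table, and column names below are placeholders:

```
pt-online-schema-change \
  --alter "ADD COLUMN new_col TINYINT NOT NULL DEFAULT 0" \
  --check-slave-lag h=master2 \
  --max-lag 5 \
  --dry-run \
  D=mydb,t=mytable,h=master1
```

Swapping --dry-run for --execute performs the real change; --check-slave-lag makes pt-osc throttle chunk copying whenever the peer master falls behind.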
Mrskman (121 rep)
Jan 15, 2019, 02:10 PM • Last activity: Apr 21, 2025, 05:04 PM