
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
188 views
TokuDB Hot Column Expansion
I need to expand a varchar field length from 255 to 4000. I am using tokudb_version: tokudb-7.5.8, running on Linux 3.16.0-60-generic #80~14.04.1-Ubuntu SMP Wed Jan 20 13:37:48 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux. I know TokuDB supports hot column operations, but this is not working for me (number of rows ~210 million).

Show variables file: https://drive.google.com/file/d/0B5noFLrbjDjzSW9wdnVjb095Q0U/view?usp=sharing

Alter command:

    alter table test_table modify test_column varchar(4000);

Show processlist:

    mysql> show processlist;
    +----+------+-----------+------+---------+------+---------------------------------------------------------+---------------------------------------------------------+-----------+---------------+
    | Id | User | Host      | db   | Command | Time | State                                                   | Info                                                    | Rows_sent | Rows_examined |
    +----+------+-----------+------+---------+------+---------------------------------------------------------+---------------------------------------------------------+-----------+---------------+
    |  6 | root | localhost | NULL | Query   |    0 | init                                                    | show processlist                                        |         0 |             0 |
    |  7 | root | localhost | test | Query   |  461 | Queried about 2445001 rows, Inserted about 2445000 rows | alter table test_table modify test_column varchar(4000) |         0 |             0 |
    +----+------+-----------+------+---------+------+---------------------------------------------------------+---------------------------------------------------------+-----------+---------------+
    2 rows in set (0.00 sec)

Any idea which options I might need to set? It is currently processing at ~6k rows per second, which might take ~10 hours.
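A quick way to check whether TokuDB is actually treating the ALTER as a hot operation is to forbid slow alters first. This is a diagnostic sketch; it assumes the tokudb_disable_slow_alter session variable is available in this build:

```sql
-- With slow alters disabled, an ALTER that TokuDB cannot run as a hot
-- operation fails immediately instead of silently rebuilding the table.
SET SESSION tokudb_disable_slow_alter = ON;

-- If this now errors out, the VARCHAR expansion is being executed as a
-- slow (copy/rebuild) alter, not as a hot column expansion.
ALTER TABLE test_table MODIFY test_column VARCHAR(4000);
```

The processlist state ("Queried about 2445001 rows, Inserted about 2445000 rows") already suggests a row-by-row table rebuild rather than a hot message-based operation.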
Kyalo (11 rep)
Jul 26, 2016, 09:16 AM • Last activity: Jun 30, 2025, 10:04 AM
1 vote
1 answer
228 views
TokuDB table stats row count decreases to 1
I have a few Percona 5.6.30 servers with a TokuDB table that has ~120 million rows. On some of these servers, the row count in SHOW TABLE STATUS mysteriously decreases until it hits 1. This is accompanied by a rather unpleasant performance reduction and happens on both master and slave servers. I've tried to fix this by running ANALYZE TABLE, running OPTIMIZE TABLE, restarting the affected server, and running ALTER TABLE ... FORCE. None of these has had any effect. The only two things that help are ALTER TABLE ... ENGINE=TokuDB and recreating the table from dumped data. These bring the row count back to sane values, after which it begins to decrease again. Any hints about what might have caused this issue and how to fix it would be helpful.
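One thing worth trying, sketched below under the assumption of a build that supports TokuDB's row-recount analyze mode (added in later Percona releases; not present in every 5.6 build). big_table is a stand-in for the affected table:

```sql
-- Force an actual recount of logical rows instead of the default
-- statistics refresh, then compare against the decayed estimate.
SET SESSION tokudb_analyze_mode = TOKUDB_ANALYZE_RECOUNT_ROWS;
ANALYZE TABLE big_table;

SELECT COUNT(*) FROM big_table;        -- exact, but slow on ~120M rows
SHOW TABLE STATUS LIKE 'big_table';    -- the estimate that decays toward 1
```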
che (119 rep)
May 25, 2016, 08:32 PM • Last activity: Jun 10, 2025, 11:03 PM
1 vote
1 answer
276 views
Percona-Server: slow TokuDB queries after upgrading from 5.6 to 5.7. ANALYZE TABLE doesn't resolve the problem
After upgrading from Percona-TokuDB 5.6.29-76.2 to 5.7.19-17 we see some very slow queries on some tables without primary keys, but with multiple non-unique indexes. The box we migrated to is pretty well equipped (768 GB RAM, PCIe SSDs). We used mysql_upgrade after migration. After investigating https://dba.stackexchange.com/questions/135180/percona-5-7-tokudb-poor-query-performance-wrong-non-clustered-index-chosen we tried ANALYZE TABLE (even with RECOUNT_ROWS), REPAIR TABLE, and ALTER TABLE *** FORCE, without any effect.

Typical table structure:

    CREATE TABLE letter_archiv_12375 (
      user_id int(12) unsigned NOT NULL DEFAULT '0',
      letter_id mediumint(6) unsigned NOT NULL DEFAULT '0',
      crypt_id bigint(12) unsigned NOT NULL DEFAULT '0',
      mailerror tinyint(1) unsigned NOT NULL DEFAULT '0',
      unsubscribe tinyint(1) unsigned NOT NULL DEFAULT '0',
      send_date date NOT NULL,
      code varchar(255) NOT NULL DEFAULT '',
      KEY crypt_id (crypt_id),
      KEY letter_id (letter_id),
      KEY user_id (user_id)
    ) ENGINE=TokuDB

A simple query like the following takes 4 seconds on a table with 200m rows:

    UPDATE hoovie_1.letter_archiv_14167
    SET unsubscribe = 1
    WHERE letter_id = "784547" AND user_id = "2881564";

The cardinality values are correct. EXPLAIN results in:

    id select_type table partitions type possible_keys key key_len ref rows filtered Extra
    1 UPDATE letter_archiv_14167 NULL range letter_id,user_id letter_id 3 const 1 100.00 Using where

The only solution is to remove and re-create at least one index. After dropping and re-creating the index letter_id, the table performs well (0.01 s). The EXPLAIN changes to:

    id select_type table partitions type possible_keys key key_len ref rows filtered Extra
    1 UPDATE letter_archiv_14167 NULL range user_id,letter_id user_id 4 const 99 100.00 Using where

We have some thousands of TokuDB tables in production - a performance loss of a factor of 300-500 is a problem. So we are unsure about migrating to 5.7 - this behaviour could occur again even after re-creating all the indexes. Any ideas?
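The workaround the question describes (re-creating one index) can be written as a sketch; the index and table names are taken from the question itself:

```sql
-- Rebuilding a secondary index resets whatever per-index statistics are
-- steering the optimizer into the 300-500x slower plan.
ALTER TABLE hoovie_1.letter_archiv_14167 DROP INDEX letter_id;
ALTER TABLE hoovie_1.letter_archiv_14167 ADD INDEX letter_id (letter_id);
```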
Ralf Engler (11 rep)
Dec 18, 2017, 05:55 PM • Last activity: May 12, 2025, 12:00 AM
1 vote
1 answer
349 views
Query sometime not using index
Starting in the middle of last night (of course) I have a query that stops using an index and when that happens, it takes over an hour to complete vs. about 3 seconds when it uses the index. This query has been run for more than a year with no issues until last night. What I have been able to figure out is that the query is using an index sometimes and not others; using explain. It has been slow for 2 hours, then fast for 1 hour and now slow again, etc. When the query is running fast, explain tells me the query is using the key: builder_row_id When it is running slow, it has no key and the Extra has: Using join buffer (Block Nested Loop) Here are the 2 rows from explain, sorry about the formatting: id,select_type,table,partitions,type,possible_keys,key,key_len,ref,rows,filtered,Extra 1,SIMPLE,e8_,,ALL,builder_row_id,,,,1,100,Using where; Using join buffer (Block Nested Loop) 1,SIMPLE,e8_,,ref,builder_row_id,builder_row_id,6,"my_db_name.e6_.builder_block_id,const",4,100, SELECT e0_.id AS id_0, e0_.name AS name_1, e0_.content_id AS content_id_2, e0_.from_label AS from_label_3, e0_.support_address AS support_address_4, e0_.actual_from_label AS actual_from_label_5, e0_.actual_from_address AS actual_from_address_6, e0_.enable_wysiwyg AS enable_wysiwyg_7, e0_.enable_conversation AS enable_conversation_8, e0_.folder_id AS folder_id_9, e0_.enable_transactional AS enable_transactional_10, e0_.type_id AS type_id_11, e0_.utm_content AS utm_content_12, e1_.builder_style_id AS builder_style_id_13, e1_.id AS id_14, e1_.builder_style_key AS builder_style_key_15, e1_.builder_style_value AS builder_style_value_16, e1_.builder_style_delete_status AS builder_style_delete_status_17, e2_.content_id AS content_id_18, e2_.content_text AS content_text_19, e2_.content_html AS content_html_20, e2_.content_subject AS content_subject_21, e2_.content_preview_png AS content_preview_png_22, e3_.builder_region_id AS builder_region_id_23, e3_.id AS id_24, e3_.builder_region_name AS 
builder_region_name_25, e3_.builder_region_type_id AS builder_region_type_id_26, e3_.builder_region_delete_status AS builder_region_delete_status_27, e3_.builder_region_sort_order AS builder_region_sort_order_28, e3_.builder_ui_id AS builder_ui_id_29, e4_.builder_region_style_id AS builder_region_style_id_30, e4_.builder_region_id AS builder_region_id_31, e4_.builder_region_style_key AS builder_region_style_key_32, e4_.builder_region_style_value AS builder_region_style_value_33, e4_.builder_ui_id AS builder_ui_id_34, e4_.builder_region_style_delete_status AS builder_region_style_delete_status_35, e5_.builder_row_id AS builder_row_id_36, e5_.builder_region_id AS builder_region_id_37, e5_.builder_row_type_id AS builder_row_type_id_38, e5_.builder_row_delete_status AS builder_row_delete_status_39, e5_.builder_row_sort_order AS builder_row_sort_order_40, e5_.builder_ui_id AS builder_ui_id_41, e6_.builder_block_id AS builder_block_id_42, e6_.builder_row_id AS builder_row_id_43, e6_.builder_block_type_id AS builder_block_type_id_44, e6_.builder_block_delete_status AS builder_block_delete_status_45, e6_.builder_block_sort_order AS builder_block_sort_order_46, e6_.builder_ui_id AS builder_ui_id_47, e7_.builder_block_attribute_id AS builder_block_attribute_id_48, e7_.builder_block_id AS builder_block_id_49, e7_.builder_block_attribute_key AS builder_block_attribute_key_50, e7_.builder_block_attribute_value AS builder_block_attribute_value_51, e7_.builder_block_attribute_delete_status AS builder_block_attribute_delete_status_52, e8_.builder_column_id AS builder_column_id_53, e8_.builder_block_id AS builder_block_id_54, e8_.parent_builder_column_id AS parent_builder_column_id_55, e8_.builder_column_type_id AS builder_column_type_id_56, e8_.builder_column_delete_status AS builder_column_delete_status_57, e8_.builder_column_sort_order AS builder_column_sort_order_58, e8_.builder_ui_id AS builder_ui_id_59, e9_.builder_column_style_id AS builder_column_style_id_60, 
e9_.builder_column_style_key AS builder_column_style_key_61, e9_.builder_column_style_value AS builder_column_style_value_62, e9_.builder_ui_id AS builder_ui_id_63, e9_.builder_column_style_delete_status AS builder_column_style_delete_status_64, e10_.builder_column_attribute_id AS builder_column_attribute_id_65, e10_.builder_column_attribute_key AS builder_column_attribute_key_66, e10_.builder_column_attribute_value AS builder_column_attribute_value_67, e10_.builder_ui_id AS builder_ui_id_68, e10_.builder_column_attribute_delete_status AS builder_column_attribute_delete_status_69, e11_.builder_column_conf_id AS builder_column_conf_id_70, e11_.builder_column_conf_key AS builder_column_conf_key_71, e11_.builder_column_conf_value AS builder_column_conf_value_72, e11_.builder_ui_id AS builder_ui_id_73, e11_.builder_column_conf_delete_status AS builder_column_conf_delete_status_74, e12_.builder_column_id AS builder_column_id_75, e12_.builder_block_id AS builder_block_id_76, e12_.parent_builder_column_id AS parent_builder_column_id_77, e12_.builder_column_type_id AS builder_column_type_id_78, e12_.builder_column_delete_status AS builder_column_delete_status_79, e12_.builder_column_sort_order AS builder_column_sort_order_80, e12_.builder_ui_id AS builder_ui_id_81, e0_.content_id AS content_id_82, e0_.product_id AS product_id_83, e0_.unsubscribe_message_id AS unsubscribe_message_id_84, e0_.unsubscribe_language_id AS unsubscribe_language_id_85, e0_.rbac_role_id AS rbac_role_id_86, e0_.folder_id AS folder_id_87, e1_.id AS id_88, e3_.id AS id_89, e4_.builder_region_id AS builder_region_id_90, e5_.builder_region_id AS builder_region_id_91, e6_.builder_row_id AS builder_row_id_92, e7_.builder_block_id AS builder_block_id_93, e8_.builder_block_id AS builder_block_id_94, e8_.parent_builder_column_id AS parent_builder_column_id_95, e9_.builder_column_id AS builder_column_id_96, e10_.builder_column_id AS builder_column_id_97, e11_.builder_column_id AS builder_column_id_98, 
e12_.builder_block_id AS builder_block_id_99, e12_.parent_builder_column_id AS parent_builder_column_id_100 FROM email e0_ LEFT JOIN builder_style e1_ ON e0_.id = e1_.id AND (e1_.builder_style_delete_status = 0) LEFT JOIN content e2_ ON e0_.content_id = e2_.content_id LEFT JOIN builder_region e3_ ON e0_.id = e3_.id AND (e3_.builder_region_delete_status = 0) LEFT JOIN builder_region_style e4_ ON e3_.builder_region_id = e4_.builder_region_id AND (e4_.builder_region_style_delete_status = 0) LEFT JOIN builder_row e5_ ON e3_.builder_region_id = e5_.builder_region_id AND (e5_.builder_row_delete_status = 0) LEFT JOIN builder_block e6_ ON e5_.builder_row_id = e6_.builder_row_id AND (e6_.builder_block_delete_status = 0) LEFT JOIN builder_block_attribute e7_ ON e6_.builder_block_id = e7_.builder_block_id AND (e7_.builder_block_attribute_delete_status = 0) LEFT JOIN builder_column e8_ ON e6_.builder_block_id = e8_.builder_block_id AND (e8_.builder_column_delete_status = 0) LEFT JOIN builder_column_style e9_ ON e8_.builder_column_id = e9_.builder_column_id AND (e9_.builder_column_style_delete_status = 0) LEFT JOIN builder_column_attribute e10_ ON e8_.builder_column_id = e10_.builder_column_id AND (e10_.builder_column_attribute_delete_status = 0) LEFT JOIN builder_column_conf e11_ ON e8_.builder_column_id = e11_.builder_column_id AND (e11_.builder_column_conf_delete_status = 0) AND (e11_.builder_column_conf_delete_status = 0) LEFT JOIN builder_column e12_ ON e8_.builder_column_id = e12_.parent_builder_column_id AND (e12_.builder_column_delete_status = 0) WHERE e0_.delete_status = 0 AND e0_.product_id = xxxxx AND e0_.id = xxxxx ORDER BY e3_.builder_region_sort_order ASC, e5_.builder_row_sort_order ASC, e6_.builder_block_sort_order ASC, e8_.builder_column_sort_order ASC Can someone point me in the right direction on what is going on here? 
Here is the full explain:

    id,select_type,table,partitions,type,possible_keys,key,key_len,ref,rows,filtered,Extra
    1,SIMPLE,e0_,,const,"PRIMARY,id,id_2,product_id,customer_id,delete_status,rbac_role_id,delete_status_2",PRIMARY,8,const,1,100,"Using temporary; Using filesort"
    1,SIMPLE,e1_,,ref,id,id,5,const,const,3,100,Using where
    1,SIMPLE,e2_,,const,PRIMARY,PRIMARY,8,const,1,100,
    1,SIMPLE,e3_,,ref,delete,delete,5,const,const,4,100,Using where
    1,SIMPLE,e4_,,ref,builder_region_id,builder_region_id,5,"my_db_name.e3_.builder_region_id,const",1,100,
    1,SIMPLE,e5_,,ref,builder_block_id,builder_block_id,5,"my_db_name.e3_.builder_region_id,const",1,100,
    1,SIMPLE,e6_,,ref,builder_region_id,builder_region_id,6,"my_db_name.e5_.builder_row_id,const",1,100,
    1,SIMPLE,e7_,,ref,builder_block_id,builder_block_id,5,"my_db_name.e6_.builder_block_id,const",1,100,
    1,SIMPLE,e8_,,ALL,builder_row_id,,,,8,100,Using where; Using join buffer (Block Nested Loop)
    1,SIMPLE,e9_,,ref,builder_column_id,builder_column_id,5,"my_db_name.e8_.builder_column_id,const",5,100,
    1,SIMPLE,e10_,,ref,builder_column_id,builder_column_id,5,"my_db_name.e8_.builder_column_id,const",2,100,
    1,SIMPLE,e11_,,ref,builder_column_id,builder_column_id,5,"my_db_name.e8_.builder_column_id,const",1,100,
    1,SIMPLE,e12_,,ALL,child_col,,,,8,100,Using where; Using join buffer (Block Nested Loop)

I just realized I posted the wrong create table:

    CREATE TABLE builder_column (
      builder_column_id int(11) unsigned NOT NULL AUTO_INCREMENT,
      builder_block_id int(11) unsigned DEFAULT NULL,
      builder_column_type_id tinyint(3) NOT NULL,
      builder_column_delete_status tinyint(1) unsigned NOT NULL,
      builder_column_sort_order tinyint(3) unsigned NOT NULL,
      builder_column_create_date timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
      builder_column_modify_date timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
      builder_ui_id varchar(200) DEFAULT '',
      parent_builder_column_id int(11) unsigned DEFAULT NULL,
      builder_column_flags tinyint(3) unsigned DEFAULT '0',
      PRIMARY KEY (builder_column_id),
      KEY builder_row_id (builder_block_id,builder_column_delete_status),
      KEY child_col (parent_builder_column_id,builder_column_delete_status)
    ) ENGINE=TokuDB AUTO_INCREMENT=901184 DEFAULT CHARSET=utf8 ROW_FORMAT=TOKUDB_SNAPPY;

UPDATE: I have found that if I add FORCE INDEX FOR JOIN (email_builder_row_id) and FORCE INDEX FOR JOIN (child_col), it uses the index and returns fast.
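For reference, FORCE INDEX FOR JOIN attaches to the table reference inside the join. An abbreviated sketch (the select list and the unchanged intermediate joins are elided):

```sql
SELECT ...
FROM email e0_
-- ... intermediate LEFT JOINs elided ...
LEFT JOIN builder_column e8_ FORCE INDEX FOR JOIN (builder_row_id)
       ON e6_.builder_block_id = e8_.builder_block_id
      AND e8_.builder_column_delete_status = 0
-- ...
LEFT JOIN builder_column e12_ FORCE INDEX FOR JOIN (child_col)
       ON e8_.builder_column_id = e12_.parent_builder_column_id
      AND e12_.builder_column_delete_status = 0
```

One hint goes on each aliased instance of the table, which is why both e8_ and e12_ need their own FORCE INDEX clause.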
Jeff Ward (11 rep)
Nov 3, 2017, 03:30 PM • Last activity: May 10, 2025, 03:05 AM
3 votes
1 answer
753 views
TokuDB vs InnoDB Database data size
I have two Percona servers, one running with TokuDB tables and the other with InnoDB tables. On each server, I have created a single database, with a single table in it containing 10 million records, using sysbench. But using the query below, I noticed that the database on the TokuDB server was way too large compared to the database on the server with InnoDB tables. The TokuDB table has a clustering index, and the InnoDB table has a covering index (covering all fields except for the primary key).

So here is the table structure for the InnoDB table:

    CREATE TABLE sbtest1 (
      id int(10) unsigned NOT NULL AUTO_INCREMENT,
      k int(10) unsigned NOT NULL DEFAULT '0',
      c char(120) NOT NULL DEFAULT '',
      pad char(60) NOT NULL DEFAULT '',
      PRIMARY KEY (id),
      KEY k_1 (k),
      KEY covered (pad,k,c)
    ) ENGINE=InnoDB AUTO_INCREMENT=10000001 DEFAULT CHARSET=utf8mb4 MAX_ROWS=1000000 ROW_FORMAT=COMPRESSED

And the data size is shown below, with the main interest being in database sbtest, which contains one InnoDB table with 10 million records:

    mysql> SELECT table_schema "Data Base Name",
        -> sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB",
        -> (index_length) / 1024 / 1024 "Index data in MB"
        -> FROM information_schema.TABLES GROUP BY table_schema ;
    +--------------------+----------------------+------------------+
    | Data Base Name     | Data Base Size in MB | Index data in MB |
    +--------------------+----------------------+------------------+
    | information_schema |           0.00976563 |       0.00000000 |
    | mysql              |           0.77099419 |       0.00390625 |
    | performance_schema |           0.00000000 |       0.00000000 |
    | sbtest             |        1203.36718750 |      67.36718750 |
    | test               |           0.03906250 |       0.00000000 |
    +--------------------+----------------------+------------------+
    5 rows in set (0.02 sec)

The second table is a TokuDB table, also with 10 million records, running on server 2.

    CREATE TABLE sbtest1 (
      id int(10) unsigned NOT NULL AUTO_INCREMENT,
      k int(10) unsigned NOT NULL DEFAULT '0',
      c char(120) NOT NULL DEFAULT '',
      pad char(60) NOT NULL DEFAULT '',
      PRIMARY KEY (id),
      KEY k_1 (k),
      CLUSTERING KEY pad (pad)
    ) ENGINE=TokuDB AUTO_INCREMENT=10000001 DEFAULT CHARSET=utf8mb4 MAX_ROWS=1000000 ROW_FORMAT=TOKUDB_QUICKLZ

Data size:

    mysql> SELECT table_schema "Data Base Name",
        -> sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB",
        -> (index_length) / 1024 / 1024 "Index data in MB"
        -> FROM information_schema.TABLES GROUP BY table_schema ;
    +--------------------+----------------------+------------------+
    | Data Base Name     | Data Base Size in MB | Index data in MB |
    +--------------------+----------------------+------------------+
    | information_schema |           0.00976563 |       0.00000000 |
    | mysql              |           0.77224922 |       0.00390625 |
    | performance_schema |           0.00000000 |       0.00000000 |
    | sbtest             |       12293.65114307 |    5340.57617188 |
    | test               |           0.00000000 |       0.00000000 |
    +--------------------+----------------------+------------------+
    5 rows in set (0.02 sec)

As can be seen, TokuDB is using QuickLZ compression, which should give slightly better compression than InnoDB's compression mechanism. So why does the TokuDB table appear to have much more data than the InnoDB table? Both servers had the same amount of memory, disk space, and hardware before the tables were created.

Here is the data size from the datadir:

TokuDB

    -rw-rw---- 1 mysql mysql  12M Feb 17 14:48 ibdata1
    -rw-rw---- 1 mysql mysql 2.9G Feb 17 12:30 _sbtest_sbtest1_key_pad_25fbaee_3_1d_B_0.tokudb
    -rw-rw---- 1 mysql mysql 3.1G Feb 17 12:27 _sbtest_sbtest1_main_11_1_1d_B_0.tokudb

InnoDB

    -rw-rw---- 1 mysql mysql 332M Feb 17 14:48 ibdata1
    -rw-rw---- 1 mysql mysql 2.3G Feb 17 12:38 sbtest1.ibd
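To compare the two engines at table rather than schema granularity, the same information_schema data can be queried per table:

```sql
-- Per-table data vs index size in the sysbench schema; finer-grained than
-- the per-schema totals in the query above.
SELECT table_name,
       engine,
       data_length  / 1024 / 1024 AS data_mb,
       index_length / 1024 / 1024 AS index_mb
FROM information_schema.TABLES
WHERE table_schema = 'sbtest';
```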
The Georgia (343 rep)
Feb 17, 2016, 05:55 AM • Last activity: May 3, 2025, 12:00 PM
2 votes
1 answer
1074 views
Any "faster" methods of backing up databases in mariadb?
I am coming from the Microsoft SQL Server world of databases, and have been there for about 7 years. My new role is solely based in various open source database engines. As I have been prepping for some migrations to AWS RDS and/or newer versions of MariaDB, I have come up on an issue regarding a backup strategy for my company. At the moment, and to spare the sensitive details, there isn't any solution in place. The databases that are the biggest concern are all running on older TokuDB storage engines, and are the ones in question that will need to be migrated. Backing up these databases has been done mainly through the following method, instead of piping the whole database into one .sql file:

    mysqldump -u root -p -t -T/$source $database $databasetable --fields-terminated-by='|' --lines-terminated-by='\n' --order-by-primary

I could simply say to back up the whole db and have it spit out each table one at a time, but regardless, this takes several hours. Doing a plain SQL dump, whether using gzip or not, takes 24+ hours, if not more. This is clearly not the best for having a standard backup strategy like you could find with Ola Hallengren's maintenance plan for Microsoft SQL Server. However, the only alternative I could think of is writing something in Python or using a cronjob to execute the backups on a schedule that has yet to be determined. The only solution I found was Percona's XtraBackup, but I wanted to see if the community had other ideas.

**More background:** These are database servers running as EC2 instances in AWS, on Ubuntu, and are provisioned as m4.large or m4.xlarge. There are several replicas of the primary, so the backup would ideally be done ON the primary, not a replica.

**Final mentions:** Yes, migrating to RDS is an option, which would eliminate this issue with automated backups. However, the migration of these older TokuDB servers is not going to be "that" easy, so it may be some time before we get there.

I would appreciate any input or suggestions. Thank you.
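One option besides XtraBackup is a parallel logical dump. A sketch with mydumper follows; note the assumptions: mydumper is a third-party tool not bundled with MariaDB, the output path is illustrative, and authentication options are elided:

```shell
# Dump one database with 4 worker threads, compressing each table file.
# $database is the same shell variable used in the mysqldump example above.
mydumper --user=root \
         --database="$database" \
         --threads=4 \
         --compress \
         --outputdir="/backups/$(date +%F)"
```

Because each table is dumped by its own worker, wall-clock time can drop well below a single-threaded mysqldump, though the server still pays the full read cost.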
Randoneering (135 rep)
Dec 9, 2022, 10:34 PM • Last activity: Dec 9, 2022, 10:47 PM
0 votes
0 answers
49 views
Performance drop on INSERT INTO SELECT FROM
We have a "system" with a website. When data is added (old data is never updated), we create a large "temp_data" table, fill it with the new data (a couple of million rows), then insert into the "main_data" table:

    insert into main_data select * from temp_data;

main_data uses the TokuDB engine; temp_data uses MyISAM. Often during the update, the website stops responding until the data is fully inserted:

    SQLSTATE[HY000] Connection refused

and / or

    SQLSTATE[HY000] MySQL server has gone away

Is there any way I can do the insert "slower"? The insert is always new data; it never updates anything.
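One way to make the insert "slower" is to copy in bounded chunks, so no single statement ties up the server for minutes. A sketch, assuming temp_data has an integer primary key id (hypothetical; the question does not show the schema):

```sql
-- Copy one bounded id range per statement; the caller advances the range
-- (and can SLEEP between chunks) until all rows are moved.
INSERT INTO main_data
SELECT * FROM temp_data
WHERE id >  0         -- upper bound of the previous chunk
  AND id <= 100000;   -- chunk size, tuned so each insert stays fast
```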
Nick (141 rep)
Dec 15, 2021, 09:28 AM
0 votes
1 answer
100 views
handler socket on tokudb engine
I am trying to set up HandlerSocket to work with the TokuDB engine. The database I am running is Percona Server; I have TokuDB enabled and the tables all use the TokuDB engine. I was wondering if anyone knows whether HandlerSocket is compatible with TokuDB as well. Any advice on this would be appreciated. Thanks
roger moore (367 rep)
Sep 8, 2015, 07:15 AM • Last activity: Apr 4, 2021, 01:05 PM
0 votes
0 answers
119 views
Percona: Analyze table doesn't work
I have a Percona 5.6.28 server with some TokuDB tables. The biggest table has ~120 million rows. The statistics have kept changing over the past half year:

- At first, ANALYZE TABLE with the recount mode brought accurate statistics back.
- Recently, the statistics were all updated to the same value, including for low-cardinality columns such as gender.

I have tried the following, but none of it works:

- ANALYZE TABLE
- Dropping some indexes and re-creating them
- OPTIMIZE TABLE & ANALYZE TABLE

I found a lot of people have had the same problem before. Can anyone share some experience?
Zak (1 rep)
Aug 6, 2019, 06:17 AM
5 votes
1 answer
25843 views
How do I add a partition to an existing table in mariadb / mysql?
I have the following table. I want to add partitions to it.

    CREATE TABLE app_log_Test (
      id bigint(20) NOT NULL AUTO_INCREMENT,
      dateCreated datetime NOT NULL,
      host varchar(512) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
      label varchar(32) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
      event varchar(32) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
      level varchar(8) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
      message text COLLATE utf8mb4_unicode_ci,
      version bigint(20) NOT NULL DEFAULT '0',
      PRIMARY KEY (id),
      KEY app_log_dateCreated (dateCreated),
      KEY app_log_label (label),
      KEY app_log_event (event),
      KEY app_log_level (level)
    ) ENGINE=TokuDB COMPRESSION=tokudb_quicklz

I am using MariaDB 10.

    MariaDB [test2]> alter table app_log_Test partition by RANGE(TO_DAYS(dateCreated))(
        -> PARTITION p_201809 VALUES LESS THAN (TO_DAYS('2018-09-01 00:00:00')) ENGINE = TokuDB,
        -> PARTITION p_201810 VALUES LESS THAN (TO_DAYS('2018-10-01 00:00:00')) ENGINE = TokuDB);

I get the following error:

    ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the table's partitioning function
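The error is MySQL/MariaDB's standard partitioning rule: every unique key (including the primary key) must contain all columns used by the partitioning function. A sketch of the usual way around it, at the cost of widening the primary key:

```sql
-- Make dateCreated part of the primary key so the partitioning function
-- references only primary-key columns; note id alone then no longer
-- enforces uniqueness by itself.
ALTER TABLE app_log_Test
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (id, dateCreated);

ALTER TABLE app_log_Test
  PARTITION BY RANGE (TO_DAYS(dateCreated)) (
    PARTITION p_201809 VALUES LESS THAN (TO_DAYS('2018-09-01 00:00:00')) ENGINE = TokuDB,
    PARTITION p_201810 VALUES LESS THAN (TO_DAYS('2018-10-01 00:00:00')) ENGINE = TokuDB
  );
```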
nelaaro (767 rep)
Oct 10, 2018, 11:49 AM • Last activity: Oct 22, 2018, 12:51 PM
3 votes
0 answers
263 views
TokuDB/Percona - Insert Performance worse as table grows
We have a bunch of existing servers which run just fine, inserting our thousand rows/second, but the new batch just will not do it. As the day goes on and the table grows, it gets slower and slower, until we are millions of rows behind. I've tried to match all the settings, but still no juice.

    tokudb_commit_sync=OFF;
    tokudb_block_size=1048576;
    tokudb_analyze_in_background = OFF;
    tokudb_auto_analyze=0;

NOTE: the current tables on the new servers were created with a 4MB block size, though I don't think that would cause this kind of performance degradation. Old [working] servers (on worse but similar hardware):
    Linux 2.6.32-504.30.3.el6.x86_64 CentOS release 6.6 (Final)
    tokudb_version: 5.6.36-82.0
    mysql  Ver 14.14 Distrib 5.6.36-82.0, for Linux (x86_64) using  6.0
New [fubar'd] servers:
    Linux 3.10.0-862.3.2.el7.x86_64 CentOS Linux release 7.5.1804 (Core)
    tokudb_version: 5.7.21-21
    mysql  Ver 14.14 Distrib 5.7.21-21, for Linux (x86_64) using  6.2
I think I may be onto something with this, however: when I do a SHOW INDEX, the old servers have cardinality equal to the number of rows in the table for ALL indices, while the new servers have varying cardinality, sometimes small, dependent on the actual cardinality (as expected). (Both tables are identical, aside from the data.) So it seems the new servers are getting caught up in some sort of index optimization on each new write (LOAD DATA LOCAL, btw), while the old servers are able to insert much faster because they don't do as much? Please help me; I've been all over Google and in logs and code for 16 hours today and I have no more ideas. The goal is to make the new servers have INSERT throughput like the old servers.
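A sketch of how to compare the statistics machinery on the two generations of servers. The variable name patterns assume a Percona 5.7 TokuDB build, and the_table is a stand-in for the real table:

```sql
-- Cardinality per index as the optimizer sees it; on the old servers it
-- reportedly equals the row count for every index.
SHOW INDEX FROM the_table;

-- Analyze-related settings that differ between 5.6 and 5.7 builds.
SHOW VARIABLES LIKE 'tokudb%analyze%';
SHOW VARIABLES LIKE 'tokudb_cardinality%';
```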
user1576419 (31 rep)
Jun 2, 2018, 02:33 AM • Last activity: Jun 2, 2018, 02:38 AM
0 votes
2 answers
305 views
question about using tokuDB or InnoDB for financial data
We want to use Percona XtraDB Cluster for clustering 4 nodes (master-master), and we have a finance project which should have a rather large database. The number of tables is not more than 40, but we have about 2 tables which store financial transaction data; the estimated amount of data in these tables is going to grow at 350,000 records per day, and we should keep them for at least 10 years to be able to do various reports. Most of the operations (98%) on these tables are insert/read, but we can separate the updatable fields into a separate table. My questions are the following:

- Should I use the TokuDB storage engine for such large amounts of data, or InnoDB?
- What is the best solution for such a large and sensitive database?

Best Regards
Ali
Ali Fattahi (1 rep)
Apr 11, 2018, 09:58 AM • Last activity: Apr 17, 2018, 07:13 PM
0 votes
1 answer
125 views
how can I get mariadb tokudb to put all its files in directories per database
Is there a way to get TokuDB to put all the files it creates for a database into subdirectories? When I started using TokuDB, the most irritating thing I found is that it puts all of its files into the MySQL root data directory. This makes it difficult to see which files belong to which database, and difficult to see how much space each database schema is using.

ll /srv/mysql/data/
-rw-rw----. 1 mysql mysql 16384 Apr 5 17:36 aria_log.00000001
-rw-rw----. 1 mysql mysql 52 Apr 5 17:36 aria_log_control
drwx------. 2 mysql mysql 4096 Apr 10 15:09 graphite
drwx------. 2 mysql mysql 165 Apr 10 15:09 graphite2
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_account_mygraph_key_account_mygraph_83a0eb3f_3c3_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_account_mygraph_main_3c3_2_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_account_mygraph_status_3c3_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_account_profile_key_user_id_3ce_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_account_profile_main_3ce_2_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 6 15:54 _graphite_account_profile_status_3ce_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_account_variable_key_account_variable_83a0eb3f_3db_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_account_variable_main_3db_2_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_account_variable_status_3db_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_account_view_key_account_view_83a0eb3f_3e6_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_account_view_main_3e6_2_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_account_view_status_3e6_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_account_window_key_account_window_2e1a1398_3f1_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_account_window_main_3f1_2_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_account_window_status_3f1_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_group_key_name_3fc_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_group_main_3fc_2_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_group_permissions_key_auth_group_permissions_0e939a4f_407_4_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_group_permissions_key_auth_group_permissions_8373b171_407_5_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_group_permissions_key_auth_group_permissions_group_id_0cd325b0_uniq_407_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_group_permissions_main_407_2_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_auth_group_permissions_status_407_1_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_auth_group_status_3fc_1_1d.tokudb
-rw-rw----. 1 mysql mysql 16896 Apr 6 11:58 _graphite_auth_permission_key_auth_permission_417f1b1c_419_1_1d_B_2.tokudb
-rw-rw----. 1 mysql mysql 16896 Apr 6 11:58 _graphite_auth_permission_key_auth_permission_content_type_id_01ab375a_uniq_419_1_1d_B_1.tokudb
-rw-rw----. 1 mysql mysql 16896 Apr 6 11:58 _graphite_auth_permission_main_419_1_1d_B_0.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_auth_permission_status_412_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_user_groups_key_auth_user_groups_0e939a4f_42c_5_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_user_groups_key_auth_user_groups_e8701ad4_42c_4_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_user_groups_key_auth_user_groups_user_id_94350c0c_uniq_42c_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_user_groups_main_42c_2_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_auth_user_groups_status_42c_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_user_key_username_41f_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_user_main_41f_2_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 6 15:54 _graphite_auth_user_status_41f_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_user_user_permissions_key_auth_user_user_permissions_8373b171_437_5_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_user_user_permissions_key_auth_user_user_permissions_e8701ad4_437_4_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_user_user_permissions_key_auth_user_user_permissions_user_id_14a6b632_uniq_437_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_auth_user_user_permissions_main_437_2_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_auth_user_user_permissions_status_437_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_dashboard_dashboard_main_442_2_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_dashboard_dashboard_owners_key_dashboard_dashboard_owners_83a0eb3f_44c_5_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_dashboard_dashboard_owners_key_dashboard_dashboard_owners_a6b0b808_44c_4_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_dashboard_dashboard_owners_key_dashboard_dashboard_owners_dashboard_id_f37e04b7_uniq_44c_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_dashboard_dashboard_owners_main_44c_2_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_dashboard_dashboard_owners_status_44c_1_1d.tokudb
-rw-rw----. 1 mysql mysql 16384 Apr 6 11:58 _graphite_dashboard_dashboard_status_442_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_dashboard_template_main_457_2_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_dashboard_template_owners_key_dashboard_template_owners_74f53564_461_4_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_dashboard_template_owners_key_dashboard_template_owners_83a0eb3f_461_5_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_dashboard_template_owners_key_dashboard_template_owners_template_id_e47a75a7_uniq_461_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_dashboard_template_owners_main_461_2_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_dashboard_template_owners_status_461_1_1d.tokudb
-rw-rw----. 1 mysql mysql 16384 Apr 6 11:58 _graphite_dashboard_template_status_457_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_django_admin_log_key_django_admin_log_417f1b1c_46c_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_django_admin_log_key_django_admin_log_e8701ad4_46c_4_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_django_admin_log_main_46c_2_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_django_admin_log_status_46c_1_1d.tokudb
-rw-rw----. 1 mysql mysql 16896 Apr 6 11:58 _graphite_django_content_type_key_django_content_type_app_label_76bd3d3b_uniq_47e_1_1d_B_1.tokudb
-rw-rw----. 1 mysql mysql 16896 Apr 6 11:58 _graphite_django_content_type_main_47e_1_1d_B_0.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_django_content_type_status_477_1_1d.tokudb
-rw-rw----. 1 mysql mysql 16896 Apr 6 11:58 _graphite_django_migrations_main_48b_1_1d_B_0.tokudb
-rw-rw----. 1 mysql mysql 16384 Apr 6 11:58 _graphite_django_migrations_status_484_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_django_session_key_django_session_de54fa62_491_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_django_session_main_491_2_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_django_session_status_491_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_events_event_main_49b_2_1d.tokudb
-rw-rw----. 1 mysql mysql 16384 Apr 6 11:58 _graphite_events_event_status_49b_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_tagging_taggeditem_key_tagging_taggeditem_417f1b1c_4b1_5_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_tagging_taggeditem_key_tagging_taggeditem_76f094bc_4b1_6_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_tagging_taggeditem_key_tagging_taggeditem_af31437c_4b1_4_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_tagging_taggeditem_key_tagging_taggeditem_tag_id_3d53f09d_uniq_4b1_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_tagging_taggeditem_main_4b1_2_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_tagging_taggeditem_status_4b1_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_tagging_tag_key_name_4a6_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_tagging_tag_main_4a6_2_1d.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 14:54 _graphite_tagging_tag_status_4a6_1_1d.tokudb
-rw-rw----. 1 mysql mysql 6160384 Apr 10 15:06 _graphite_tags_series_key_hash_4c3_1_1d_B_1.tokudb
-rw-rw----. 1 mysql mysql 7929856 Apr 10 15:06 _graphite_tags_series_main_4c3_1_1d_B_0.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 15:06 _graphite_tags_series_status_4bc_1_1d.tokudb
-rw-rw----. 1 mysql mysql 442368 Apr 10 15:06 _graphite_tags_seriestag_key_tags_seriestag_76f094bc_4d6_1_1d_B_3.tokudb
-rw-rw----. 1 mysql mysql 507904 Apr 10 15:06 _graphite_tags_seriestag_key_tags_seriestag_a08cee2d_4d6_1_1d_B_2.tokudb
-rw-rw----. 1 mysql mysql 507904 Apr 10 15:06 _graphite_tags_seriestag_key_tags_seriestag_b0304493_4d6_1_1d_B_4.tokudb
-rw-rw----. 1 mysql mysql 524288 Apr 10 15:06 _graphite_tags_seriestag_key_tags_seriestag_series_id_ad31c493_uniq_4d6_1_1d_B_1.tokudb
-rw-rw----. 1 mysql mysql 901120 Apr 10 15:06 _graphite_tags_seriestag_main_4d6_1_1d_B_0.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 15:06 _graphite_tags_seriestag_status_4cf_1_1d.tokudb
-rw-rw----. 1 mysql mysql 1179648 Apr 10 15:06 _graphite_tags_tagvalue_key_value_4f0_1_1d_B_1.tokudb
-rw-rw----. 1 mysql mysql 1736704 Apr 10 15:06 _graphite_tags_tagvalue_main_4f0_1_1d_B_0.tokudb
-rw-rw----. 1 mysql mysql 65536 Apr 10 15:06 _graphite_tags_tagvalue_status_4e9_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _graphite_url_shortener_link_main_4f8_2_1d.tokudb
-rw-rw----. 1 mysql mysql 16384 Apr 6 11:58 _graphite_url_shortener_link_status_4f8_1_1d.tokudb
-rw-rw----. 1 mysql mysql 79691776 Apr 10 02:38 ibdata1
-rw-rw----. 1 mysql mysql 50331648 Apr 10 02:38 ib_logfile0
-rw-rw----. 1 mysql mysql 50331648 Apr 10 02:36 ib_logfile1
drwx------. 2 mysql mysql 4096 Apr 6 11:58 kanboard
-rw-rw----. 1 mysql mysql 16896 Apr 6 11:58 _kanboard_settings_main_549_1_1d_B_0.tokudb
-rw-rw----. 1 mysql mysql 16384 Apr 6 11:58 _kanboard_settings_status_543_1_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _kanboard_user_has_notifications_key_user_has_notifications_ibfk_2_569_3_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _kanboard_user_has_notifications_main_569_2_1d.tokudb
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 _kanboard_user_has_notifications_status_569_1_1d.tokudb
-rw-------. 1 mysql mysql 26236353 Apr 10 15:09 log000000000006.tokulog29
drwx------. 2 mysql mysql 119 Apr 6 11:58 maxscale_schema
-rw-rw----. 1 mysql mysql 0 Apr 5 17:39 multi-master.info
drwx------. 2 mysql root 4096 Apr 6 09:59 mysql
drwx------. 2 mysql mysql 20 Apr 5 17:36 performance_schema
-rw-rw----. 1 mysql mysql 6 Apr 5 17:39 platformdb-isa-01.pid
-rw-rw----. 1 mysql mysql 32768 Apr 6 11:58 tokudb.directory
-rw-rw----. 1 mysql mysql 16384 Apr 5 17:22 tokudb.environment
-rw-------. 1 mysql mysql 0 Apr 5 17:22 __tokudb_lock_dont_delete_me_data
-rw-------. 1 mysql mysql 0 Apr 5 17:22 __tokudb_lock_dont_delete_me_environment
-rw-------. 1 mysql mysql 0 Apr 5 17:22 __tokudb_lock_dont_delete_me_logs
-rw-------. 1 mysql mysql 0 Apr 5 17:22 __tokudb_lock_dont_delete_me_recovery
-rw-------. 1 mysql mysql 0 Apr 5 17:22 __tokudb_lock_dont_delete_me_temp
-rw-rw----. 1 mysql mysql 32768 Apr 10 15:06 tokudb.rollback
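Not part of the original question, but worth noting as a hedged sketch: newer TokuDB builds expose a tokudb_dir_per_db variable (it is visible, set to OFF, in the variable dump of another question on this page). Assuming a build that supports it, per-database subdirectories can be enabled like this:

SET GLOBAL tokudb_dir_per_db = ON;
-- Only tables created (or renamed) after this point land in the
-- database's own subdirectory; existing tables are not moved.
-- A no-op rebuild relocates an existing table, e.g. for one of the
-- graphite tables in the listing above:
ALTER TABLE graphite.account_mygraph ENGINE=TokuDB;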
nelaaro (767 rep)
Apr 10, 2018, 01:43 PM
0 votes
2 answers
773 views
TokuDB + MariaDB 10: should I use a clustered index
I have a large table (200 columns.. I know, but it wasn't my design) that will grow fast, and it will accumulate a lot of useless rows identified by an IP (a varchar(32) column). The table will get hundreds of thousands of rows per day, and I will need to delete a few thousand rows with a specific IP in that column. During the day, at regular intervals (every 5 minutes perhaps), I will need to select rows while avoiding rows with certain IPs. The deletions I'll probably do in the evening, so as not to put too much load on the DB. Should I use a clustered index or a regular index on this column? TokuDB claims there's no performance loss, unlike InnoDB, but we're still talking about 200 columns (a lot of them empty, null, or zeros, to be fair). I will also need to add indexes on a few other varchar columns over which I'll perform selects. Some will have huge cardinality, as they are "timestamps with milliseconds". Will I benefit or suffer from clustered indexes?
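For reference, TokuDB clustered secondary indexes use the CLUSTERING keyword; a minimal sketch with hypothetical table and column names (not the asker's real schema):

-- A plain KEY on ip stores only (ip, primary key), so selecting other
-- columns requires a lookup back into the primary index. A CLUSTERING
-- key stores a full copy of each row inside the index, trading disk
-- space (mitigated by TokuDB compression) for covered reads.
CREATE TABLE events (
  id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  ip VARCHAR(32) NOT NULL,
  payload TEXT,
  CLUSTERING KEY ip_idx (ip)
) ENGINE=TokuDB;

-- Or added to an existing table:
-- ALTER TABLE events ADD CLUSTERING KEY ip_idx (ip);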
Recct (121 rep)
Jan 7, 2015, 04:21 PM • Last activity: Jan 6, 2018, 07:48 PM
1 votes
0 answers
211 views
TokuDB table stats row count decreases very fast while adding/updating data
We run a Percona server with the newest version (5.7.19-17). After a while, the cardinality of multiple tables drops to zero. A newly created index also has a cardinality of 0 if the existing ones are zero. I can repair this by running

set session tokudb_analyze_mode = TOKUDB_ANALYZE_RECOUNT_ROWS;
ANALYZE TABLE myTable;

or

ALTER TABLE table_name ENGINE=TokuDB;

After some tests I found the row count dropping over time. As far as I understand it, this also influences the cardinality. The table contains around 800,000 entries and is written to nearly the whole day.

CREATE TABLE table_name (
  val1 char(2) COLLATE utf8_bin NOT NULL,
  val2 char(15) COLLATE utf8_bin NOT NULL,
  val3 char(15) COLLATE utf8_bin NOT NULL,
  val4 mediumint(8) unsigned NOT NULL,
  PRIMARY KEY (val1,val2),
  KEY reverse_key (val1,val3)
) ENGINE=TokuDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin ROW_FORMAT=TOKUDB_LZMA;

Mostly I write into this table with REPLACE INTO and INSERT ... ON DUPLICATE KEY UPDATE, and I update nearly 100% of the data per day. Both insert methods were tested and show the same behavior. I started a simple script which just prints the calculated row count over time:

SELECT TABLE_ROWS FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'shema_name' AND TABLE_NAME = 'table_name';

(chart omitted: the plotted TABLE_ROWS value decreases steadily over time)

You can see the row count is decreasing very fast. Can anyone explain to me where this behavior comes from and how I can handle it?
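One hedged mitigation sketch, not from the thread: Percona's TokuDB exposes background/auto analyze variables (tokudb_auto_analyze and tokudb_analyze_in_background both appear, disabled, in the variable dump of another question on this page). Assuming they behave as documented, letting the server re-count in the background can keep the statistics from drifting between manual repairs:

-- Assumption: tokudb_auto_analyze is the percentage of changed rows
-- that triggers an automatic analyze of the table.
SET GLOBAL tokudb_analyze_in_background = ON;
SET GLOBAL tokudb_analyze_mode = TOKUDB_ANALYZE_RECOUNT_ROWS;
SET GLOBAL tokudb_auto_analyze = 30;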
Sebastian (11 rep)
Nov 13, 2017, 03:07 PM • Last activity: Nov 14, 2017, 10:21 AM
1 votes
1 answers
821 views
Tokudb ROW_FORMAT is not being accepted
I am trying to create some TokuDB tables to experiment with the different row format options and compare the available compression.

https://www.percona.com/doc/percona-server/5.7/tokudb/using_tokudb.html

I have tried all of the following, with no effect:

TOKUDB_SNAPPY
TOKUDB_ZLIB
TOKUDB_DEFAULT

If I just leave the option out, the tables are created with row_format = Fixed.

MariaDB [eventlog]> show VARIABLES like "%row_format%";
+--------------------------------+-------------+
| Variable_name                  | Value       |
+--------------------------------+-------------+
| tokudb_hide_default_row_format | ON          |
| tokudb_row_format              | tokudb_zlib |
+--------------------------------+-------------+

MariaDB [eventlog]> CREATE TABLE stable1 ( column_a INT NOT NULL PRIMARY KEY, column_b INT NOT NULL) ENGINE=TokuDB, ROW_FORMAT=TOKUDB_DEFAULT;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'TOKUDB_DEFAULT' at line 1

MariaDB [eventlog]> CREATE TABLE stable1 ( column_a INT NOT NULL PRIMARY KEY, column_b INT NOT NULL) ROW_FORMAT=tokudb_default;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'tokudb_default' at line 1

MariaDB [eventlog]> CREATE TABLE stable1 ( column_a INT NOT NULL PRIMARY KEY, column_b INT NOT NULL) ENGINE=TokuDB;
Query OK, 0 rows affected (0.09 sec)

MariaDB [eventlog]> show table status from eventlog\G
*************************** 1. row ***************************
           Name: stable1
         Engine: TokuDB
        Version: 10
     Row_format: Fixed
           Rows: 0
 Avg_row_length: 0
    Data_length: 0
Max_data_length: 9223372036854775807
   Index_length: 0
      Data_free: 18446744073709551615
 Auto_increment: NULL
    Create_time: 2017-02-20 12:26:18
    Update_time: 2017-02-20 12:26:18
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment:

MariaDB [eventlog]> ALTER TABLE stable1 ROW_FORMAT=TOKUDB_SNAPPY;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'TOKUDB_SNAPPY' at line 1

version | 10.1.21-MariaDB
tokudb_version | 5.6.34-79.1
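A hedged sketch of the MariaDB-specific spellings, based on the fact that another question on this page successfully runs `create table t_zlib (x int, y int) compression=tokudb_zlib` on MariaDB: ROW_FORMAT=TOKUDB_* is Percona Server syntax, while MariaDB's TokuDB build tends to accept the compression table option or the tokudb_row_format session variable instead:

-- Variant 1: per-table option (assuming a MariaDB build that maps the
-- generic `compression` table attribute onto TokuDB row formats):
CREATE TABLE stable1 (
  column_a INT NOT NULL PRIMARY KEY,
  column_b INT NOT NULL
) ENGINE=TokuDB COMPRESSION=TOKUDB_SNAPPY;

-- Variant 2: set the session default before creating the table:
SET SESSION tokudb_row_format = tokudb_snappy;
CREATE TABLE stable2 (
  column_a INT NOT NULL PRIMARY KEY,
  column_b INT NOT NULL
) ENGINE=TokuDB;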
nelaaro (767 rep)
Feb 20, 2017, 11:07 AM • Last activity: Oct 19, 2017, 10:21 AM
1 votes
0 answers
890 views
MariaDB memory usage growing
I am using MariaDB 10.2.6 + the TokuDB plugin on Debian Stretch. Each storage engine, InnoDB and TokuDB, is set to a buffer/cache size of 12G. This configuration worked fine with MariaDB 10.1.25 on Debian Jessie. Since upgrading the database, the memory consumption knows no bounds and grows steadily (currently 50G resident!). The configuration files did not change in the upgrade process.

Some memory measurements (memory usage did not drop significantly at any point in time):

* Tuesday 8AM: Database restart
* Wednesday 3PM: VIRT 45.1G, RES 29.2G
* Friday 5PM: VIRT 56.5G, RES 42.2G
* Saturday 11PM: VIRT 65.8G, RES 51.4G

Output of InnoDB/TokuDB status for each of the last three measuring points:

* SHOW ENGINE INNODB STATUS; https://pastebin.com/raw/04fA1SpX
* SHOW ENGINE TOKUDB STATUS; https://pastebin.com/raw/jeRTRyKC

Hardware:

* Intel Xeon 6C/12T
* 128G RAM
* Database runs on RAID 1 (2x SSD)

## Config ##

cat my.cnf conf.d/* mariadb.conf.d/*

[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc_messages_dir = /usr/share/mysql
lc_messages = en_US
skip-external-locking
bind-address = 127.0.0.1
max_connections = 100
connect_timeout = 5
wait_timeout = 600
max_allowed_packet = 16M
thread_cache_size = 128
sort_buffer_size = 4M
bulk_insert_buffer_size = 16M
tmp_table_size = 32M
max_heap_table_size = 32M
myisam_recover_options = BACKUP
key_buffer_size = 128M
table_open_cache = 400
myisam_sort_buffer_size = 512M
concurrent_insert = 2
read_buffer_size = 2M
read_rnd_buffer_size = 1M
query_cache_limit = 128K
query_cache_size = 64M
log_warnings = 2
slow_query_log_file = /var/log/mysql/mariadb-slow.log
long_query_time = 10
log_slow_verbosity = query_plan
log_bin = /var/log/mysql/mariadb-bin
log_bin_index = /var/log/mysql/mariadb-bin.index
expire_logs_days = 10
max_binlog_size = 100M
default_storage_engine = InnoDB
innodb_buffer_pool_size = 256M
innodb_log_buffer_size = 8M
innodb_file_per_table = 1
innodb_open_files = 400
innodb_io_capacity = 400
innodb_flush_method = O_DIRECT

[isamchk]
key_buffer = 16M

!includedir /etc/mysql/conf.d/

# conf.d/
[mysqld]
wait_timeout = 28800
interactive_timeout = 28800

[mysqld]
explicit_defaults_for_timestamp
innodb_file_per_table
innodb_flush_method = O_DIRECT
innodb_log_file_size = 2G
innodb_buffer_pool_instances = 2
innodb_buffer_pool_size = 12G
innodb_log_buffer_size = 96M
innodb_read_io_threads = 4
innodb_write_io_threads = 4
table_open_cache = 650
key_buffer_size = 128M
max_allowed_packet = 16M
query_cache_type = 1
query_cache_limit = 64M
query_cache_size = 256M
join_buffer_size = 2M
max_connections = 501
slow_query_log = 1

[mysqld]
!includedir /etc/mysql/mariadb.conf.d/

[mysqld_safe]
skip_log_error
syslog

[mariadb]
plugin-load-add=ha_tokudb.so

[mysqld]
tokudb_create_index_online = on
tokudb_cache_size = 12G
tokudb_load_save_space = 1
tokudb_disable_prefetching = on

# mariadb.conf.d/
[mariadb]
plugin-load-add=ha_tokudb.so

## Global Variables ##

SHOW GLOBAL VARIABLES WHERE Variable_name LIKE 'innodb%' OR Variable_name LIKE 'tokudb%'

innodb_adaptive_flushing ON
innodb_adaptive_flushing_lwm 10.000000
innodb_adaptive_hash_index ON
innodb_adaptive_hash_index_partitions 8
innodb_adaptive_hash_index_parts 8
innodb_adaptive_max_sleep_delay 150000
innodb_autoextend_increment 64
innodb_autoinc_lock_mode 1
innodb_background_scrub_data_check_interval 3600
innodb_background_scrub_data_compressed OFF
innodb_background_scrub_data_interval 604800
innodb_background_scrub_data_uncompressed OFF
innodb_buf_dump_status_frequency 0
innodb_buffer_pool_chunk_size 134217728
innodb_buffer_pool_dump_at_shutdown ON
innodb_buffer_pool_dump_now OFF
innodb_buffer_pool_dump_pct 25
innodb_buffer_pool_filename ib_buffer_pool
innodb_buffer_pool_instances 2
innodb_buffer_pool_load_abort OFF
innodb_buffer_pool_load_at_startup ON
innodb_buffer_pool_load_now OFF
innodb_buffer_pool_populate OFF
innodb_buffer_pool_size 12884901888
innodb_change_buffer_max_size 25
innodb_change_buffering all
innodb_checksum_algorithm crc32
innodb_checksums ON
innodb_cleaner_lsn_age_factor DEPRECATED
innodb_cmp_per_index_enabled OFF
innodb_commit_concurrency 0
innodb_compression_algorithm zlib
innodb_compression_default OFF
innodb_compression_failure_threshold_pct 5
innodb_compression_level 6
innodb_compression_pad_pct_max 50
innodb_concurrency_tickets 5000
innodb_corrupt_table_action deprecated
innodb_data_file_path ibdata1:12M:autoextend
innodb_data_home_dir
innodb_deadlock_detect ON
innodb_default_encryption_key_id 1
innodb_default_row_format dynamic
innodb_defragment OFF
innodb_defragment_fill_factor 0.900000
innodb_defragment_fill_factor_n_recs 20
innodb_defragment_frequency 40
innodb_defragment_n_pages 7
innodb_defragment_stats_accuracy 0
innodb_disable_sort_file_cache OFF
innodb_disallow_writes OFF
innodb_doublewrite ON
innodb_empty_free_list_algorithm DEPRECATED
innodb_encrypt_log OFF
innodb_encrypt_tables OFF
innodb_encryption_rotate_key_age 1
innodb_encryption_rotation_iops 100
innodb_encryption_threads 0
innodb_fake_changes OFF
innodb_fast_shutdown 1
innodb_fatal_semaphore_wait_threshold 600
innodb_file_format Barracuda
innodb_file_format_check ON
innodb_file_format_max Barracuda
innodb_file_per_table ON
innodb_fill_factor 100
innodb_flush_log_at_timeout 1
innodb_flush_log_at_trx_commit 1
innodb_flush_method O_DIRECT
innodb_flush_neighbors 1
innodb_flush_sync ON
innodb_flushing_avg_loops 30
innodb_force_load_corrupted OFF
innodb_force_primary_key OFF
innodb_force_recovery 0
innodb_foreground_preflush DEPRECATED
innodb_ft_aux_table
innodb_ft_cache_size 8000000
innodb_ft_enable_diag_print OFF
innodb_ft_enable_stopword ON
innodb_ft_max_token_size 84
innodb_ft_min_token_size 3
innodb_ft_num_word_optimize 2000
innodb_ft_result_cache_limit 2000000000
innodb_ft_server_stopword_table
innodb_ft_sort_pll_degree 2
innodb_ft_total_cache_size 640000000
innodb_ft_user_stopword_table
innodb_idle_flush_pct 100
innodb_immediate_scrub_data_uncompressed OFF
innodb_instrument_semaphores OFF
innodb_io_capacity 400
innodb_io_capacity_max 2000
innodb_kill_idle_transaction 0
innodb_large_prefix ON
innodb_lock_schedule_algorithm vats
innodb_lock_wait_timeout 50
innodb_locking_fake_changes OFF
innodb_locks_unsafe_for_binlog OFF
innodb_log_arch_dir
innodb_log_arch_expire_sec 0
innodb_log_archive OFF
innodb_log_block_size 0
innodb_log_buffer_size 100663296
innodb_log_checksum_algorithm DEPRECATED
innodb_log_checksums ON
innodb_log_compressed_pages ON
innodb_log_file_size 2147483648
innodb_log_files_in_group 2
innodb_log_group_home_dir ./
innodb_log_write_ahead_size 8192
innodb_lru_scan_depth 1024
innodb_max_bitmap_file_size 0
innodb_max_changed_pages 0
innodb_max_dirty_pages_pct 75.000000
innodb_max_dirty_pages_pct_lwm 0.000000
innodb_max_purge_lag 0
innodb_max_purge_lag_delay 0
innodb_max_undo_log_size 10485760
innodb_mirrored_log_groups 0
innodb_monitor_disable
innodb_monitor_enable
innodb_monitor_reset
innodb_monitor_reset_all
innodb_mtflush_threads 8
innodb_old_blocks_pct 37
innodb_old_blocks_time 1000
innodb_online_alter_log_max_size 134217728
innodb_open_files 400
innodb_optimize_fulltext_only OFF
innodb_page_cleaners 2
innodb_page_size 16384
innodb_prefix_index_cluster_optimization OFF
innodb_print_all_deadlocks OFF
innodb_purge_batch_size 300
innodb_purge_rseg_truncate_frequency 128
innodb_purge_threads 4
innodb_random_read_ahead OFF
innodb_read_ahead_threshold 56
innodb_read_io_threads 4
innodb_read_only OFF
innodb_replication_delay 0
innodb_rollback_on_timeout OFF
innodb_rollback_segments 128
innodb_sched_priority_cleaner 0
innodb_scrub_log OFF
innodb_scrub_log_speed 256
innodb_show_locks_held 0
innodb_show_verbose_locks 0
innodb_sort_buffer_size 1048576
innodb_spin_wait_delay 6
innodb_stats_auto_recalc ON
innodb_stats_include_delete_marked OFF
innodb_stats_method nulls_equal
innodb_stats_modified_counter 0
innodb_stats_on_metadata OFF
innodb_stats_persistent ON
innodb_stats_persistent_sample_pages 20
innodb_stats_sample_pages 8
innodb_stats_traditional ON
innodb_stats_transient_sample_pages 8
innodb_status_output OFF
innodb_status_output_locks OFF
innodb_strict_mode ON
innodb_support_xa ON
innodb_sync_array_size 1
innodb_sync_spin_loops 30
innodb_table_locks ON
innodb_temp_data_file_path ibtmp1:12M:autoextend
innodb_thread_concurrency 0
innodb_thread_sleep_delay 10000
innodb_tmpdir
innodb_track_changed_pages OFF
innodb_track_redo_log_now OFF
innodb_undo_directory ./
innodb_undo_log_truncate OFF
innodb_undo_logs 128
innodb_undo_tablespaces 0
innodb_use_atomic_writes ON
innodb_use_fallocate OFF
innodb_use_global_flush_log_at_trx_commit OFF
innodb_use_mtflush OFF
innodb_use_native_aio ON
innodb_use_stacktrace OFF
innodb_use_trim ON
innodb_version 5.7.14
innodb_write_io_threads 4
tokudb_alter_print_error OFF
tokudb_analyze_delete_fraction 1.000000
tokudb_analyze_in_background OFF
tokudb_analyze_mode TOKUDB_ANALYZE_STANDARD
tokudb_analyze_throttle 0
tokudb_analyze_time 5
tokudb_auto_analyze 0
tokudb_block_size 4194304
tokudb_bulk_fetch ON
tokudb_cache_size 12884901888
tokudb_cachetable_pool_threads 0
tokudb_cardinality_scale_percent 50
tokudb_check_jemalloc ON
tokudb_checkpoint_lock OFF
tokudb_checkpoint_on_flush_logs OFF
tokudb_checkpoint_pool_threads 0
tokudb_checkpointing_period 60
tokudb_cleaner_iterations 5
tokudb_cleaner_period 1
tokudb_client_pool_threads 0
tokudb_commit_sync ON
tokudb_compress_buffers_before_eviction ON
tokudb_create_index_online ON
tokudb_data_dir
tokudb_debug 0
tokudb_dir_per_db OFF
tokudb_directio OFF
tokudb_disable_hot_alter OFF
tokudb_disable_prefetching ON
tokudb_disable_slow_alter OFF
tokudb_empty_scan rl
tokudb_enable_partial_eviction ON
tokudb_fanout 16
tokudb_fs_reserve_percent 5
tokudb_fsync_log_period 0
tokudb_hide_default_row_format ON
tokudb_killed_time 4000
tokudb_last_lock_timeout
tokudb_load_save_space ON
tokudb_loader_memory_size 100000000
tokudb_lock_timeout 4000
tokudb_lock_timeout_debug 1
tokudb_log_dir
tokudb_max_lock_memory 1610612736
tokudb_optimize_index_fraction 1.000000
tokudb_optimize_index_name
tokudb_optimize_throttle 0
tokudb_pk_insert_mode 1
tokudb_prelock_empty ON
tokudb_read_block_size 65536
tokudb_read_buf_size 131072
tokudb_read_status_frequency 10000
tokudb_row_format tokudb_zlib
tokudb_rpl_check_readonly ON
tokudb_rpl_lookup_rows ON
tokudb_rpl_lookup_rows_delay 0
tokudb_rpl_unique_checks ON
tokudb_rpl_unique_checks_delay 0
tokudb_strip_frm_data OFF
tokudb_support_xa ON
tokudb_tmp_dir
tokudb_version 5.6.35-80.0
tokudb_write_status_frequency 1000

Why is the memory consumption that high? Is there something I missed? Are there any memory profiling tools available for MariaDB that don't kill performance? Sadly, MariaDB does not have the sys schema or the performance_schema.memory_summary_* tables introduced in MySQL 5.7.

I don't think this is connected to the problem, but it's worth mentioning: if I run the mail command-line program, I get this error:

/usr/bin/mailx: /usr/lib/x86_64-linux-gnu/libmariadbclient.so.18: no version information available (required by /usr/lib/x86_64-linux-gnu/libmu_auth.so.5)

But sending the mail works. This started happening after the Debian and MariaDB upgrade. Thanks in advance
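As a back-of-the-envelope sanity check (not from the thread): summing the configured caps from the my.cnf above gives a rough upper bound of what these settings alone should account for. The arithmetic below uses the configured values; real allocations differ, and anything resident beyond this bound is coming from outside these caps (allocator fragmentation is a common suspect):

```python
# Rough memory budget from the configuration above. These are configured
# caps, not measured allocations.
GiB = 1024 ** 3
MiB = 1024 ** 2

innodb_buffer_pool = 12 * GiB     # innodb_buffer_pool_size = 12G
tokudb_cache = 12 * GiB           # tokudb_cache_size = 12G
query_cache = 256 * MiB           # query_cache_size = 256M
key_buffer = 128 * MiB            # key_buffer_size = 128M

# Worst-case per-connection buffers: sort (4M) + join (2M)
# + read (2M) + read_rnd (1M), times max_connections = 501.
per_conn = (4 + 2 + 2 + 1) * MiB
max_connections = 501

total = (innodb_buffer_pool + tokudb_cache + query_cache + key_buffer
         + per_conn * max_connections)
print(round(total / GiB, 1))  # prints 28.8
```

Since the process grows well past ~29 GiB resident, the growth cannot be explained by these knobs alone.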
Gabscap (111 rep)
Jul 29, 2017, 11:55 PM • Last activity: Aug 30, 2017, 03:02 PM
1 votes
0 answers
608 views
How do I set the compression level in TokuDB?
TokuDB supports various compression algorithms (lzma, zlib, zstd, etc.). These algorithms are themselves configurable: depending on the algorithm, the compression level can be set to a value between 1 and 10. How do I do this when creating a table that uses one of those compression algorithms?

https://www.percona.com/doc/percona-server/LATEST/tokudb/using_tokudb.html#tokudb-compression
https://www.percona.com/blog/2016/03/09/evaluating-database-compression-methods/

MariaDB [test]> create table t_zlib (x int, y int) compression=tokudb_zlib;
MariaDB [test]> show create table t_zlib;
CREATE TABLE t_zlib (
  x int(11) DEFAULT NULL,
  y int(11) DEFAULT NULL
) ENGINE=TokuDB DEFAULT CHARSET=latin1 compression=tokudb_zlib
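A hedged note, not an authoritative answer: to my knowledge TokuDB does not expose a per-table numeric compression level; the speed-vs-ratio dial is the choice of row format itself, where the lighter and heavier algorithms act as the coarse levels. A sketch on the same MariaDB build:

-- Assumption: on this build the per-table `compression` option selects
-- the algorithm only; quicklz/snappy are the fast end, lzma the
-- small (high-compression) end of the range.
CREATE TABLE t_fast  (x INT, y INT) ENGINE=TokuDB COMPRESSION=TOKUDB_QUICKLZ;
CREATE TABLE t_small (x INT, y INT) ENGINE=TokuDB COMPRESSION=TOKUDB_LZMA;
CREATE TABLE t_none  (x INT, y INT) ENGINE=TokuDB COMPRESSION=TOKUDB_UNCOMPRESSED;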
nelaaro (767 rep)
Aug 22, 2017, 09:41 AM
1 votes
1 answers
514 views
Install MariaDB 10 Generic Binaries + TokuDB
I used to install MariaDB from the Generic Binaries on a Debian Wheezy 64-bit machine: **mariadb-10.0.13-linux-x86_64.tar.gz** (from downloads.mariadb.org). I would like to use TokuDB as the storage engine. However, TokuDB is available in the "glibc_214" generic Linux package as well as in the .deb packages, but it is NOT available in the non-glibc_214 generic Linux package... I can't use "mariadb-10.0.13-linux-glibc_214-x86_64.tar.gz" because Debian Wheezy's glibc is 2.13:

$~: ldd --version
ldd (Debian EGLIBC 2.13-38+deb7u4) 2.13

Why is TokuDB not available in the standard 64-bit version of the Generic Binaries? Is it safe to copy ha_tokudb.so from a Debian Wheezy apt install (/usr/lib/mysql/plugin/ha_tokudb.so) to a generic Linux "manual" installation (in ./lib/plugin)? If I do so, I can enable TokuDB and all seems to work (I hope so, anyway...):

(none)=# INSTALL SONAME 'ha_tokudb';
Query OK, 0 rows affected (0.30 sec)

(none)=# SHOW ENGINES;
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
| Engine             | Support | Comment                                                                    | Transactions | XA   | Savepoints |
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
...
| TokuDB             | YES     | Tokutek TokuDB Storage Engine with Fractal Tree(tm) Technology             | YES          | YES  | YES        |
...
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
11 rows in set (0.00 sec)
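Beyond SHOW ENGINES, a hedged way to double-check that the copied plugin is the one actually loaded, and from which library file, is to query information_schema:

-- Lists the TokuDB engine plugin and its information_schema companions
-- along with their status and the shared library they were loaded from.
SELECT PLUGIN_NAME, PLUGIN_STATUS, PLUGIN_LIBRARY
FROM information_schema.PLUGINS
WHERE PLUGIN_NAME LIKE 'Toku%';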
Nicolas Payart (2508 rep)
Sep 24, 2014, 04:02 PM • Last activity: Jul 4, 2017, 09:26 PM
0 votes
1 answers
471 views
TokuDB plugin gone missing after update, causing MySQL to fail to start?
I have installed Percona MySQL and am trying to move away from Oracle MySQL. I have been using InnoDB for quite some time. I heard and read a lot about TokuDB being able to support large DBs etc., so I decided to give it a try. It had been working fine; then today I ran yum update and it stopped working. Below is my mysqld.cnf:

# Percona Server template configuration

[mysqld]
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
#default-storage-engine=tokudb

Initially I had made TokuDB my default storage engine; after the update everything was messed up, and only after commenting out the line #default-storage-engine=tokudb would MySQL start. Here is how the update looked when I ran yum update:
```
Updating   : Percona-Server-shared-compat-57-5.7.18-14.1.el7.x86_64    1/10
Updating   : Percona-Server-shared-57-5.7.18-14.1.el7.x86_64           2/10
Updating   : Percona-Server-client-57-5.7.18-14.1.el7.x86_64           3/10
Updating   : Percona-Server-server-57-5.7.18-14.1.el7.x86_64           4/10
-------------
* The suggested mysql options and settings are in /etc/percona-server.conf.d/mysqld.cnf
* If you want to use mysqld.cnf as default configuration file please make backup of /etc/my.cnf
* Once it is done please execute the following commands:
rm -rf /etc/my.cnf
update-alternatives --install /etc/my.cnf my.cnf "/etc/percona-server.cnf" 200
-------------
Percona Server is distributed with several useful UDF (User Defined Function) from Percona Toolkit.
Run the following commands to create these functions:
mysql -e "CREATE FUNCTION fnv1a_64 RETURNS INTEGER SONAME 'libfnv1a_udf.so'"
mysql -e "CREATE FUNCTION fnv_64 RETURNS INTEGER SONAME 'libfnv_udf.so'"
mysql -e "CREATE FUNCTION murmur_hash RETURNS INTEGER SONAME 'libmurmur_udf.so'"
See http://www.percona.com/doc/percona-server/5.7/management/udf_percona_toolkit.html for more details
Updating   : Percona-Server-tokudb-57-5.7.18-14.1.el7.x86_64           5/10
Cleanup    : Percona-Server-tokudb-57-5.7.17-13.1.el7.x86_64           6/10
Cleanup    : Percona-Server-server-57-5.7.17-13.1.el7.x86_64           7/10
Cleanup    : Percona-Server-client-57-5.7.17-13.1.el7.x86_64           8/10
Cleanup    : Percona-Server-shared-57-5.7.17-13.1.el7.x86_64           9/10
Cleanup    : Percona-Server-shared-compat-57-5.7.17-13.1.el7.x86_64   10/10
Verifying  : Percona-Server-tokudb-57-5.7.18-14.1.el7.x86_64           1/10
Verifying  : Percona-Server-shared-57-5.7.18-14.1.el7.x86_64           2/10
Verifying  : Percona-Server-server-57-5.7.18-14.1.el7.x86_64           3/10
Verifying  : Percona-Server-shared-compat-57-5.7.18-14.1.el7.x86_64    4/10
Verifying  : Percona-Server-client-57-5.7.18-14.1.el7.x86_64           5/10
Verifying  : Percona-Server-server-57-5.7.17-13.1.el7.x86_64           6/10
Verifying  : Percona-Server-shared-compat-57-5.7.17-13.1.el7.x86_64    7/10
Verifying  : Percona-Server-tokudb-57-5.7.17-13.1.el7.x86_64           8/10
Verifying  : Percona-Server-shared-57-5.7.17-13.1.el7.x86_64           9/10
Verifying  : Percona-Server-client-57-5.7.17-13.1.el7.x86_64          10/10

Updated:
  Percona-Server-client-57.x86_64 0:5.7.18-14.1.el7
  Percona-Server-server-57.x86_64 0:5.7.18-14.1.el7
  Percona-Server-shared-57.x86_64 0:5.7.18-14.1.el7
  Percona-Server-shared-compat-57.x86_64 0:5.7.18-14.1.el7
  Percona-Server-tokudb-57.x86_64 0:5.7.18-14.1.el7

Complete!
```

Here is my current mysql.log:

```
2017-05-15T06:03:04.579874Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-05-15T06:03:04.581301Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.18-14) starting as process 2454 ...
2017-05-15T06:03:04.585394Z 0 [Note] InnoDB: PUNCH HOLE support available
2017-05-15T06:03:04.585425Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-05-15T06:03:04.585433Z 0 [Note] InnoDB: Uses event mutexes
2017-05-15T06:03:04.585440Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2017-05-15T06:03:04.585449Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.7
2017-05-15T06:03:04.585460Z 0 [Note] InnoDB: Using Linux native AIO
2017-05-15T06:03:04.585733Z 0 [Note] InnoDB: Number of pools: 1
2017-05-15T06:03:04.585851Z 0 [Note] InnoDB: Using CPU crc32 instructions
2017-05-15T06:03:04.587208Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2017-05-15T06:03:04.591145Z 0 [Note] InnoDB: Completed initialization of buffer pool
2017-05-15T06:03:04.592947Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2017-05-15T06:03:04.602996Z 0 [Note] InnoDB: The first innodb_system data file 'ibdata1' did not exist. A new tablespace will be created!
2017-05-15T06:03:04.603164Z 0 [Note] InnoDB: Setting file './ibdata1' size to 12 MB. Physically writing the file full; Please wait ...
2017-05-15T06:03:04.762580Z 0 [Note] InnoDB: File './ibdata1' size is now 12 MB.
2017-05-15T06:03:04.763040Z 0 [Note] InnoDB: Setting log file ./ib_logfile101 size to 48 MB
2017-05-15T06:03:05.221084Z 0 [Note] InnoDB: Setting log file ./ib_logfile1 size to 48 MB
2017-05-15T06:03:05.862770Z 0 [Note] InnoDB: Created parallel doublewrite buffer at /var/lib/mysql/xb_doublewrite, size 3932160 bytes
2017-05-15T06:03:05.946047Z 0 [Note] InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
2017-05-15T06:03:05.946134Z 0 [Warning] InnoDB: New log files created, LSN=45790
2017-05-15T06:03:05.946160Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2017-05-15T06:03:05.946222Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2017-05-15T06:03:06.096069Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2017-05-15T06:03:06.096267Z 0 [Note] InnoDB: Doublewrite buffer not found: creating new
2017-05-15T06:03:06.296244Z 0 [Note] InnoDB: Doublewrite buffer created
2017-05-15T06:03:06.310384Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
2017-05-15T06:03:06.310410Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
2017-05-15T06:03:06.310601Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2017-05-15T06:03:06.354573Z 0 [Note] InnoDB: Foreign key constraint system tables created
2017-05-15T06:03:06.354637Z 0 [Note] InnoDB: Creating tablespace and datafile system tables.
2017-05-15T06:03:06.387884Z 0 [Note] InnoDB: Tablespace and datafile system tables created.
2017-05-15T06:03:06.387937Z 0 [Note] InnoDB: Creating sys_virtual system tables.
2017-05-15T06:03:06.421245Z 0 [Note] InnoDB: sys_virtual table created
2017-05-15T06:03:06.421303Z 0 [Note] InnoDB: Creating zip_dict and zip_dict_cols system tables.
2017-05-15T06:03:06.454581Z 0 [Note] InnoDB: zip_dict and zip_dict_cols system tables created.
2017-05-15T06:03:06.454793Z 0 [Note] InnoDB: Waiting for purge to start
2017-05-15T06:03:06.504972Z 0 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.7.18-14 started; log sequence number 0
2017-05-15T06:03:06.505428Z 0 [Note] Plugin 'FEDERATED' is disabled.
2017-05-15T06:03:06.510037Z 0 [Warning] InnoDB: Cannot open table mysql/plugin from the internal data dictionary of InnoDB though the .frm file for the table exists. Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting.html for how to resolve the issue.
mysqld: Table 'mysql.plugin' doesn't exist
2017-05-15T06:03:06.510134Z 0 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
2017-05-15T06:03:06.511046Z 0 [Warning] InnoDB: Cannot open table mysql/gtid_executed from the internal data dictionary of InnoDB though the .frm file for the table exists. Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting.html for how to resolve the issue.
mysqld: Table 'mysql.gtid_executed' doesn't exist
2017-05-15T06:03:06.511091Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2017-05-15T06:03:06.514560Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
2017-05-15T06:03:06.514581Z 0 [Note] Skipping generation of SSL certificates as certificate files are present in data directory.
2017-05-15T06:03:06.536783Z 0 [Warning] CA certificate ca.pem is self signed.
2017-05-15T06:03:06.536866Z 0 [Note] Skipping generation of RSA key pair as key files are present in data directory.
2017-05-15T06:03:06.537595Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
2017-05-15T06:03:06.537649Z 0 [Note] IPv6 is available.
2017-05-15T06:03:06.537671Z 0 [Note]   - '::' resolves to '::';
2017-05-15T06:03:06.537754Z 0 [Note] Server socket created on IP: '::'.
2017-05-15T06:03:06.571570Z 0 [Warning] InnoDB: Cannot open table mysql/server_cost from the internal data dictionary of InnoDB though the .frm file for the table exists. Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting.html for how to resolve the issue.
2017-05-15T06:03:06.571611Z 0 [Warning] Failed to open optimizer cost constant tables
2017-05-15T06:03:06.572621Z 0 [Warning] InnoDB: Cannot open table mysql/time_zone_leap_second from the internal data dictionary of InnoDB though the .frm file for the table exists. Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting.html for how to resolve the issue.
2017-05-15T06:03:06.572655Z 0 [Warning] Can't open and lock time zone table: Table 'mysql.time_zone_leap_second' doesn't exist trying to live without them
2017-05-15T06:03:06.573327Z 0 [Warning] InnoDB: Cannot open table mysql/servers from the internal data dictionary of InnoDB though the .frm file for the table exists. Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting.html for how to resolve the issue.
2017-05-15T06:03:06.573357Z 0 [ERROR] Can't open and lock privilege tables: Table 'mysql.servers' doesn't exist
2017-05-15T06:03:06.582207Z 0 [Note] Event Scheduler: Loaded 0 events
2017-05-15T06:03:06.582530Z 0 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.7.18-14'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Percona Server (GPL), Release 14, Revision 2c06f4d
2017-05-15T06:03:06.582552Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check.
2017-05-15T06:03:06.582560Z 0 [Note] Beginning of list of non-natively partitioned tables
2017-05-15T06:03:06.606414Z 0 [Note] End of list of non-natively partitioned tables
2017-05-15T06:10:22.125681Z 5 [Warning] InnoDB: Cannot open table mysql/plugin from the internal data dictionary of InnoDB though the .frm file for the table exists. Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting.html for how to resolve the issue.
2017-05-15T06:14:47.492149Z 8 [Warning] InnoDB: Cannot open table mysql/plugin from the internal data dictionary of InnoDB though the .frm file for the table exists. Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting.html for how to resolve the issue.
2017-05-15T06:16:05.397217Z 11 [Warning] InnoDB: Cannot open table mysql/plugin from the internal data dictionary of InnoDB though the .frm file for the table exists. Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting.html for how to resolve the issue.
```

What is the best solution moving forward? Should I drop TokuDB and just continue with InnoDB, or is this a bug in Percona itself?
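For what it's worth, the log shows InnoDB creating a brand-new `ibdata1` and then failing to find the `mysql.*` system tables, including `mysql.plugin`, which is where the TokuDB registration lives; with that table gone, `default-storage-engine=tokudb` points at an engine the server can no longer load. A hedged sketch of a recovery order, assuming the data directory is intact and `ha_tokudb.so` (the stock library name) is still present under the server's plugin directory: let `mysql_upgrade` rebuild the system tables first, keep the default storage engine on InnoDB, then re-register and verify the plugin before switching back:

```sql
-- Run after `mysql_upgrade --force` has recreated mysql.plugin,
-- as the [ERROR] line in the log suggests.
INSTALL PLUGIN tokudb SONAME 'ha_tokudb.so';

-- Confirm the engine is ACTIVE before restoring
-- default-storage-engine=tokudb in mysqld.cnf:
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM INFORMATION_SCHEMA.PLUGINS
WHERE PLUGIN_NAME LIKE 'Toku%';
```

Keeping `default-storage-engine` on InnoDB until the plugin is confirmed active means a failed plugin load degrades the server instead of preventing startup entirely.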
user8012596 (227 rep)
May 15, 2017, 06:30 AM • Last activity: May 20, 2017, 06:46 PM
Showing page 1 of 20 total questions