
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
2 answers
144 views
MariaDB Spider Engine UDF spider_bg_direct_sql can't find TEMPORARY table
I am testing the Spider engine in MariaDB. In version mariadb-10.1.12-linux-x86_64 the UDF works, but it has some issues, so I want to upgrade to a newer version. However, in mariadb-10.2.18-linux-x86_64, mariadb-10.3.12-linux-x86_64, mariadb-10.3.9-linux-x86_64 and mariadb-10.4.1-linux-x86_64, the Spider UDF [spider_bg_direct_sql](https://mariadb.com/kb/en/library/spider_bg_direct_sql/) can't find the TEMPORARY table, even though I can select from that temporary table outside of the UDF. Does anyone know how to solve this?

CREATE TEMPORARY TABLE test.test1 (
  IMSI VARCHAR(20) NOT NULL,
  DATA_DATE DATE NOT NULL,
  PU_ID MEDIUMINT(9) NOT NULL DEFAULT 0,
  DATA_DATE_TS INT(11) NOT NULL,
  TECH_MASK TINYINT(2) NOT NULL,
  PRIMARY KEY (IMSI, DATA_DATE, PU_ID, DATA_DATE_TS, TECH_MASK)
);

# this command can find the table
SELECT * FROM test.test1;

SELECT spider_bg_direct_sql(
  'select * from gt_global_imsi.table_imsi_pu',
  'test.test1',
  'srv "PU1", port "3308"'
);
# error msg:
# Error Code: 12703
# Temporary table 'test.test1' is not found

My Spider settings:

Variable_name  Value
spider_auto_increment_mode  -1
spider_bgs_first_read  -1
spider_bgs_mode  -1
spider_bgs_second_read  -1
spider_bka_engine
spider_bka_mode  -1
spider_bka_table_name_type  -1
spider_block_size  16384
spider_bulk_size  -1
spider_bulk_update_mode  -1
spider_bulk_update_size  -1
spider_casual_read  -1
spider_conn_recycle_mode  0
spider_conn_recycle_strict  0
spider_conn_wait_timeout  10
spider_connect_error_interval  1
spider_connect_mutex  OFF
spider_connect_retry_count  1000
spider_connect_retry_interval  1000
spider_connect_timeout  -1
spider_crd_bg_mode  -1
spider_crd_interval  -1
spider_crd_mode  -1
spider_crd_sync  -1
spider_crd_type  -1
spider_crd_weight  -1
spider_delete_all_rows_type  -1
spider_direct_dup_insert  -1
spider_direct_order_limit  -1
spider_dry_access  OFF
spider_error_read_mode  -1
spider_error_write_mode  -1
spider_first_read  -1
spider_force_commit  1
spider_general_log  OFF
spider_index_hint_pushdown  OFF
spider_init_sql_alloc_size  -1
spider_internal_limit  -1
spider_internal_offset  -1
spider_internal_optimize  -1
spider_internal_optimize_local  -1
spider_internal_sql_log_off  -1
spider_internal_unlock  OFF
spider_internal_xa  OFF
spider_internal_xa_id_type  0
spider_internal_xa_snapshot  0
spider_load_crd_at_startup  -1
spider_load_sts_at_startup  -1
spider_local_lock_table  OFF
spider_lock_exchange  OFF
spider_log_result_error_with_sql  0
spider_log_result_errors  0
spider_low_mem_read  -1
spider_max_connections  0
spider_max_order  -1
spider_multi_split_read  -1
spider_net_read_timeout  -1
spider_net_write_timeout  -1
spider_ping_interval_at_trx_start  3600
spider_quick_mode  -1
spider_quick_page_size  -1
spider_read_only_mode  -1
spider_remote_access_charset
spider_remote_autocommit  -1
spider_remote_default_database
spider_remote_sql_log_off  -1
spider_remote_time_zone
spider_remote_trx_isolation  -1
spider_reset_sql_alloc  -1
spider_same_server_link  OFF
spider_second_read  -1
spider_select_column_mode  -1
spider_selupd_lock_mode  -1
spider_semi_split_read  -1
spider_semi_split_read_limit  -1
spider_semi_table_lock  1
spider_semi_table_lock_connection  -1
spider_semi_trx  ON
spider_semi_trx_isolation  -1
spider_skip_default_condition  -1
spider_skip_parallel_search  -1
spider_split_read  -1
spider_store_last_crd  -1
spider_store_last_sts  -1
spider_sts_bg_mode  -1
spider_sts_interval  -1
spider_sts_mode  -1
spider_sts_sync  -1
spider_support_xa  ON
spider_sync_autocommit  ON
spider_sync_trx_isolation  ON
spider_table_crd_thread_count  10
spider_table_init_error_interval  1
spider_table_sts_thread_count  10
spider_udf_ct_bulk_insert_interval  -1
spider_udf_ct_bulk_insert_rows  -1
spider_udf_ds_bulk_insert_rows  -1
spider_udf_ds_table_loop_mode  -1
spider_udf_ds_use_real_table  -1
spider_udf_table_lock_mutex_count  20
spider_udf_table_mon_mutex_count  20
spider_use_all_conns_snapshot  OFF
spider_use_consistent_snapshot  OFF
spider_use_default_database  ON
spider_use_flash_logs  OFF
spider_use_handler  -1
spider_use_pushdown_udf  -1
spider_use_snapshot_with_flush_tables  0
spider_use_table_charset  -1
spider_version  3.3.13
spider_xa_register_mode  1
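One avenue worth testing (an assumption suggested by the spider_udf_ds_use_real_table variable visible in the dump above, not a confirmed fix) is pointing the UDF at a permanent result table instead of a TEMPORARY one:

```
-- Hedged sketch: test1_real is a hypothetical table name.
-- Assumption: spider_udf_ds_use_real_table = 1 lets the Spider data-service
-- UDFs target real (non-temporary) tables; verify against your version's docs.
SET SESSION spider_udf_ds_use_real_table = 1;

CREATE TABLE test.test1_real LIKE test.test1;  -- same structure, but permanent

SELECT spider_bg_direct_sql(
  'select * from gt_global_imsi.table_imsi_pu',
  'test.test1_real',
  'srv "PU1", port "3308"'
);
```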
Toby Chen (1 rep)
Jan 24, 2019, 04:53 AM • Last activity: Jul 18, 2025, 12:01 PM
0 votes
0 answers
32 views
MariaDB Spider Tuning
If I have a Spider database which has only Spider tables, what are the important configuration keys for tuning Spider? For example, I heard about how Spider works with MEMORY tables, so should I increase key_buffer_size on a Spider-only MariaDB? What about the other spider_ parameters? Which spider_ variables are the important ones? There are over 123 of them! My server has a 32-core CPU and 256 GB of RAM.
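Not an authoritative tuning guide, but for orientation: Spider tables store no rows locally, so the connection-handling and background-search variables are the knobs most often discussed. A minimal sketch, with illustrative values that are assumptions rather than recommendations for this hardware:

```
-- List every Spider knob with its current value.
SHOW GLOBAL VARIABLES LIKE 'spider\_%';

-- Illustrative assumptions, not tuned recommendations:
SET GLOBAL spider_conn_recycle_mode = 1;  -- reuse connections to the data nodes
SET GLOBAL spider_bgs_mode = 2;           -- allow background (parallel) search
```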
user216085 (13 rep)
Nov 16, 2024, 05:02 PM • Last activity: Nov 17, 2024, 03:55 AM
2 votes
1 answer
1227 views
Expanding MariaDB cluster (sharding)
I have an expanding 3-node MariaDB cluster and need to expand the available storage. Each node has (to make the maths slightly easier) 120Gb of database storage. As this is replicated, the total DB storage is also therefore 120Gb. If I add a 4th node, I will gain extra resilience and maybe better read speed, but still only have 120Gb of storage.

             Server A    Server B    Server C
             +-------+   +-------+   +-------+
             | 120Gb |   | 120Gb |   | 120Gb |
             +-------+   +-------+   +-------+
Cluster1 --> +-------+   +-------+   +-------+
  (120Gb)    | 120Gb |   | 120Gb |   | 120Gb |
             +-------+   +-------+   +-------+

Total storage 120Gb

I am looking at sharding, using SPIDER tables within MariaDB, to increase the available space without losing my current x3 replication. With SPIDER you can store a table across multiple databases, so I am thinking I need to move to a model like this, adding a 4th server D, also with 120Gb, but effectively running 4 database clusters across these boxes:

             Server A    Server B    Server C    Server D
             +-------+   +-------+   +-------+   +-------+
             | 120Gb |   | 120Gb |   | 120Gb |   | 120Gb |
             +-------+   +-------+   +-------+   +-------+
Spider   --> +-------+   +-------+   +-------+   +-------+
  ( ~0Gb)    +-------+   +-------+   +-------+   +-------+
Cluster1 --> +-------+   +-------+   +-------+
  ( 40Gb)    | 40Gb  |   | 40Gb  |   | 40Gb  |
             +-------+   +-------+   +-------+
Cluster2 -->             +-------+   +-------+   +-------+
  ( 40Gb)                | 40Gb  |   | 40Gb  |   | 40Gb  |
                         +-------+   +-------+   +-------+
Cluster3 --> +-------+               +-------+   +-------+
  ( 40Gb)    | 40Gb  |               | 40Gb  |   | 40Gb  |
             +-------+               +-------+   +-------+
Cluster4 --> +-------+   +-------+               +-------+
  ( 40Gb)    | 40Gb  |   | 40Gb  |               | 40Gb  |
             +-------+   +-------+               +-------+

(total 160Gb)

Each cluster (horizontally) would have 3-box replication and store 40Gb, while each server (vertically) would store up to 120Gb. Each additional box would therefore add one third of 120Gb.

This makes sense to me; I have been testing and I can get multiple MariaDB instances running per box. It is then just a matter of keeping all the plans in order: organising which slices live on which boxes and keeping track of port numbers (each MariaDB instance has to listen on a different port). I can even run a thin cluster slice with just the logical Spider tables on the same boxes too.

Is there an easier way? I can find lots of guides for using Spider, but very few for using MaxScale. Can anyone point me in the right direction please? (I feel like I am having to re-invent too much myself when I'm sure this is not an uncommon situation.)
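For concreteness, the thin Spider layer in the diagram is typically declared with CREATE SERVER entries plus a partitioned Spider table, as sketched below (server names, addresses, credentials, and the hash split are assumptions for illustration; the questions further down this page show the same pattern):

```
-- One server entry per backend cluster (names and addresses assumed).
CREATE SERVER cluster1 FOREIGN DATA WRAPPER mysql
  OPTIONS (HOST '10.0.0.1', PORT 3307, USER 'spider', PASSWORD 'secret', DATABASE 'app');
-- ...and likewise cluster2, cluster3, cluster4.

CREATE TABLE app.big_table (
  id BIGINT UNSIGNED NOT NULL,
  payload TEXT,
  PRIMARY KEY (id)
) ENGINE=Spider COMMENT='wrapper "mysql", table "big_table"'
PARTITION BY HASH (id) (
  PARTITION p0 COMMENT = 'srv "cluster1"',
  PARTITION p1 COMMENT = 'srv "cluster2"',
  PARTITION p2 COMMENT = 'srv "cluster3"',
  PARTITION p3 COMMENT = 'srv "cluster4"'
);
```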
user3566845 (113 rep)
Feb 7, 2016, 11:25 AM • Last activity: Sep 7, 2024, 11:02 PM
0 votes
0 answers
50 views
Result rows duplicated by spider engine
I have two MySQL servers, 'G0' and 'G1', each with a one-row test table. G0:
mysql> select * from test.t1;
+----+--------+
| id | node   |
+----+--------+
|  1 | DB-G0  |
+----+--------+
1 rows in set (0.00 sec)
G1:
mysql> select * from test.t1;
+----+--------+
| id | node   |
+----+--------+
|  2 | DB-G1  |
+----+--------+
1 rows in set (0.00 sec)
I'm then using the Spider engine offered by MariaDB 10.4 (mysqld Ver 10.4.12-MariaDB-1:10.4.12+maria~bionic-log for debian-linux-gnu on x86_64 (mariadb.org binary distribution)) to combine them like this:
CREATE TABLE t1 (
   id BIGINT PRIMARY KEY,
   node TEXT
) ENGINE=Spider
COMMENT='wrapper "mysql"'
PARTITION BY HASH (id) (
 PARTITION par0 COMMENT = 'srv "G0", tbl "t1"',
 PARTITION par1 COMMENT = 'srv "G1", tbl "t1"'
);
It's working well for a simple query like this:
MariaDB [(none)]> select * from test.t1;
+-----+--------+
| id  | node   |
+-----+--------+
|   1 | DB-G0  |
|   2 | DB-G1  |
+-----+--------+
2 rows in set (0.013 sec)
But the result rows are duplicated when I use this condition:
MariaDB [(none)]> select * from test.t1 where id != 0;
+-----+--------+
| id  | node   |
+-----+--------+
|   1 | DB-G0  |
|   2 | DB-G1  |
|   1 | DB-G0  |
|   2 | DB-G1  |
+-----+--------+
4 rows in set (0.017 sec)
Here are my Spider-related settings:
#
# * Spider
#
spider_bulk_update_mode      = 2
spider_bulk_update_size      = 512K
spider_bulk_size             = 512K
spider_conn_wait_timeout     = 5
spider_connect_timeout       = 3
spider_direct_dup_insert     = 1
spider_direct_order_limit    = 1
spider_multi_split_read      = 1
spider_net_read_timeout      = 5
spider_net_write_timeout     = 5
spider_quick_mode            = 3
spider_quick_page_size       = 1000
spider_remote_trx_isolation  = 1
spider_support_xa            = 0

spider_remote_time_zone        = +00:00
spider_remote_access_charset   = utf8mb4
spider_remote_autocommit       = 1
spider_remote_default_database = 0

spider_general_log          = 1
spider_internal_sql_log_off = 0
spider_log_result_errors    = 4
I commented out all the above settings and tried again; the results are the same. Is it something I'm doing wrong, or is it a bug?
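One way to narrow down whether the duplication happens on the Spider node or on the data nodes (a debugging sketch, not a diagnosis) is to compare the SQL Spider generates with what G0 and G1 actually receive:

```
-- On the Spider node: spider_general_log writes the SQL Spider sends to the
-- data nodes into the general query log.
SET GLOBAL spider_general_log = ON;
SET GLOBAL general_log = ON;

-- On each backend (G0 and G1): capture incoming queries.
SET GLOBAL general_log = ON;

-- Re-run `SELECT * FROM test.t1 WHERE id != 0` and check whether each
-- partition's query appears once or twice in the logs.
```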
ASBai (103 rep)
Apr 12, 2020, 12:02 AM
1 vote
0 answers
30 views
What is the meaning of MariaDB `mysql`.`servers` table constraint?
I am using the SPIDER engine to access a remote table from within my MariaDB database, which requires a server definition in the `mysql`.`servers` table, created via a `CREATE SERVER` statement. I noticed that this table has a column named `Db` of type `char(64)` with a constraint referencing the `information_schema`.`SCHEMATA`.`SCHEMA_NAME` field. I was wondering what the meaning of this is, since the database referred to by this field might be on a different server. I feel I am missing something, but I can't guess what.
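For what it's worth, the `Db` column itself is simply populated from the DATABASE option of `CREATE SERVER`, naming the default schema on the remote end; a quick way to see this (a sketch with assumed option values):

```
CREATE SERVER demo_srv FOREIGN DATA WRAPPER mysql
  OPTIONS (HOST '192.0.2.10', DATABASE 'remote_db',
           USER 'spider', PASSWORD 'secret', PORT 3306);

-- Db shows 'remote_db', whether or not such a schema exists locally.
SELECT Server_name, Host, Db FROM mysql.servers;
```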
beddu (11 rep)
Apr 9, 2020, 11:16 AM
0 votes
0 answers
177 views
MariaDB SPIDER engine
I am trying to include just one table from another MariaDB server (let's call it A) in the database on another (even physically separate) server (let's call it B), and I came to the conclusion that the SPIDER engine might be a good option. But I can't make it work. Here is what I did so far:

**On server A:**

- I installed the SPIDER engine, even though I don't think it is necessary
- Port 3306 is visible from outside
- MariaDB is configured to allow connections from everywhere
- I've created a user test_user with host set to %
- I've created the following table in the test database:

CREATE TABLE sss (
  id INT PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(50)
) ENGINE=InnoDB;

**On server B:**

- I installed the SPIDER engine
- I've created the following SERVER:

CREATE SERVER dataNode1
FOREIGN DATA WRAPPER mysql
OPTIONS (HOST 'IP-server-A', DATABASE 'test', USER 'test_user', PASSWORD 'passwd', PORT 3306);

- Then I created the table as follows:

CREATE TABLE sss (
  id INT PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(50)
) ENGINE=Spider COMMENT='wrapper "mysql", srv "dataNode1", table "sss"';

I can connect as test_user from server B to server A:

mysql -h IP-server-A -u test_user -ppasswd

But when I log in to MariaDB on server B as root and try to access the table sss, I get:

#1227 - Access denied; you need (at least one of) the SUPER privilege(s) for this operation

What have I missed? Thank you.
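A hedged first check (the assumption being that Spider's remote connection fails for lack of privileges rather than anything on server B; not a confirmed diagnosis): creating test_user does not by itself grant it any rights on the remote table.

```
-- On server A; the grant scope is chosen for illustration.
GRANT SELECT, INSERT, UPDATE, DELETE ON test.sss TO 'test_user'@'%';
FLUSH PRIVILEGES;
```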
Martin Dimitrov (169 rep)
Jul 20, 2019, 06:29 AM
1 vote
0 answers
291 views
MariaDB Spider sharding: automatic XA commit?
I'm planning to use Spider to shard my huge table across multiple backend nodes. What I'd like to do is insert into the virtual big table on the Spider node(s) and have Spider distribute the rows to the right backend node (shard) for me, while maintaining the isolation semantics of a simple commit by an application. In a picture from the Spider documentation: [image: sharding-setup diagram from the Spider Storage Engine Overview]

While reading these docs, especially the diagram above, I assumed that Spider would split up the single commit for me into a two-phase group commit (XA 2PC), sending it to the backend nodes but only fully committing if all shards agree to commit their part. This assumption is also supported by various other statements elsewhere, like in [MDEV-14090](https://jira.mariadb.org/browse/MDEV-14090):

> Internally, Spider implements COMMIT by executing XA PREPARE on each shard, and then executing XA COMMIT on each shard.

I've tried to test this "simple-COMMIT-to-XA-2PC-transaction" assumption using the following setup, and guess what I saw...

* mariadb-server version 10.4.5+maria~bionic
* mariadb-plugin-spider version 1:10.4.5+maria~bionic
CREATE OR REPLACE SERVER srvA
FOREIGN DATA WRAPPER mysql 
OPTIONS (USER 'spider', PASSWORD 'spider', PORT 3306, HOST '192.168.199.10', DATABASE 'testdb');

CREATE OR REPLACE SERVER srvB
FOREIGN DATA WRAPPER mysql 
OPTIONS (USER 'spider', PASSWORD 'spider', PORT 3306, HOST '192.168.199.11', DATABASE 'testdb');

create table mytable (
  id INT unsigned NOT NULL,
  some_field TEXT(100) DEFAULT NULL,
  PRIMARY KEY (id)
)
ENGINE=spider
partition by list (mod(id, 2)) (
    partition p00 values in(0)  comment = 'server "srvA", table "mytable_shard0"',
    partition p01 values in(1)  comment = 'server "srvB", table "mytable_shard1"'
);
Everything else on the Spider node is left at the default settings. Looking at Wireshark PCAPs, this is what I see, in chronological packet order, on the single bridge interface all my test nodes are connected to:

1. Client connects to the Spider node, sets connection parameters, disables autocommit, sets transaction level SERIALIZABLE.
2. Client does an insert on the Spider node:
   1. it sends START TRANSACTION;
   2. it sends a large 1000-row single-statement insert spanning shards (knowing nothing about shards) and awaits the response.
3. Spider communicates with srv1:
   1. Spider connects to srv1, sets connection parameters, sets transaction level SERIALIZABLE, etc.;
   2. Spider sends srv1's share of the insert rows to srv1;
   3. Spider receives OK from srv1, still in-transaction.
4. Spider communicates with srv2: [omitted] same as with srv1.
5. Client receives OK, still in-transaction.
6. Client sends COMMIT.
7. Spider sends COMMIT to srv1; srv1 acks the commit.
8. Spider sends COMMIT to srv2; srv2 acks the commit.
9. Client receives OK from Spider, no longer in-transaction.

My observations derived from the above events:

* I don't see any XA statements at all from Spider to the backend servers. Everything looks like regular commits. Huh!?
* Everything is committed serially, in a single phase, shard after shard.
* Spider talks to the backend servers one after another (steps 3 and 4 do not overlap in time), not in parallel. Why? That seems slower than necessary.

(I've double-checked with the Wireshark filters mysql.query contains "prepare" and mysql.query contains "xa" that no MySQL query packet contains either.)

How can I obtain XA 2PC as depicted in the image from [the documentation](https://mariadb.com/kb/en/library/spider-storage-engine-overview/#sharding-setup) referenced above? I need strict guarantees that ALL shards have seen and committed their part of the original client-side commit, OR that they roll back as a group. Also, I'd like to speed up the inserts by having Spider send each shard its part of the insert in parallel. Is that possible?
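One variable that may be relevant here (an assumption drawn from the settings dumps elsewhere on this page and the variable's documented purpose, not a verified fix for this trace): spider_internal_xa defaults to OFF, which would match the plain serial COMMITs seen in the capture.

```
-- Hedged sketch: enable Spider's internal XA handling, then re-capture.
-- spider_support_xa (default ON) must also be enabled for XA to apply.
SET GLOBAL spider_internal_xa = ON;
```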
gertvdijk (227 rep)
Jun 13, 2019, 09:46 PM • Last activity: Jun 13, 2019, 11:21 PM
4 votes
1 answer
826 views
MariaDB Spider Engine Out of Memory
I had an almost standard MariaDB setup with the InnoDB engine working fine, but I wanted to test out the Spider engine plus sharding/partitioning. I had a query which completed fine on MariaDB/InnoDB in 15-20 min. With the Spider engine, no matter what I do, it always runs out of memory. The setup is one Spider node with two backends, with proper partitioning on both. Where am I going wrong? What should I change in my config?

MariaDB [(none)]> show variables like '%spider%';
spider_auto_increment_mode  -1
spider_bgs_first_read  -1
spider_bgs_mode  3
spider_bgs_second_read  -1
spider_bka_engine
spider_bka_mode  1
spider_bka_table_name_type  -1
spider_block_size  16384
spider_bulk_size  -1
spider_bulk_update_mode  -1
spider_bulk_update_size  128000000
spider_casual_read  -1
spider_conn_recycle_mode  1
spider_conn_recycle_strict  0
spider_connect_mutex  OFF
spider_connect_retry_count  1000
spider_connect_retry_interval  1000
spider_connect_timeout  28000
spider_crd_bg_mode  -1
spider_crd_interval  -1
spider_crd_mode  -1
spider_crd_sync  -1
spider_crd_type  -1
spider_crd_weight  -1
spider_delete_all_rows_type  -1
spider_direct_dup_insert  -1
spider_direct_order_limit  1
spider_dry_access  OFF
spider_error_read_mode  -1
spider_error_write_mode  -1
spider_first_read  -1
spider_force_commit  1
spider_general_log  OFF
spider_init_sql_alloc_size  -1
spider_internal_limit  -1
spider_internal_offset  -1
spider_internal_optimize  -1
spider_internal_optimize_local  -1
spider_internal_sql_log_off  OFF
spider_internal_unlock  OFF
spider_internal_xa  OFF
spider_internal_xa_id_type  0
spider_internal_xa_snapshot  0
spider_local_lock_table  OFF
spider_lock_exchange  OFF
spider_log_result_error_with_sql  0
spider_log_result_errors  0
spider_low_mem_read  1
spider_max_order  32767
spider_multi_split_read  1
spider_net_read_timeout  28000
spider_net_write_timeout  -1
spider_ping_interval_at_trx_start  3600
spider_quick_mode  3
spider_quick_page_size  8096
spider_read_only_mode  -1
spider_remote_access_charset
spider_remote_autocommit  1
spider_remote_default_database
spider_remote_sql_log_off  0
spider_remote_time_zone
spider_remote_trx_isolation  -1
spider_reset_sql_alloc  -1
spider_same_server_link  OFF
spider_second_read  -1
spider_select_column_mode  -1
spider_selupd_lock_mode  -1
spider_semi_split_read  8
spider_semi_split_read_limit  8
spider_semi_table_lock  1
spider_semi_table_lock_connection  -1
spider_semi_trx  ON
spider_semi_trx_isolation  -1
spider_skip_default_condition  -1
spider_split_read  -1
spider_sts_bg_mode  -1
spider_sts_interval  -1
spider_sts_mode  -1
spider_sts_sync  -1
spider_support_xa  ON
spider_sync_autocommit  ON
spider_sync_time_zone  OFF
spider_sync_trx_isolation  ON
spider_table_init_error_interval  1
spider_udf_ct_bulk_insert_interval  -1
spider_udf_ct_bulk_insert_rows  -1
spider_udf_ds_bulk_insert_rows  -1
spider_udf_ds_table_loop_mode  -1
spider_udf_ds_use_real_table  -1
spider_udf_table_lock_mutex_count  20
spider_udf_table_mon_mutex_count  20
spider_use_all_conns_snapshot  OFF
spider_use_consistent_snapshot  OFF
spider_use_default_database  ON
spider_use_flash_logs  OFF
spider_use_handler  -1
spider_use_pushdown_udf  -1
spider_use_snapshot_with_flush_tables  0
spider_use_table_charset  -1
spider_version  3.2.21
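A few memory-related knobs visible in the dump above that are commonly experimented with for this symptom (the values are illustrative assumptions, not a verified remedy for this workload):

```
-- spider_quick_mode 3 is assumed (per its documentation) to spool result sets
-- through temporary tables; shrinking the page and bulk-buffer sizes trades
-- speed for lower per-query memory use.
SET GLOBAL spider_quick_page_size  = 1024;      -- dump above shows 8096
SET GLOBAL spider_bulk_update_size = 16000000;  -- dump above shows 128000000
```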
user3526540 (71 rep)
Jul 21, 2015, 08:40 PM • Last activity: Jul 31, 2015, 01:16 PM