
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
276 views
Optimize MariaDB for Drupal 10 to handle more users
I have problems with my Drupal 10/MariaDB 10.5/PHP 8.1 website when they are 20 users simultaniously. The website goes down and the response time is really big (up to 30seconds sometimes) because php-fpm process falls (see locust tests). enter image description here I've modified nginx config, i see improuvment but still, I can't go more than 30 users without the website becoming KO. Do you have any advice ? MariaDB is the only thing that's common between several of my project that are quite slow when a few users are using the website... alter_algorithm DEFAULT analyze_sample_percentage 100.000000 aria_block_size 8192 aria_checkpoint_interval 30 aria_checkpoint_log_activity 1048576 aria_encrypt_tables OFF aria_force_start_after_recovery_failures 0 aria_group_commit none aria_group_commit_interval 0 aria_log_dir_path /var/lib/mysql/ aria_log_file_size 1073741824 aria_log_purge_type immediate aria_max_sort_file_size 9223372036853727232 aria_page_checksum ON aria_pagecache_age_threshold 300 aria_pagecache_buffer_size 134217728 aria_pagecache_division_limit 100 aria_pagecache_file_hash_size 512 aria_recover_options BACKUP,QUICK aria_repair_threads 1 aria_sort_buffer_size 268434432 aria_stats_method nulls_unequal aria_sync_log_dir NEWFILE aria_used_for_temp_tables ON auto_increment_increment 1 auto_increment_offset 1 autocommit ON automatic_sp_privileges ON back_log 2000 basedir /usr big_tables OFF bind_address 0.0.0.0 binlog_annotate_row_events ON binlog_cache_size 32768 binlog_checksum CRC32 binlog_commit_wait_count 0 binlog_commit_wait_usec 100000 binlog_direct_non_transactional_updates OFF binlog_file_cache_size 16384 binlog_format MIXED binlog_optimize_thread_scheduling ON binlog_row_image FULL binlog_row_metadata NO_LOG binlog_stmt_cache_size 32768 bulk_insert_buffer_size 8388608 character_set_client utf8mb4 character_set_connection utf8mb4 character_set_database utf8mb4 character_set_filesystem binary character_set_results utf8mb4 character_set_server utf8mb4 character_set_system utf8 character_sets_dir /usr/share/mysql/charsets/ check_constraint_checks ON collation_connection utf8mb4_general_ci collation_database utf8mb4_general_ci collation_server utf8mb4_general_ci column_compression_threshold 100 column_compression_zlib_level 6 column_compression_zlib_strategy DEFAULT_STRATEGY column_compression_zlib_wrap OFF completion_type NO_CHAIN concurrent_insert AUTO connect_timeout 10 core_file OFF datadir /var/lib/mysql/ date_format %Y-%m-%d datetime_format %Y-%m-%d %H:%i:%s deadlock_search_depth_long 15 deadlock_search_depth_short 4 deadlock_timeout_long 50000000 deadlock_timeout_short 10000 debug_no_thread_alarm OFF default_password_lifetime 0 default_regex_flags default_storage_engine InnoDB default_tmp_storage_engine default_week_format 0 delay_key_write ON delayed_insert_limit 100 delayed_insert_timeout 300 delayed_queue_size 1000 disconnect_on_expired_password OFF div_precision_increment 4 encrypt_binlog OFF encrypt_tmp_disk_tables OFF encrypt_tmp_files OFF enforce_storage_engine eq_range_index_dive_limit 200 event_scheduler OFF expensive_subquery_limit 100 expire_logs_days 10 explicit_defaults_for_timestamp OFF extra_max_connections 1 extra_port 0 flush OFF flush_time 0 foreign_key_checks ON ft_boolean_syntax + -><()~*:""&| ft_max_word_len 84 ft_min_word_len 4 ft_query_expansion_limit 20 ft_stopword_file (built-in) general_log OFF general_log_file /var/log/mysql/mysql.log group_concat_max_len 1048576 gtid_binlog_pos gtid_binlog_state gtid_cleanup_batch_size 64 gtid_current_pos 
gtid_domain_id 0 gtid_ignore_duplicates OFF gtid_pos_auto_engines gtid_slave_pos gtid_strict_mode OFF have_compress YES have_crypt YES have_dynamic_loading YES have_geometry YES have_openssl YES have_profiling YES have_query_cache YES have_rtree_keys YES have_ssl DISABLED have_symlink YES histogram_size 254 histogram_type DOUBLE_PREC_HB host_cache_size 1103 hostname vaneau22 idle_readonly_transaction_timeout 0 idle_transaction_timeout 0 idle_write_transaction_timeout 0 ignore_builtin_innodb OFF ignore_db_dirs in_predicate_conversion_threshold 1000 init_connect init_file init_slave innodb_adaptive_flushing ON innodb_adaptive_flushing_lwm 10.000000 innodb_adaptive_hash_index OFF innodb_adaptive_hash_index_parts 8 innodb_adaptive_max_sleep_delay 0 innodb_autoextend_increment 64 innodb_autoinc_lock_mode 1 innodb_background_scrub_data_check_interval 0 innodb_background_scrub_data_compressed OFF innodb_background_scrub_data_interval 0 innodb_background_scrub_data_uncompressed OFF innodb_buf_dump_status_frequency 0 innodb_buffer_pool_chunk_size 134217728 innodb_buffer_pool_dump_at_shutdown ON innodb_buffer_pool_dump_now OFF innodb_buffer_pool_dump_pct 25 innodb_buffer_pool_filename ib_buffer_pool innodb_buffer_pool_instances 1 innodb_buffer_pool_load_abort OFF innodb_buffer_pool_load_at_startup ON innodb_buffer_pool_load_now OFF innodb_buffer_pool_size 6442450944 innodb_change_buffer_max_size 25 innodb_change_buffering none innodb_checksum_algorithm full_crc32 innodb_cmp_per_index_enabled OFF innodb_commit_concurrency 0 innodb_compression_algorithm zlib innodb_compression_default OFF innodb_compression_failure_threshold_pct 5 innodb_compression_level 6 innodb_compression_pad_pct_max 50 innodb_concurrency_tickets 0 innodb_data_file_path ibdata1:12M:autoextend innodb_data_home_dir innodb_deadlock_detect ON innodb_default_encryption_key_id 1 innodb_default_row_format dynamic innodb_defragment OFF innodb_defragment_fill_factor 0.900000 innodb_defragment_fill_factor_n_recs 20 innodb_defragment_frequency 40 innodb_defragment_n_pages 7 innodb_defragment_stats_accuracy 0 innodb_disable_sort_file_cache OFF innodb_doublewrite ON innodb_encrypt_log OFF innodb_encrypt_tables OFF innodb_encrypt_temporary_tables OFF innodb_encryption_rotate_key_age 1 innodb_encryption_rotation_iops 100 innodb_encryption_threads 0 innodb_fast_shutdown 1 innodb_fatal_semaphore_wait_threshold 600 innodb_file_format innodb_file_per_table ON innodb_fill_factor 100 innodb_flush_log_at_timeout 1 innodb_flush_log_at_trx_commit 1 innodb_flush_method fsync innodb_flush_neighbors 1 innodb_flush_sync ON innodb_flushing_avg_loops 30 innodb_force_load_corrupted OFF innodb_force_primary_key OFF innodb_force_recovery 0 innodb_ft_aux_table innodb_ft_cache_size 8000000 innodb_ft_enable_diag_print OFF innodb_ft_enable_stopword ON innodb_ft_max_token_size 84 innodb_ft_min_token_size 3 innodb_ft_num_word_optimize 2000 innodb_ft_result_cache_limit 2000000000 innodb_ft_server_stopword_table innodb_ft_sort_pll_degree 2 innodb_ft_total_cache_size 640000000 innodb_ft_user_stopword_table innodb_immediate_scrub_data_uncompressed OFF innodb_instant_alter_column_allowed add_drop_reorder innodb_io_capacity 200 innodb_io_capacity_max 2000 innodb_large_prefix innodb_lock_schedule_algorithm fcfs innodb_lock_wait_timeout 50 innodb_log_buffer_size 16777216 innodb_log_checksums ON innodb_log_compressed_pages ON innodb_log_file_size 268435456 innodb_log_files_in_group 1 innodb_log_group_home_dir ./ innodb_log_optimize_ddl OFF innodb_log_write_ahead_size 8192 
innodb_lru_flush_size 32 innodb_lru_scan_depth 1536 innodb_max_dirty_pages_pct 90.000000 innodb_max_dirty_pages_pct_lwm 0.000000 innodb_max_purge_lag 0 innodb_max_purge_lag_delay 0 innodb_max_purge_lag_wait 4294967295 innodb_max_undo_log_size 10485760 innodb_monitor_disable innodb_monitor_enable innodb_monitor_reset innodb_monitor_reset_all innodb_old_blocks_pct 37 innodb_old_blocks_time 1000 innodb_online_alter_log_max_size 134217728 innodb_open_files 2000 innodb_optimize_fulltext_only OFF innodb_page_cleaners 1 innodb_page_size 16384 innodb_prefix_index_cluster_optimization OFF innodb_print_all_deadlocks OFF innodb_purge_batch_size 300 innodb_purge_rseg_truncate_frequency 128 innodb_purge_threads 4 innodb_random_read_ahead OFF innodb_read_ahead_threshold 56 innodb_read_io_threads 4 innodb_read_only OFF innodb_replication_delay 0 innodb_rollback_on_timeout OFF innodb_scrub_log OFF innodb_scrub_log_speed 256 innodb_sort_buffer_size 1048576 innodb_spin_wait_delay 4 innodb_stats_auto_recalc ON innodb_stats_include_delete_marked OFF innodb_stats_method nulls_equal innodb_stats_modified_counter 0 innodb_stats_on_metadata OFF innodb_stats_persistent ON innodb_stats_persistent_sample_pages 20 innodb_stats_traditional ON innodb_stats_transient_sample_pages 8 innodb_status_output OFF innodb_status_output_locks OFF innodb_strict_mode ON innodb_sync_array_size 1 innodb_sync_spin_loops 30 innodb_table_locks ON innodb_temp_data_file_path ibtmp1:12M:autoextend innodb_thread_concurrency 0 innodb_thread_sleep_delay 0 innodb_tmpdir innodb_undo_directory ./ innodb_undo_log_truncate OFF innodb_undo_logs 128 innodb_undo_tablespaces 0 innodb_use_atomic_writes ON innodb_use_native_aio ON innodb_version 10.5.21 innodb_write_io_threads 4 interactive_timeout 28800 join_buffer_size 262144 join_buffer_space_limit 2097152 join_cache_level 2 keep_files_on_create OFF key_buffer_size 134217728 key_cache_age_threshold 300 key_cache_block_size 1024 key_cache_division_limit 100 key_cache_file_hash_size 512 key_cache_segments 0 large_files_support ON large_page_size 0 large_pages OFF lc_messages en_US lc_messages_dir /usr/share/mysql lc_time_names en_US license GPL local_infile ON lock_wait_timeout 86400 locked_in_memory OFF log_bin OFF log_bin_basename log_bin_compress OFF log_bin_compress_min_len 256 log_bin_index log_bin_trust_function_creators OFF log_disabled_statements sp log_error /var/log/mysql/mysql_error.log log_output FILE log_queries_not_using_indexes OFF log_slave_updates OFF log_slow_admin_statements ON log_slow_disabled_statements sp log_slow_filter admin,filesort,filesort_on_disk,filesort_priority_queue,full_join,full_scan,query_cache,query_cache_miss,tmp_table,tmp_table_on_disk log_slow_rate_limit 1 log_slow_slave_statements ON log_slow_verbosity log_tc_size 24576 log_warnings 2 long_query_time 10.000000 low_priority_updates OFF lower_case_file_system OFF lower_case_table_names 0 master_verify_checksum OFF max_allowed_packet 16777216 max_binlog_cache_size 18446744073709547520 max_binlog_size 1073741824 max_binlog_stmt_cache_size 18446744073709547520 max_connect_errors 100 max_connections 10000 max_delayed_threads 20 max_digest_length 1024 max_error_count 64 max_heap_table_size 16777216 max_insert_delayed_threads 20 max_join_size 18446744073709551615 max_length_for_sort_data 1024 max_password_errors 4294967295 max_prepared_stmt_count 16382 max_recursive_iterations 4294967295 max_relay_log_size 1073741824 max_rowid_filter_size 131072 max_seeks_for_key 4294967295 max_session_mem_used 9223372036854775807 
max_sort_length 1024 max_sp_recursion_depth 0 max_statement_time 0.000000 max_tmp_tables 32 max_user_connections 0 max_write_lock_count 4294967295 metadata_locks_cache_size 1024 metadata_locks_hash_instances 8 min_examined_row_limit 0 mrr_buffer_size 262144 myisam_block_size 1024 myisam_data_pointer_size 6 myisam_max_sort_file_size 9223372036853727232 myisam_mmap_size 18446744073709551615 myisam_recover_options BACKUP,QUICK myisam_repair_threads 1 myisam_sort_buffer_size 134216704 myisam_stats_method NULLS_UNEQUAL myisam_use_mmap OFF mysql56_temporal_format ON net_buffer_length 16384 net_read_timeout 30 net_retry_count 10 net_write_timeout 60 old OFF old_alter_table DEFAULT old_mode old_passwords OFF open_files_limit 32768 optimizer_max_sel_arg_weight 32000 optimizer_prune_level 1 optimizer_search_depth 62 optimizer_selectivity_sampling_limit 100 optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=on,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on,condition_pushdown_for_subquery=on,rowid_filter=on,condition_pushdown_from_having=on,not_null_range_scan=off optimizer_trace enabled=off optimizer_trace_max_mem_size 1048576 optimizer_use_condition_selectivity 4 performance_schema OFF performance_schema_accounts_size -1 performance_schema_digests_size -1 performance_schema_events_stages_history_long_size -1 performance_schema_events_stages_history_size -1 performance_schema_events_statements_history_long_size -1 performance_schema_events_statements_history_size -1 performance_schema_events_transactions_history_long_size -1 performance_schema_events_transactions_history_size -1 performance_schema_events_waits_history_long_size -1 performance_schema_events_waits_history_size -1 performance_schema_hosts_size -1 performance_schema_max_cond_classes 90 performance_schema_max_cond_instances -1 performance_schema_max_digest_length 1024 performance_schema_max_file_classes 80 performance_schema_max_file_handles 32768 performance_schema_max_file_instances -1 performance_schema_max_index_stat -1 performance_schema_max_memory_classes 320 performance_schema_max_metadata_locks -1 performance_schema_max_mutex_classes 210 performance_schema_max_mutex_instances -1 performance_schema_max_prepared_statements_instances -1 performance_schema_max_program_instances -1 performance_schema_max_rwlock_classes 50 performance_schema_max_rwlock_instances -1 performance_schema_max_socket_classes 10 performance_schema_max_socket_instances -1 performance_schema_max_sql_text_length 1024 performance_schema_max_stage_classes 160 performance_schema_max_statement_classes 222 performance_schema_max_statement_stack 10 performance_schema_max_table_handles -1 performance_schema_max_table_instances -1 performance_schema_max_table_lock_stat -1 performance_schema_max_thread_classes 50 performance_schema_max_thread_instances -1 performance_schema_session_connect_attrs_size -1 performance_schema_setup_actors_size -1 
performance_schema_setup_objects_size -1 performance_schema_users_size -1 pid_file /run/mysqld/mysqld.pid plugin_dir /usr/lib/mysql/plugin/ plugin_maturity gamma port 3306 preload_buffer_size 32768 profiling OFF profiling_history_size 15 progress_report_time 5 protocol_version 10 proxy_protocol_networks query_alloc_block_size 16384 query_cache_limit 1048576 query_cache_min_res_unit 4096 query_cache_size 1048576 query_cache_strip_comments OFF query_cache_type OFF query_cache_wlock_invalidate OFF query_prealloc_size 24576 range_alloc_block_size 4096 read_binlog_speed_limit 0 read_buffer_size 131072 read_only OFF read_rnd_buffer_size 262144 relay_log relay_log_basename relay_log_index relay_log_info_file relay-log.info relay_log_purge ON relay_log_recovery OFF relay_log_space_limit 0 replicate_annotate_row_events ON replicate_do_db replicate_do_table replicate_events_marked_for_skip REPLICATE replicate_ignore_db replicate_ignore_table replicate_wild_do_table replicate_wild_ignore_table report_host report_password report_port 3306 report_user require_secure_transport OFF rowid_merge_buff_size 8388608 rpl_semi_sync_master_enabled OFF rpl_semi_sync_master_timeout 10000 rpl_semi_sync_master_trace_level 32 rpl_semi_sync_master_wait_no_slave ON rpl_semi_sync_master_wait_point AFTER_COMMIT rpl_semi_sync_slave_delay_master OFF rpl_semi_sync_slave_enabled OFF rpl_semi_sync_slave_kill_conn_timeout 5 rpl_semi_sync_slave_trace_level 32 secure_auth ON secure_file_priv secure_timestamp NO server_id 1 session_track_schema ON session_track_state_change OFF session_track_system_variables autocommit,character_set_client,character_set_connection,character_set_results,time_zone session_track_transaction_info OFF skip_external_locking ON skip_name_resolve OFF skip_networking OFF skip_show_database OFF slave_compressed_protocol OFF slave_ddl_exec_mode IDEMPOTENT slave_domain_parallel_threads 0 slave_exec_mode STRICT slave_load_tmpdir /tmp slave_max_allowed_packet 1073741824 slave_net_timeout 60 slave_parallel_max_queued 131072 slave_parallel_mode optimistic slave_parallel_threads 0 slave_parallel_workers 0 slave_run_triggers_for_rbr NO slave_skip_errors OFF slave_sql_verify_checksum ON slave_transaction_retries 10 slave_transaction_retry_errors 1158,1159,1160,1161,1205,1213,1429,2013,12701 slave_transaction_retry_interval 0 slave_type_conversions slow_launch_time 2 slow_query_log ON slow_query_log_file vaneau22-slow.log socket /run/mysqld/mysqld.sock sort_buffer_size 2097152 sql_auto_is_null OFF sql_big_selects ON sql_buffer_result OFF sql_if_exists OFF sql_log_bin ON sql_log_off OFF sql_mode STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION sql_notes ON sql_quote_show_create ON sql_safe_updates OFF sql_select_limit 18446744073709551615 sql_slave_skip_counter 0 sql_warnings OFF ssl_ca ssl_capath ssl_cert ssl_cipher ssl_crl ssl_crlpath ssl_key standard_compliant_cte ON storage_engine InnoDB stored_program_cache 256 strict_password_validation ON sync_binlog 0 sync_frm ON sync_master_info 10000 sync_relay_log 10000 sync_relay_log_info 10000 system_time_zone CET system_versioning_alter_history ERROR system_versioning_asof DEFAULT table_definition_cache 400 table_open_cache 2000 table_open_cache_instances 5 tcp_keepalive_interval 0 tcp_keepalive_probes 0 tcp_keepalive_time 0 tcp_nodelay ON thread_cache_size 256 thread_handling one-thread-per-connection thread_pool_dedicated_listener OFF thread_pool_exact_stats OFF thread_pool_idle_timeout 60 thread_pool_max_threads 65536 
thread_pool_oversubscribe 3 thread_pool_prio_kickup_timer 1000 thread_pool_priority auto thread_pool_size 2 thread_pool_stall_limit 500 thread_stack 299008 time_format %H:%i:%s time_zone SYSTEM tls_version TLSv1.1,TLSv1.2,TLSv1.3 tmp_disk_table_size 18446744073709551615 tmp_memory_table_size 16777216 tmp_table_size 16777216 tmpdir /tmp transaction_alloc_block_size 8192 transaction_prealloc_size 4096 tx_isolation REPEATABLE-READ tx_read_only OFF unique_checks ON updatable_views_with_limit YES use_stat_tables PREFERABLY_FOR_QUERIES userstat OFF version 10.5.21-MariaDB-0+deb11u1-log version_comment Debian 11 version_compile_machine x86_64 version_compile_os debian-linux-gnu version_malloc_library system version_source_revision bed70468ea08c2820647f5e3ac006a9ff88144ac version_ssl_library OpenSSL 1.1.1w 11 Sep 2023 wait_timeout 28800 wsrep_osu_method TOI wsrep_sr_store table wsrep_auto_increment_control ON wsrep_causal_reads OFF wsrep_certification_rules strict wsrep_certify_nonpk ON wsrep_cluster_address wsrep_cluster_name my_wsrep_cluster wsrep_convert_lock_to_trx OFF wsrep_data_home_dir /var/lib/mysql/ wsrep_dbug_option wsrep_debug NONE wsrep_desync OFF wsrep_dirty_reads OFF wsrep_drupal_282555_workaround OFF wsrep_forced_binlog_format NONE wsrep_gtid_domain_id 0 wsrep_gtid_mode OFF wsrep_ignore_apply_errors 7 wsrep_load_data_splitting OFF wsrep_log_conflicts OFF wsrep_max_ws_rows 0 wsrep_max_ws_size 2147483647 wsrep_mysql_replication_bundle 0 wsrep_node_address wsrep_node_incoming_address AUTO wsrep_node_name vaneau22 wsrep_notify_cmd wsrep_on OFF wsrep_patch_version wsrep_26.22 wsrep_provider none wsrep_provider_options wsrep_recover OFF wsrep_reject_queries NONE wsrep_replicate_myisam OFF wsrep_restart_slave OFF wsrep_retry_autocommit 1 wsrep_slave_fk_checks ON wsrep_slave_uk_checks OFF wsrep_slave_threads 1 wsrep_sst_auth wsrep_sst_donor wsrep_sst_donor_rejects_queries OFF wsrep_sst_method rsync wsrep_sst_receive_address AUTO wsrep_start_position 00000000-0000-0000-0000-000000000000:-1 wsrep_strict_ddl OFF wsrep_sync_wait 0 wsrep_trx_fragment_size 0 wsrep_trx_fragment_unit bytes
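For readers diagnosing a similar setup: the dump above shows max_connections at 10000, the slow query log enabled with long_query_time 10, and a 6 GB InnoDB buffer pool. A minimal diagnostic sketch for checking, while a locust run is in progress, whether MariaDB is actually saturated or merely waiting (these are standard MariaDB/MySQL status counters; this is not a tuning recommendation):

-- sample these repeatedly during the load test
SHOW GLOBAL STATUS WHERE Variable_name IN
  ('Threads_running', 'Threads_connected', 'Max_used_connections',
   'Slow_queries', 'Created_tmp_disk_tables', 'Innodb_row_lock_waits');

-- and look at what is executing right now
SHOW FULL PROCESSLIST;

If Threads_running stays low and no slow queries accumulate while php-fpm is already failing, the bottleneck is more likely in PHP-FPM/nginx than in MariaDB.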
mt.i.1 (11 rep)
Feb 21, 2024, 08:56 AM • Last activity: Feb 22, 2024, 02:55 AM
1 vote
1 answer
161 views
Join with certain tables causes execution time to skyrocket
Database application: Ver 8.0.30 for Linux on x86_64 (MySQL Community Server - GPL). Server: Ubuntu 18.04.6, 32 GB RAM, 8 CPU cores. The underlying application framework is Drupal, which uses a query-builder UI (the Views module) to generate queries for reports. Please find the non-performant query below. Without the join to the flagging table, the query executes in a few seconds. I have improved the query based on suggestions by @Rick James and @mustaccio, but the query time still exceeds 4 minutes when joined with the flagging table.
EXPLAIN SELECT 1 AS expression
FROM
node_field_data node_field_data
LEFT JOIN flagging flagging_node_field_data ON node_field_data.nid = flagging_node_field_data.entity_id AND flagging_node_field_data.flag_id = 'verify_blood_group'
LEFT JOIN node__field_date_of_collection node__field_date_of_collection ON node_field_data.nid = node__field_date_of_collection.entity_id AND node__field_date_of_collection.deleted = '0'
LEFT JOIN node__og_audience node__og_audience ON node_field_data.nid = node__og_audience.entity_id AND (node__og_audience.deleted = '0' AND node__og_audience.langcode = node_field_data.langcode)
WHERE ((node__og_audience.og_audience_target_id IN('30', '229', '5026', '60887', '198081', '350754', '519498', '519499', '566913', '568976', '571016', '642633', '739096', '769874', '770003', '800588', '1051756', '1056092', '1101838', '1465616', '1730929', '2045068', '2269366', '3535017', '1836317', '3387310', '9900000'))) AND ((node_field_data.status = '1') AND (node_field_data.type IN ('donation_record')) AND (node__field_date_of_collection.field_date_of_collection_value BETWEEN '2022-08-27' AND ('2022-09-02' + INTERVAL 1 DAY)));
Please see the Query Explain below.
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: node__field_date_of_collection
   partitions: NULL
         type: range
possible_keys: PRIMARY,field_date_of_collection_value
          key: field_date_of_collection_value
      key_len: 82
          ref: NULL
         rows: 22808
     filtered: 10.00
        Extra: Using where; Using index
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: node_field_data
   partitions: NULL
         type: ref
possible_keys: PRIMARY,node__id__default_langcode__langcode,node_field__type__target_id,node__status_type
          key: PRIMARY
      key_len: 4
          ref: ebloodbanking8.node__field_date_of_collection.entity_id
         rows: 1
     filtered: 5.00
        Extra: Using where
*************************** 3. row ***************************
           id: 1
  select_type: SIMPLE
        table: node__og_audience
   partitions: NULL
         type: ref
possible_keys: PRIMARY,og_audience_target_id,og_audience_entityid_deleted_langcode_value
          key: PRIMARY
      key_len: 5
          ref: ebloodbanking8.node__field_date_of_collection.entity_id,const
         rows: 1
     filtered: 10.00
        Extra: Using where
*************************** 4. row ***************************
           id: 1
  select_type: SIMPLE
        table: flagging_node_field_data
   partitions: NULL
         type: ref
possible_keys: flagging_fid_etid,flagging_fid_uid_etid
          key: flagging_fid_etid
      key_len: 34
          ref: const
         rows: 388428
     filtered: 100.00
        Extra: Using where; Using index
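Reading row 4 above: the flagging join uses only the flag_id prefix of flagging_fid_etid (ref: const, key_len 34) and estimates 388,428 rows, which is consistent with the entity_id comparison being unusable for index lookups because flagging.entity_id is a varchar while node_field_data.nid is an int (see the table definitions below). A hedged way to confirm the implicit cast is to EXPLAIN just the two-table join and inspect the warnings/JSON plan (table and column names are taken from the question; the exact output wording varies by version):

EXPLAIN FORMAT=JSON
SELECT 1
FROM node_field_data AS n
JOIN flagging AS f
  ON n.nid = f.entity_id
 AND f.flag_id = 'verify_blood_group';
-- then: SHOW WARNINGS;  -- look for any "type or collation conversion" note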
Please find the flagging table definition (SHOW CREATE TABLE output):
| flagging | CREATE TABLE flagging (
  id int unsigned NOT NULL AUTO_INCREMENT,
  flag_id varchar(32) CHARACTER SET ascii NOT NULL COMMENT 'The ID of the target entity.',
  uuid varchar(128) CHARACTER SET ascii NOT NULL,
  entity_type varchar(255) COLLATE utf8mb4_general_ci DEFAULT NULL,
  entity_id varchar(255) COLLATE utf8mb4_general_ci DEFAULT NULL,
  global tinyint DEFAULT NULL,
  uid int unsigned NOT NULL COMMENT 'The ID of the target entity.',
  session_id varchar(255) COLLATE utf8mb4_general_ci DEFAULT NULL,
  created int DEFAULT NULL,
  PRIMARY KEY (id),
  KEY flagging_fid_etid (flag_id,entity_id),
  KEY flagging_fid_uid_etid (flag_id,uid,entity_id),
  KEY flagging_type_fid_etid (entity_type,flag_id,entity_id),
  KEY flagging_type_fid_uid_etid (entity_type,flag_id,uid,entity_id)
) ENGINE=InnoDB AUTO_INCREMENT=2135664 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci COMMENT='The base table for flagging entities.' |
Show create for the table node__field_date_of_collection
| node__field_date_of_collection | CREATE TABLE node__field_date_of_collection (
  bundle varchar(128) CHARACTER SET ascii NOT NULL DEFAULT '' COMMENT 'The field instance bundle to which this row belongs, used when deleting a field instance',
  deleted tinyint NOT NULL DEFAULT '0' COMMENT 'A boolean indicating whether this data item has been deleted',
  entity_id int unsigned NOT NULL COMMENT 'The entity id this data is attached to',
  revision_id int unsigned NOT NULL COMMENT 'The entity revision id this data is attached to',
  langcode varchar(32) CHARACTER SET ascii NOT NULL DEFAULT '' COMMENT 'The language code for this data item.',
  delta int unsigned NOT NULL COMMENT 'The sequence number for this data item, used for multi-value fields',
  field_date_of_collection_value varchar(20) COLLATE utf8mb4_general_ci NOT NULL COMMENT 'The date value.',
  PRIMARY KEY (entity_id,deleted,delta,langcode),
  KEY bundle (bundle),
  KEY revision_id (revision_id),
  KEY field_date_of_collection_value (field_date_of_collection_value)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci COMMENT='Data storage for node field field_date_of_collection.'
Show create for the table node__og_audience
| node__og_audience | CREATE TABLE node__og_audience (
  bundle varchar(128) CHARACTER SET ascii NOT NULL DEFAULT '' COMMENT 'The field instance bundle to which this row belongs, used when deleting a field instance',
  deleted tinyint NOT NULL DEFAULT '0' COMMENT 'A boolean indicating whether this data item has been deleted',
  entity_id int unsigned NOT NULL COMMENT 'The entity id this data is attached to',
  revision_id int unsigned NOT NULL COMMENT 'The entity revision id this data is attached to',
  langcode varchar(32) CHARACTER SET ascii NOT NULL DEFAULT '' COMMENT 'The language code for this data item.',
  delta int unsigned NOT NULL COMMENT 'The sequence number for this data item, used for multi-value fields',
  og_audience_target_id int unsigned NOT NULL COMMENT 'The ID of the target entity.',
  PRIMARY KEY (entity_id,deleted,delta,langcode),
  KEY bundle (bundle),
  KEY revision_id (revision_id),
  KEY og_audience_target_id (og_audience_target_id),
  KEY og_audience_entityid_deleted_langcode_value (entity_id,deleted,langcode,og_audience_target_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci COMMENT='Data storage for node field og_audience.'
Show create for the table node_field_data
| node_field_data | CREATE TABLE node_field_data (
  nid int unsigned NOT NULL,
  vid int unsigned NOT NULL,
  type varchar(32) CHARACTER SET ascii NOT NULL COMMENT 'The ID of the target entity.',
  langcode varchar(12) CHARACTER SET ascii NOT NULL,
  status tinyint NOT NULL,
  uid int unsigned NOT NULL COMMENT 'The ID of the target entity.',
  title varchar(255) COLLATE utf8mb4_general_ci NOT NULL,
  created int NOT NULL,
  changed int NOT NULL,
  promote tinyint NOT NULL,
  sticky tinyint NOT NULL,
  default_langcode tinyint NOT NULL,
  revision_translation_affected tinyint DEFAULT NULL,
  PRIMARY KEY (nid,langcode),
  KEY node__id__default_langcode__langcode (nid,default_langcode,langcode),
  KEY node__vid (vid),
  KEY node_field__type__target_id (type),
  KEY node_field__uid__target_id (uid),
  KEY node_field__created (created),
  KEY node_field__changed (changed),
  KEY node__status_type (status,type,nid),
  KEY node__frontpage (promote,status,sticky,created),
  KEY node__title_type (title(191),type(4))
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci COMMENT='The data table for node entities.'

Please find some of the relevant database variable settings.
[mysqld]
default-storage-engine=InnoDB
join_buffer_size = 8M
read_buffer_size = 4M
sort_buffer_size = 8M
thread_cache_size = 8
interactive_timeout = 60
wait_timeout = 60 # Time in seconds
connect_timeout = 10
max_connect_errors = 10000
tmp_table_size = 32M
max_heap_table_size = 32M

# InnoDB Settings
innodb_buffer_pool_size=18G
innodb_flush_log_at_trx_commit=2
#Set the log file size to about 25% of the buffer pool size
innodb_log_file_size=6G
innodb_log_buffer_size=64M
innodb_flush_method=O_DIRECT
innodb_buffer_pool_instances=8
innodb_stats_on_metadata=0
innodb_lock_wait_timeout=100
innodb_write_io_threads=8
Please share what changes can be made to make this query more performant. Indexes have already been added to the flagging table. Please also share monitoring tools that can help us understand the problem better, and database global variable changes that could make this query execute faster. Thanks. **Note**: As suggested by Rick James, changing the data type of the **entity_id** column in the flagging table from varchar to unsigned int resolved the join performance problem. Thanks.
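For anyone landing here with the same symptom, a sketch of the data-type change credited in the note above. It assumes every existing entity_id value is numeric and fits in INT UNSIGNED (verify that first), and the ALTER rewrites the table and rebuilds the indexes on (flag_id, entity_id), so run it in a maintenance window:

-- align flagging.entity_id with node_field_data.nid so the join no longer needs an implicit cast
ALTER TABLE flagging
  MODIFY entity_id INT UNSIGNED DEFAULT NULL;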
Amit Sedai (23 rep)
Aug 30, 2022, 01:31 PM • Last activity: Sep 8, 2022, 07:39 AM
0 votes
1 answer
72 views
Slow SELECT in Drupal CMS
I have a slow query against a table with 7,000 rows. After a long time spent trying to optimize the MySQL configuration, nothing has changed. SHOW PROCESSLIST shows the query stuck in "Sending data" after 28 seconds:
| 38 | root | localhost | nrj | Query | 28 | Sending data | ...
The query:
SELECT node.nid AS nid,
       workflow_node_current.stamp AS workflow_node_current_stamp,
       'node' AS field_data_field_rh_username_node_entity_type,
       'node' AS field_data_field_text_no1_node_entity_type,
       'node' AS field_data_field_membership_number_node_entity_type,
       'node' AS field_data_field_text_1_255_n12_node_entity_type
FROM node node
LEFT JOIN workflow_node_history workflow_node_current
  ON (SELECT max(hid) FROM workflow_node_history WHERE nid = node.nid AND sid != old_sid) = workflow_node_current.hid
WHERE (node.status = '1')
  AND (node.type IN ('membership_request'))
  AND (workflow_node_current.sid = '85')
ORDER BY workflow_node_current_stamp DESC
LIMIT 10 OFFSET 0
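One common reformulation, sketched blind (it assumes hid is the primary key of workflow_node_history and has not been tested against this schema): materialize the latest hid per node once in a derived table instead of running the correlated MAX() subquery for every joined row. The WHERE clause on workflow_node_current.sid already turns the LEFT JOIN into an inner join, so the rewrite keeps inner joins; the constant 'node' AS ... columns are omitted for brevity.

SELECT node.nid AS nid, w.stamp AS workflow_node_current_stamp
FROM node
JOIN (SELECT nid, MAX(hid) AS hid
      FROM workflow_node_history
      WHERE sid != old_sid
      GROUP BY nid) AS latest ON latest.nid = node.nid
JOIN workflow_node_history AS w ON w.hid = latest.hid
WHERE node.status = '1'
  AND node.type IN ('membership_request')
  AND w.sid = '85'
ORDER BY w.stamp DESC
LIMIT 10 OFFSET 0;

A covering index on workflow_node_history such as (nid, sid, old_sid, hid) (hypothetical, add if it does not exist) would let the derived table be computed from the index alone.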
alireza m (1 rep)
Sep 7, 2015, 10:54 AM • Last activity: Dec 29, 2019, 01:05 AM
-1 votes
1 answer
167 views
Extremely slow requests on small innodb tables
The situation is the following. Two InnoDB tables, one has 2 records, another one - 281 - small ones, insert and update requests may last up to two (!) minutes. Dropped db, created again. The situation repeats in a couple of hours. The tables are history and semaphore from Drupal Commerce (Drupal 7). Innodb settings: innodb_fast_shutdown = 0 innodb_buffer_pool_instances = 4 innodb_buffer_pool_size = 1280M innodb_additional_mem_pool_size = 20M innodb_log_file_size = 256M innodb_log_buffer_size = 4M innodb_flush_log_at_trx_commit = 0 innodb_additional_mem_pool_size = 4M innodb_lock_wait_timeout = 180 Tables: CREATE TABLE history ( uid int(11) NOT NULL DEFAULT '0' COMMENT 'The users.uid that read the node nid.', nid int(11) NOT NULL DEFAULT '0' COMMENT 'The node.nid that was read.', timestamp int(11) NOT NULL DEFAULT '0' COMMENT 'The Unix timestamp at which the read occurred.', PRIMARY KEY (uid,nid), KEY nid (nid) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='A record of which users have read which...'; CREATE TABLE semaphore ( name varchar(255) NOT NULL DEFAULT '' COMMENT 'Primary Key: Unique name.', value varchar(255) NOT NULL DEFAULT '' COMMENT 'A value for the semaphore.', expire double NOT NULL COMMENT 'A Unix timestamp with microseconds indicating when the semaphore should expire.', PRIMARY KEY (name), KEY value (value), KEY expire (expire) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Table for holding semaphores, locks, flags, etc. that...'; Upd. show processlist; +------+---------------+---------------------+----------+---------+------+----------+--------------------------------------------------------------------------------------------------+----------+ | Id | User | Host | db | Command | Time | State | Info | Progress | +------+---------------+---------------------+----------+---------+------+----------+--------------------------------------------------------------------------------------------------+----------+ | 0.000 | | 7996 | myuser | localhost | mydb | Query | 129 | updating | DELETE FROM semaphore WHERE (value = '584023605761bad5c4a925.89704284') | 0.000 | | 8003 | myuser | localhost | mydb | Query | 112 | updating | UPDATE history SET timestamp='1466022629' WHERE ( (uid = '1') AND (nid = '40') ) | 0.000 | | 0.000 | | 8018 | myuser | localhost | mydb | Query | 53 | updating | DELETE FROM semaphore WHERE (value = '6705735415761bb221d33c0.93445495') | 0.000 | +------+---------------+---------------------+----------+---------+------+----------+--------------------------------------------------------------------------------------------------+----------+ Upd2. explain UPDATE history SET timestamp='1466022629' WHERE ( (uid = '1') AND (nid = '40') ); +------+-------------+-------------------------+-------+---------------+---------+---------+------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +------+-------------+-------------------------+-------+---------------+---------+---------+------+------+-------------+ | 1 | SIMPLE | history | range | PRIMARY,nid | PRIMARY | 8 | NULL | 1 | Using where | +------+-------------+-------------------------+-------+---------------+---------+---------+------+------+-------------+ 1 row in set (0.01 sec) Engine Inndb Status
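The processlist above (single-row UPDATE/DELETE statements stuck in the "updating" state for 50-130 seconds on tiny tables) looks more like lock waits than slow data access. A hedged diagnostic sketch for MySQL/MariaDB 5.x, to be run while the statements are hanging, showing which transaction holds the blocking locks (in MySQL 8.0 the lock-wait view moved to performance_schema.data_lock_waits):

SELECT r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_started         AS blocking_since,
       b.trx_query           AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;

If the blocking transaction shows no current query, it is typically an application connection that opened a transaction and never committed.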
Windy Wanderer (1 rep)
Jun 15, 2016, 12:14 PM • Last activity: Nov 5, 2019, 08:01 PM
2 votes
1 answer
7130 views
mysqldump: Got error: 1017: Can't find file: 'drupal_install_test' (errno: 2) when using LOCK TABLES
I'm trying to backup a drupal site database but I'm having some issues. When I ran the following command: mysqldump -uroot -p drupaSite > drupaSite.sql I get the following error: mysqldump: Got error: 1017: Can't find file: 'drupal_install_test' (errno: 2) when using LOCK TABLES if I tried to query the table I get the same error: mysql> select * from drupal_install_test; ERROR 1017 (HY000): Can't find file: 'drupal_install_test' (errno: 2) I check the status of the table in the database: show table status from drupaSite; I get the following output: | drupal_install_test | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Can't find file: 'drupal_install_test' (errno: 2) | I ran the following query: SELECT * FROM information_schema.tables WHERE table_name='drupal_install_test'\G I get the following output: *************************** 1. row *************************** TABLE_CATALOG: NULL TABLE_SCHEMA: drupaSite TABLE_NAME: drupal_install_test TABLE_TYPE: BASE TABLE ENGINE: NULL VERSION: NULL ROW_FORMAT: NULL TABLE_ROWS: NULL AVG_ROW_LENGTH: NULL DATA_LENGTH: NULL MAX_DATA_LENGTH: NULL INDEX_LENGTH: NULL DATA_FREE: NULL AUTO_INCREMENT: NULL CREATE_TIME: NULL UPDATE_TIME: NULL CHECK_TIME: NULL TABLE_COLLATION: NULL CHECKSUM: NULL CREATE_OPTIONS: NULL TABLE_COMMENT: Can't find file: 'drupal_install_test' (errno: 2) I ran the following query: CHECKSUM TABLE drupal_install_test; I got the following output: +-------------------------------+----------+ | Table | Checksum | +-------------------------------+----------+ | drupaSite.drupal_install_test | NULL | +-------------------------------+----------+ 1 row in set, 1 warning (4.34 sec) I ran the following query: CHECK TABLE drupal_install_test; and I get the following output: +-------------------------------+-------+----------+---------------------------------------------------+ | Table | Op | Msg_type | Msg_text | +-------------------------------+-------+----------+---------------------------------------------------+ | drupaSite.drupal_install_test | check | Error | Can't find file: 'drupal_install_test' (errno: 2) | | drupaSite.drupal_install_test | check | error | Corrupt | +-------------------------------+-------+----------+---------------------------------------------------+ 2 rows in set (0.02 sec) My question for you guys is how can I fix this in a way I can backup the database and restore it in another server. The site is working just fine I need to migrate the server. I would really appreciate your help guys.
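If the immediate goal is just a dump that can be restored on the new server, one hedged workaround is to skip the broken table. drupal_install_test is normally a throwaway table the Drupal installer creates (and drops) only to verify database access, so excluding it should be harmless, but verify that for your site before relying on it:

mysqldump -uroot -p --ignore-table=drupaSite.drupal_install_test drupaSite > drupaSite.sql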
HelenaM
Apr 23, 2013, 04:52 PM • Last activity: Mar 29, 2017, 08:40 AM
1 vote
0 answers
64 views
How do I deduplicate Drupal taxonomies on a PostgreSQL database?
# Background I've been tasked with deduplicating a Drupal 7 install. There are about 15 thousand articles published, which have been imported directly into the database using some custom in-house migration script, rather than via the usual Drupal workflow. Every article has a certain taxonomy item associated with it. There are around 315 different possible values for this taxonomy, and it is *always* present on all articles, having been inserted using the migration script. # Issues However, it appers that some data duplication happened during the initial import. There are 15020 rows in the node table, and 11751 rows in the taxonomy_term_data table when filtered with the corresponding vid for this taxonomy (2). I ran the following SQL query to try to find repetitions SELECT name, count(name) as repetitions FROM taxonomy_term_data WHERE vid = 2 GROUP BY name ORDER BY repetitions desc, name; The result shows 335 rows, which is, coincidentally, the expected amount of unique terms. The "repetitions" column indicates nearly every term shows up dozens or even hundreds of times. # Database structure The table node contains the articles, the table taxonomy_term_data holds the taxonomies and the table taxonomy_index holds the relationship between each. ## node table nid serial NOT NULL, -- The primary identifier for a node. vid bigint, -- The current node_revision.vid version identifier. type character varying(32) NOT NULL DEFAULT ''::character varying, -- The node_type.type of this node. language character varying(12) NOT NULL DEFAULT ''::character varying, -- The languages.language of this node. title character varying(255) NOT NULL DEFAULT ''::character varying, -- The title of this node, always treated as non-markup plain text. uid integer NOT NULL DEFAULT 0, -- The users.uid that owns this node; initially, this is the user that created it. status integer NOT NULL DEFAULT 1, -- Boolean indicating whether the node is published (visible to non-administrators). created integer NOT NULL DEFAULT 0, -- The Unix timestamp when the node was created. changed integer NOT NULL DEFAULT 0, -- The Unix timestamp when the node was most recently saved. comment integer NOT NULL DEFAULT 0, -- Whether comments are allowed on this node: 0 = no, 1 = closed (read only), 2 = open (read/write). promote integer NOT NULL DEFAULT 0, -- Boolean indicating whether the node should be displayed on the front page. sticky integer NOT NULL DEFAULT 0, -- Boolean indicating whether the node should be displayed at the top of lists in which it appears. tnid bigint NOT NULL DEFAULT 0, -- The translation set id for this node, which equals the node id of the source post in each set. translate integer NOT NULL DEFAULT 0, -- A boolean indicating whether this translation page needs to be updated. CONSTRAINT node_pkey PRIMARY KEY (nid), CONSTRAINT node_vid_key UNIQUE (vid), CONSTRAINT node_nid_check CHECK (nid >= 0), CONSTRAINT node_tnid_check CHECK (tnid >= 0), CONSTRAINT node_vid_check CHECK (vid >= 0) ## taxonomy_term_data table tid serial NOT NULL, -- Primary Key: Unique term ID. vid bigint NOT NULL DEFAULT 0, -- The taxonomy_vocabulary.vid of the vocabulary to which the term is assigned. name character varying(255) NOT NULL DEFAULT ''::character varying, -- The term name. description text, -- A description of the term. format character varying(255), -- The filter_format.format of the description. weight integer NOT NULL DEFAULT 0, -- The weight of this term in relation to other terms. 
CONSTRAINT taxonomy_term_data_pkey PRIMARY KEY (tid), CONSTRAINT taxonomy_term_data_tid_check CHECK (tid >= 0), CONSTRAINT taxonomy_term_data_vid_check CHECK (vid >= 0) ## taxonomy_index table nid bigint NOT NULL DEFAULT 0, -- The node.nid this record tracks. tid bigint NOT NULL DEFAULT 0, -- The term ID. sticky smallint DEFAULT 0, -- Boolean indicating whether the node is sticky. created integer NOT NULL DEFAULT 0, -- The Unix timestamp when the node was created. weight integer NOT NULL DEFAULT 0, -- A user-defined weight for each node in its respective category. CONSTRAINT taxonomy_index_nid_check CHECK (nid >= 0), CONSTRAINT taxonomy_index_tid_check CHECK (tid >= 0) # What I need - A way to determine if any of the repeated taxonomy_term_data is actually referenced in taxonomy_index. - A way to, if necessary, set all occurrences in taxonomy_index to point to just one of each repeated taxonomy_term_data - Finally, a way to delete all the taxonomy_term_data entries not in use in taxonomy_index. I suppose one or more well-written queries would do the trick, but my SQL knowledge is terribly low.
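A rough sketch of the three steps in plain SQL against the tables shown above. It deliberately ignores anything Drupal-specific (in a stock Drupal 7 install, term references can also live in field_data_*/field_revision_* tables and taxonomy_term_hierarchy, which would need the same treatment), assumes the keeper for each duplicate name is the lowest tid, and should be rehearsed on a backup first:

-- 1) How many taxonomy_index rows point at a non-keeper duplicate?
SELECT count(*)
FROM taxonomy_index ti
JOIN taxonomy_term_data d ON d.tid = ti.tid
JOIN (SELECT name, min(tid) AS keep_tid
      FROM taxonomy_term_data
      WHERE vid = 2
      GROUP BY name) k ON k.name = d.name
WHERE d.vid = 2 AND d.tid <> k.keep_tid;

-- 2) Repoint those rows at the keeper tid.
UPDATE taxonomy_index ti
SET    tid = k.keep_tid
FROM   taxonomy_term_data d
JOIN  (SELECT name, min(tid) AS keep_tid
       FROM taxonomy_term_data
       WHERE vid = 2
       GROUP BY name) k ON k.name = d.name
WHERE  ti.tid = d.tid
  AND  d.vid = 2
  AND  d.tid <> k.keep_tid;

-- 3) Delete non-keeper duplicates that nothing references any more
--    (keeps one term per name even if it is currently unused).
DELETE FROM taxonomy_term_data d
WHERE  d.vid = 2
  AND  d.tid NOT IN (SELECT min(tid) FROM taxonomy_term_data WHERE vid = 2 GROUP BY name)
  AND  NOT EXISTS (SELECT 1 FROM taxonomy_index ti WHERE ti.tid = d.tid);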
That Brazilian Guy (111 rep)
Sep 30, 2016, 09:00 PM • Last activity: Oct 1, 2016, 12:54 PM
1 vote
1 answer
8450 views
MySQL starts using > 90% of CPU, "Copying to tmp table on disk" and "converting HEAP to MyISAM"
I have a server where MySQL is periodically taking up 11 of the 12 CPU cores. Once this starts it doesn't end until I restart MySQL. At least, I haven't let it run like this for more than a half hour since it takes down a live website. I haven't been able to notice any patterns for when or why this...
I have a server where MySQL is periodically taking up 11 of the 12 CPU cores. Once this starts it doesn't end until I restart MySQL. At least, I haven't let it run like this for more than a half hour since it takes down a live website. I haven't been able to notice any patterns for when or why this happens. It's not at times of particularly high traffic and the queries being run are not abnormal to what is normally run. Here's what I get from PROCESS LIST; mysql> show processlist; +---------+------------+----------------------+-----------------+-------------+--------+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+ | Id | User | Host | db | Command | Time | State | Info | +---------+------------+----------------------+-----------------+-------------+--------+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+ | 1001328 | replicator | example.linode:59815 | NULL | Binlog Dump | 316247 | Master has sent all binlog to slave; waiting for binlog to be updated | NULL | | 2356160 | example | example:33965 | example | Query | 398 | converting HEAP to MyISAM | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2356492 | example | example:34727 | example | Query | 318 | converting HEAP to MyISAM | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2361305 | example | example:46892 | example | Query | 771 | Copying to tmp table on disk | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | [... 198 more lines like the one above, "Copying to tmp table on disk" ...] | 2362323 | example | example:49232 | example | Query | 695 | Copying to tmp table on disk | SELECT COUNT(*) AS expression FROM (SELECT DISTINCT node.title AS node_title, node.nid AS nid, node | | 2362328 | example | example:49244 | example | Query | 694 | Copying to tmp table on disk | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | [... 164 more lines like the one above, "Copying to tmp table on disk" ...] 
| 2362857 | example | example:50507 | example | Query | 428 | converting HEAP to MyISAM | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362858 | example | example:50509 | example | Query | 427 | Copying to tmp table on disk | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362859 | example | example:50511 | example | Query | 428 | Copying to tmp table on disk | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362860 | example | example:50514 | example | Query | 427 | converting HEAP to MyISAM | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362861 | example | example:50516 | example | Query | 427 | Copying to tmp table on disk | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362862 | example | example:50519 | example | Query | 427 | converting HEAP to MyISAM | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362863 | example | example:50522 | example | Query | 427 | converting HEAP to MyISAM | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362864 | example | example:50524 | example | Query | 425 | Copying to tmp table on disk | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362865 | example | example:50526 | example | Query | 425 | Copying to tmp table on disk | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362866 | example | example:50529 | example | Query | 424 | Copying to tmp table on disk | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362868 | example | example:50533 | example | Query | 423 | Copying to tmp table on disk | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362871 | example | example:50537 | example | Query | 422 | Copying to tmp table on disk | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362872 | example | example:50539 | example | Query | 422 | converting HEAP to MyISAM | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362873 | example | example:50544 | example | Query | 421 | converting HEAP to MyISAM | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362874 | example | example:50546 | example | Query | 421 | converting HEAP to MyISAM | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362875 | example | example:50548 | example | Query | 421 | Copying to tmp table on disk | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362876 | example | example:50552 | example | Query | 419 | converting HEAP to MyISAM | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362877 | example | example:50556 | example | Query | 418 | converting HEAP to MyISAM | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362878 | example | example:50559 | example | Query 
| 417 | Copying to tmp table on disk | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362880 | example | example:50565 | example | Query | 417 | converting HEAP to MyISAM | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362881 | example | example:50567 | example | Query | 416 | converting HEAP to MyISAM | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362882 | example | example:50570 | example | Query | 413 | Copying to tmp table on disk | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | | 2362883 | example | example:50572 | example | Query | 412 | converting HEAP to MyISAM | SELECT DISTINCT node.title AS node_title, node.nid AS nid, node.created AS node_created, node.nid AS | [... 67 more lines like the one above, "converting HEAP to MyISAM" ...] | 2362976 | root | localhost | NULL | Query | 0 | NULL | show processlist | +---------+------------+----------------------+-----------------+-------------+--------+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+ 452 rows in set (0.00 sec) As you can see, it's about 375 lines of "Copying tmp table on disk" and about 75 of "converting HEAP to MyISAM". My first reaction is to follow the instructions in this StackOverflow answer from @RolandoMySQLDBA and either increase the variables tmp_table_size and/or max_heap_table_size (but how can I know the amount) or set them as small as possible and create a RAM disk for MySQL's temp directory. But I thought I'd ask before doing something I don't completely understand. What can I do to further understand the problem? What are the likely issues causing this? Are either of those ideas (increase tmp_table_size, max_heap_table_size; or create RAM disk) applicable to this situation? Some more details: Server has 32GB of RAM of which I would be comfortable dedicating 10GB to a RAM disk. It's running Ubuntu 14.04.3 and MySQL Ver 14.14 Distrib 5.5.44, for debian-linux-gnu (x86_64) using readline 6.3. The MySQL server is being replicated to a slave at a different host. The vast majority of the tables are InnoDB. Customized MySQL variables: [mysqld] server-id = bind-address = log_bin = /var/log/mysql/mysql-bin.log query_cache_limit = 6M query_cache_size = 128M innodb_file_per_table default-storage-engine = InnoDB innodb_flush_log_at_trx_commit = 1 sync_binlog = 1 relay_log_purge = 1 relay_log_space_limit = 10G max_connections = 1024 innodb_buffer_pool_size = 4G # Set based on recommendations from http://dba.stackexchange.com/a/39504/34815 innodb_buffer_pool_instances = 4 # @see https://dev.mysql.com/doc/refman/5.5/en/innodb-multiple-buffer-pools.html Also: mysql> SHOW VARIABLES LIKE "%_table_size"; +---------------------+----------+ | Variable_name | Value | +---------------------+----------+ | max_heap_table_size | 16777216 | | tmp_table_size | 16777216 | +---------------------+----------+ 2 rows in set (0.00 sec) This is out of my area of expertise. Any help is appreciated. **Update** Per the request of Raymond Nijland, the SHOW CREATE TABLE main_node output. main_node is the main table being queried above, though it was likely joined with several other tables. I foolishly forgot to to run SHOW FULL PROCESSLIST; before restarting MySQL. 
Note that there should be nothing special about this table, it is one of the most used tables of Drupal sites. CREATE TABLE main_node ( nid int(10) unsigned NOT NULL AUTO_INCREMENT COMMENT 'The primary identifier for a node.', vid int(10) unsigned DEFAULT NULL COMMENT 'The current main_node_revision.vid version identifier.', type varchar(32) NOT NULL DEFAULT '' COMMENT 'The main_node_type.type of this node.', language varchar(12) NOT NULL DEFAULT '' COMMENT 'The main_languages.language of this node.', title varchar(255) NOT NULL DEFAULT '' COMMENT 'The title of this node, always treated as non-markup plain text.', uid int(11) NOT NULL DEFAULT '0' COMMENT 'The main_users.uid that owns this node; initially, this is the user that created it.', status int(11) NOT NULL DEFAULT '1' COMMENT 'Boolean indicating whether the node is published (visible to non-administrators).', created int(11) NOT NULL DEFAULT '0' COMMENT 'The Unix timestamp when the node was created.', changed int(11) NOT NULL DEFAULT '0' COMMENT 'The Unix timestamp when the node was most recently saved.', comment int(11) NOT NULL DEFAULT '0' COMMENT 'Whether comments are allowed on this node: 0 = no, 1 = closed (read only), 2 = open (read/write).', promote int(11) NOT NULL DEFAULT '0' COMMENT 'Boolean indicating whether the node should be displayed on the front page.', sticky int(11) NOT NULL DEFAULT '0' COMMENT 'Boolean indicating whether the node should be displayed at the top of lists in which it appears.', tnid int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'The translation set id for this node, which equals the node id of the source post in each set.', translate int(11) NOT NULL DEFAULT '0' COMMENT 'A boolean indicating whether this translation page needs to be updated.', PRIMARY KEY (nid), UNIQUE KEY vid (vid), KEY node_changed (changed), KEY node_created (created), KEY node_frontpage (promote,status,sticky,created), KEY node_status_type (status,type,nid), KEY node_title_type (title,type(4)), KEY node_type (type(4)), KEY uid (uid), KEY tnid (tnid), KEY translate (translate), KEY language (language) ) ENGINE=InnoDB AUTO_INCREMENT=58237 DEFAULT CHARSET=utf8 COMMENT='The base table for nodes.';
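Before committing to either idea, it is worth confirming how often temporary tables actually spill to disk, and keeping two facts in mind: the effective in-memory limit is the smaller of tmp_table_size and max_heap_table_size (so they are usually raised together), and implicit temp tables containing TEXT/BLOB columns go to disk regardless of these settings. A hedged sketch (64M is an arbitrary illustration, not a sizing recommendation for this workload):

-- compare Created_tmp_disk_tables against Created_tmp_tables over time
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';

-- affects new connections only; persist in my.cnf if the disk ratio improves
SET GLOBAL tmp_table_size      = 64 * 1024 * 1024;
SET GLOBAL max_heap_table_size = 64 * 1024 * 1024;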
donut (141 rep)
Sep 21, 2015, 09:34 PM • Last activity: Oct 26, 2015, 05:07 PM
0 votes
2 answers
1228 views
Optimization of variables in my.cnf (mysql) for Drupal
I just got a dedicated server running Ubuntu and I need to optimize the variables in my.cnf for **Drupal 7**. I understand that this is a tailored job, so **is there any tool to help me do this?** I am a newbie at this. I found this article (which seems a little outdated). It shows some values for a server with 500 MB of RAM, but my server has 16 GB of RAM. I guess it's not as easy as multiplying those values by 32...? I use InnoDB tables. According to the article, for a setup with 500 MB of RAM your my.cnf file may look like this:
[mysqld]
max_connections = 150
max_user_connections = 150
key_buffer = 36M
myisam_sort_buffer_size = 64M
join_buffer_size = 2M
read_buffer_size = 2M
sort_buffer_size = 3M
table_cache = 1024
thread_cache_size = 286
interactive_timeout = 25
wait_timeout = 1800
connect_timeout = 10
max_allowed_packet = 16M
max_connect_errors = 1000
query_cache_limit = 1M
query_cache_size = 16M
query_cache_type = 1
tmp_table_size = 16M
innodb-flush-log-at-trx-commit=2
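No single multiplier works, because most of the values above are per-connection or MyISAM-era buffers; on an InnoDB site the setting that actually scales with RAM is the buffer pool. A tool such as MySQLTuner (mysqltuner.pl) can sanity-check a running server against its real workload. Purely as a hedged starting sketch for a 16 GB dedicated box serving Drupal 7 on InnoDB (every value is an assumption to validate, not a recommendation to copy blindly):

[mysqld]
# InnoDB gets most of the memory; leave headroom for PHP/Apache on the same box
innodb_buffer_pool_size        = 8G
innodb_log_file_size           = 512M
innodb_flush_method            = O_DIRECT
innodb_flush_log_at_trx_commit = 2   # same durability trade-off as the article's example
# per-connection buffers stay small; they are allocated per session, not shared
sort_buffer_size               = 2M
join_buffer_size               = 2M
tmp_table_size                 = 64M
max_heap_table_size            = 64M
table_open_cache               = 2000
max_connections                = 150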
chefnelone (185 rep)
Sep 4, 2015, 07:58 AM • Last activity: Sep 4, 2015, 04:37 PM
1 vote
1 answer
1944 views
MySQL crashes on XAMPP while working with Drupal
Good day. Just this morning I was using Drupal to add some content when it crashed and MySQL shut down unexpectedly in XAMPP. The last thing I was doing was inserting data into a field with "autocomplete entity reference", but after inserting, it gave me an error in red: "no data exist in entity referen...
Good day. Just this morning I was using Drupal to add some content when it crashed and MySQL shut down unexpectedly in XAMPP. The last thing I was doing was inserting data into a field with an "autocomplete entity reference" widget; after inserting, it gave me an error in red: "no data exist in entity reference". The next thing I did to fix my insertion, I got the error about MySQL crashing. I tried restarting it many times, but each time it started, it didn't respond to anything and then shut down again. Sometimes I could barely log on to phpMyAdmin to check the database (it all looked OK to me), but before going any deeper it crashed again (it stays up less than a minute, and responds slowly before the crash). I tried writing a SQL statement (I thought it had something to do with Drupal's cache, so I wanted to delete that data), but it hangs and eventually MySQL shuts down again. Here's a screenshot of Drupal's initial error: drupal error. After trying to refresh and restart MySQL in XAMPP, I get this error: mysql error.

I looked around Google and Stack Exchange and found that I need to check MySQL's error log file. After long hours of narrowing down the problem, I THINK the problem is a corrupt table (I couldn't post all the error lines here because they're too long). From between the lines I found this:

> InnoDB: table "bitnami_drupal7"."field_revision_field_citizenship" is corrupted. Please drop the table and recreate.
>
> [Warning] InnoDB: Cannot open table bitnami_drupal7/field_revision_field_citizenship from the internal data dictionary of InnoDB though the .frm file for the table exists. See http://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting.html for how you can resolve the problem.

I was surprised that a whole service shuts down because of one simple table. How come? And why would a table become corrupted by itself? I was just minding my own business with Drupal content and settings (not even coding), so how could such a thing happen? I found some solutions (the one about increasing the packet size didn't work, by the way; that fixed a previous incident, but this one seems different). Here are two suggested fixes:

(1).
> [mysqld]
> innodb_force_recovery = 1

or (2).
> Delete the "ibdata1" file at "C:\xampp\mysql\data"

So, the reason I'm asking here is: are those methods safe? Will they affect my Drupal tables? It would be disastrous for me if anything went wrong, because the first solution says "FORCE", which sounds scary and, based on what I read, could end up corrupting things (I'm not entirely sure, though). Also, the second solution deletes a file I don't really understand: does ibdata1 contain any database content that affects my data files, or is it only for configuration purposes? Maybe some people don't have anything critical in their databases, so it worked for them, but for me it would be very bad if anything happened to my data through those solutions. I'd like to know whether they will affect my data. I'm pretty new to this database stuff, so I'm not sure; that's why I'm taking the cautious step of asking here. Please guide me from your experience. Thanks.
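For context, the usual salvage path in this situation (general InnoDB practice, not specific to this site) is to start the server with a low innodb_force_recovery level, copy the damaged table's rows somewhere safe, and only then drop and recreate it; ibdata1 is the InnoDB system tablespace, so deleting it is not a safe fix on a server whose data matters. A minimal SQL sketch, assuming the server has been started with innodb_force_recovery = 1 and using the table name from the log above:

    -- Copy whatever rows are still readable into a scratch table.
    CREATE TABLE field_revision_field_citizenship_backup
      AS SELECT * FROM field_revision_field_citizenship;

    -- After the copy succeeds (and has been dumped or exported elsewhere),
    -- drop the corrupted table, restart without innodb_force_recovery,
    -- recreate the table from the Drupal field schema, and reload the rows.
    DROP TABLE field_revision_field_citizenship;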
Abdulaziz Hamdan (123 rep)
Jul 6, 2015, 12:59 PM • Last activity: Jul 9, 2015, 04:35 AM
0 votes
1 answers
265 views
Count(*) as rolecount, filter by rolecount = 1
How can I filter out the rows that only have rolecount = 1? WHERE rolecount = 1 gives an error. I can't really figure out how to do it. SELECT u.uid AS uid, u.name AS name, u.picture AS picture, n.field_full_name_value AS field_full_name_value, f.field_function_value AS field_function_value, pi.field_ph...
How can I filter out the rows that only have rolecount = 1? WHERE rolecount = 1 gives an error. I can't really figure out how to do it.

    SELECT u.uid AS uid, u.name AS name, u.picture AS picture,
           n.field_full_name_value AS field_full_name_value,
           f.field_function_value AS field_function_value,
           pi.field_phone_intern__value AS field_phone_intern__value,
           ur.rid AS rid,
           COUNT(*) AS rolecount
    FROM dev_drupal_wurth-intranet.users u
    LEFT OUTER JOIN dev_drupal_project.users_roles ur ON u.uid = ur.uid
    LEFT OUTER JOIN dev_drupal_project.field_data_field_full_name n ON u.uid = n.entity_id
    LEFT OUTER JOIN dev_drupal_project.field_data_field_function f ON u.uid = f.entity_id
    LEFT OUTER JOIN dev_drupal_project.field_data_field_phone_intern_ pi ON u.uid = pi.entity_id
    GROUP BY u.uid;
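The error occurs because rolecount is an aggregate computed after WHERE has been evaluated; conditions on aggregates belong in a HAVING clause, which filters the grouped rows. A minimal sketch on a stripped-down version of the query above (only the grouping column, one join, and the count are kept for brevity):

    SELECT u.uid, COUNT(*) AS rolecount
    FROM users u
    LEFT OUTER JOIN users_roles ur ON u.uid = ur.uid
    GROUP BY u.uid
    HAVING COUNT(*) = 1;  -- MySQL also accepts the alias: HAVING rolecount = 1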
mgoubert (103 rep)
Aug 20, 2014, 07:01 AM • Last activity: Aug 20, 2014, 07:19 AM
1 votes
0 answers
1484 views
What to do with unused RAM on a MySQL database server?
For years I have been running dedicated servers with more or less limited resources, resulting in unsatisfying performance of the web application (Drupal). Tools like Matthew Montgomery's *MySQL Performance Tuning Primer* and Major Hayden's *MySQLTuner* always suggested parameters that required mor...
For years I have been running dedicated servers with more or less limited resources, resulting in unsatisfying performance of the web application (Drupal). Tools like Matthew Montgomery's *MySQL Performance Tuning Primer* and Major Hayden's *MySQLTuner* always suggested parameters that required more RAM than those servers had. Almost every article about database performance repeated the same mantra: "RDBMSs require as much memory as possible to work fast", "you can never have enough RAM".

When I set up my latest dedicated database server a couple of months ago, I learned that this isn't the whole truth (most of you probably know this already). The current database server (Intel Xeon E5-1620v2 @ 3.7/3.9 GHz, 4 cores/8 threads, 64 GB RAM) isn't even excessively well equipped; however, I have been unable to make full use of its available memory because MySQL 5.5.37-0+wheezy1 (Debian) won't become faster if the relevant tunable parameters get more resources. In fact, above a certain "sweet spot", MySQL's performance becomes worse than it is with less RAM. This was a surprising finding I hadn't expected. In the past weeks I did some research and ran lots of tests; my results are consistent, and others have experienced similar limitations of MySQL and documented them on the web. Some examples:

**query_cache_size** - defaults on Debian to 16M; in my case, the "sweet spot" appears to be between 256M and 512M. With 1G or even 2G, it performs significantly slower than with the default 16M (cf. stackoverflow.com/questions/2095614/mysql-query-caching-limited-to-a-maximum-cache-size-of-128-mb, blogs.oracle.com/dlutz/entry/mysql_query_cache_sizing).

**join_buffer_size** - started tuning with "256" and increased in small steps to "15M"; with more memory, MySQL gets slower (cf. dba.stackexchange.com/questions/53201/mysql-creates-temporary-tables-on-disk-how-do-i-stop-it).

**tmp_table_size** and **max_heap_table_size** - these default on Debian to "32M"; I increased both values in small steps to "12G" each; with more memory, MySQL becomes significantly slower and - even worse - the number of temporary tables created on disk does not decrease. It stays around 36%-38%, no matter whether tmp_table_size and max_heap_table_size are "25G" each or "10G" each. I'm currently working my way back down again to find the "sweet spot" (probably below "10G"; cf. dba.stackexchange.com/questions/53201/mysql-creates-temporary-tables-on-disk-how-do-i-stop-it).

**innodb_buffer_pool_size** - I started with "1G" and increased to "24G". More memory does not result in better database performance (cf. www.mysqlperformanceblog.com/2007/11/03/choosing-innodb_buffer_pool_size/; dba.stackexchange.com/questions/19164/what-to-set-innodb-buffer-pool-and-why/19181; dba.stackexchange.com/questions/39467/mysql-performance-impact-of-increasing-innodb-buffer-pool-size; dev.mysql.com/doc/refman/5.0/en/innodb-parameters.html)

Bottom line after three months of trial & error with the MySQL configuration: even under heavy load, MySQL plus the OS do not require significantly more than 25 GB of RAM. If I force significantly more RAM upon MySQL, the web application becomes slower than when running with Debian's very conservative default settings. The most plausible explanation for this behaviour I could find is that MySQL's caching algorithms are buggy at some point and/or not fully optimized. Currently I'm in the bizarre situation of having 1/2 - 1/3 of the database server's memory vacant.

For the time being I added some of it to a memcache cluster (currently using 26G of the server's memory according to 'top'). Still, the server has ~20G of RAM vacant. Questions: a) Is there a more beneficial way to make use of this memory with MySQL, and b) which tools are recommended to get hints when *MySQL Performance Tuning Primer* and *MySQLTuner* can no longer suggest anything useful?

**Excerpt from my.cnf**

    …
    pid-file = /var/run/mysqld/mysqld.pid
    socket = /var/run/mysqld/mysqld.sock
    port = 3306
    basedir = /usr
    datadir = /var/lib/mysql
    tmpdir = /tmp
    lc-messages-dir = /usr/share/mysql
    skip-external-locking
    key_buffer = 16M
    max_allowed_packet = 16M
    thread_stack = 192K
    thread_cache_size = 8
    myisam-recover = BACKUP
    max_connections = 150
    table_cache = 15000
    table_definition_cache = 5000
    thread_concurrency = 8
    query_cache_limit = 2M
    query_cache_size = 512M
    join_buffer_size = 15M
    tmp_table_size = 10G
    max_heap_table_size = 10G
    sort_buffer_size = 1M
    expire_logs_days = 10
    max_binlog_size = 100M
    innodb_buffer_pool_size = 24G
    innodb_flush_method=O_DIRECT
    innodb_io_capacity = 2000
    innodb_read_io_threads = 8
    innodb_thread_concurrency = 0
    innodb_write_io_threads = 8
    innodb_commit_concurrency = 16
    …

**Size of the databases:**

    # du -h /var/lib/mysql
    31G /var/lib/mysql

**Number of tables:**

    mysql> SELECT IFNULL(table_schema,'Total') "Database", TableCount
        -> FROM (SELECT COUNT(1) TableCount, table_schema
        ->       FROM information_schema.tables
        ->       WHERE table_schema NOT IN ('information_schema','mysql')
        ->       GROUP BY table_schema WITH ROLLUP) A;
    +-----------------------+------------+
    | Database              | TableCount |
    +-----------------------+------------+
    | Total                 |       3506 |
    +-----------------------+------------+

**Size of InnoDB data and indexes:**

    mysql> SELECT SUM(data_length+index_length)/1024/1024 AS
        ->        total_InnoDB_size_in_MB
        -> FROM INFORMATION_SCHEMA.TABLES
        -> WHERE engine = 'InnoDB';
    +-------------------------+
    | total_InnoDB_size_in_MB |
    +-------------------------+
    |          20355.12500000 |
    +-------------------------+

**Key figures from MySQLTuner:**

    >> MySQLTuner 1.1.1 - Major Hayden
    -------- Storage Engine Statistics --------------------------------
    [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
    [--] Data in MyISAM tables: 278M (Tables: 50)
    [--] Data in InnoDB tables: 10G (Tables: 3439)
    [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
    -------- Performance Metrics --------------------------------------
    [--] Reads / Writes: 76% / 24%
    [--] Total buffers: 34.5G global + 16.6M per thread (150 max threads)
    [OK] Maximum possible memory usage: 37.0G (58% of installed RAM)
    [OK] Slow queries: 0% (1K/265M)
    [OK] Key buffer size / total MyISAM indexes: 16.0M/160.4M
    [OK] Key buffer hit rate: 100.0% (2B cached / 36K reads)
    [OK] Query cache efficiency: 73.9% (179M cached / 243M selects)
    [!!] Query cache prunes per day: 6750338
    [OK] Sorts requiring temporary tables: 0% (15K temp sorts / 11M sorts)
    [!!] Joins performed without indexes: 501667
    [!!] Temporary tables created on disk: 39% (7M on disk / 19M total)
    [OK] Thread cache hit rate: 89% (123K created / 1M connections)
    [OK] Table cache hit rate: 35% (5K open / 16K opened)
    [OK] Open file limit used: 0% (163/30K)
    [OK] Table locks acquired immediately: 99% (126M immediate / 126M locks)
    [OK] InnoDB data size / buffer pool: 10.3G/24.0G

**Key figures from MySQL Tuning Primer:**

    INNODB STATUS
    Current InnoDB index space = 9.59 G
    Current InnoDB data space = 10.27 G
    Current InnoDB buffer pool free = 74 %
    Current innodb_buffer_pool_size = 24.00 G
    Depending on how much space your innodb indexes take up it may be safe
    to increase this value to up to 2 / 3 of total system memory

    MEMORY USAGE
    Max Memory Ever Allocated : 26.97 G
    Configured Max Per-thread Buffers : 2.42 G
    Configured Max Global Buffers : 24.53 G
    Configured Max Memory Limit : 26.95 G
    Physical Memory : 62.94 G
    Max memory limit seem to be within acceptable norms
    …
    Current query_cache_size = 512 M
    Current query_cache_used = 429 M
    Current query_cache_limit = 2 M
    Current Query cache Memory fill ratio = 83.79 %
    Current query_cache_min_res_unit = 4 K
    However, 39248536 queries have been removed from the query cache due to lack of memory
    Perhaps you should raise query_cache_size

    JOINS
    Current join_buffer_size = 15.00 M
    You have had 502190 queries where a join could not use an index properly
    join_buffer_size >= 4 M
    This is not advised

    OPEN FILES LIMIT
    Current open_files_limit = 30160 files
    The open_files_limit should typically be set to at least 2x-3x
    that of table_cache if you have heavy MyISAM usage.
    Your open_files_limit value seems to be fine

    TABLE CACHE
    Current table_open_cache = 15000 tables
    Current table_definition_cache = 5000 tables
    You have a total of 3530 tables
    You have 5784 open tables.
    The table_cache value seems to be fine

    TEMP TABLES
    Current max_heap_table_size = 10.00 G
    Current tmp_table_size = 10.00 G
    Of 11854537 temp tables, 39% were created on disk
    Perhaps you should increase your tmp_table_size and/or
    max_heap_table_size to reduce the number of disk-based temporary tables
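One detail that may explain the stubborn ~39% figure: in MySQL 5.5, any implicit temporary table that contains BLOB or TEXT columns is materialised on disk regardless of tmp_table_size, and Drupal's cache and field tables are full of such columns, so raising the in-memory limit cannot move that ratio. A minimal sketch of the counters worth watching while testing (standard status variables, nothing site-specific):

    -- Raw counters for implicit temporary tables (in memory vs. on disk).
    SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';

    -- The same figure as a single ratio, via information_schema (available in 5.5).
    SELECT d.VARIABLE_VALUE / t.VARIABLE_VALUE AS disk_tmp_ratio
    FROM information_schema.GLOBAL_STATUS d
    JOIN information_schema.GLOBAL_STATUS t
      ON d.VARIABLE_NAME = 'CREATED_TMP_DISK_TABLES'
     AND t.VARIABLE_NAME = 'CREATED_TMP_TABLES';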
user153878 (11 rep)
Jul 1, 2014, 11:50 PM
0 votes
1 answers
2790 views
My database keeps crashing on startup; how can I repair it without it running?
I've got some problems with my database. I hope somebody can help me out. Problem: the database keeps crashing on startup. In my log I see a couple of these errors: [ERROR] /usr/libexec/mysqld: Table '.tablename' is marked as crashed and should be repaired Normally I would repair this table using P...
I've got some problems with my database. I hope somebody can help me out. Problem: the database keeps crashing on startup. In my log I see a couple of these errors:

    [ERROR] /usr/libexec/mysqld: Table '.tablename' is marked as crashed and should be repaired

Normally I would repair this table using phpMyAdmin or SSH, but is it possible to do this without the database running? (Because when I start it, it crashes and stops.) Or is there any other way to get this all working again? I've got a backup, but I would lose some data. The table that is crashed contains cache data, so I could do without that. Thank you! Gr. Matthijs

## UPDATE by RolandoMySQLDBA

- Please post the contents of /etc/my.cnf
- Is the DB server a bare-metal machine or a VM?
- How much RAM does the server have?
- Are all the tables InnoDB, MyISAM, or a mixture of both?
- Is the OS Linux or Windows?
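Since the "marked as crashed" message comes from MyISAM, the fully offline route is the myisamchk utility run against the table files while mysqld is stopped; once the server stays up long enough to accept a connection, the SQL equivalents below apply. A minimal sketch; the table name is hypothetical because the log message truncates it, and for a Drupal cache table simply emptying it is also safe, since Drupal rebuilds cache rows on demand:

    -- Substitute the real table name from the error log.
    CHECK TABLE cache_form;    -- confirm the table is marked as crashed
    REPAIR TABLE cache_form;   -- rebuild the damaged MyISAM table

    -- Alternative for a Drupal cache table: discard the cached rows entirely.
    TRUNCATE TABLE cache_form;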
user3166736 (1 rep)
Jan 6, 2014, 08:12 PM • Last activity: Jan 8, 2014, 08:46 PM
0 votes
1 answers
1098 views
Combine results from two queries?
I need to combine the results from two queries into one result set and order them altogether. This is the working, raw SQL code I have: SELECT * FROM ( SELECT t1.id AS entity_id, t2.title, 'query1' AS entity_type FROM `table1` t1, `table2` t2 WHERE t1.id = t2.id UNION ALL SELECT t3.id AS entity_id,...
I need to combine the results from two queries into one result set and order them all together. This is the working, raw SQL I have:

    SELECT * FROM (
      SELECT t1.id AS entity_id, t2.title, 'query1' AS entity_type
      FROM `table1` t1, `table2` t2
      WHERE t1.id = t2.id
      UNION ALL
      SELECT t3.id AS entity_id, t4.title, 'query2' AS entity_type
      FROM `table3` t3, `table4` t4
      WHERE t3.id = t4.id
    ) AS results
    ORDER BY results.entity_id

The query above works perfectly, but I need to convert it into a specific syntax for a CMS database API, which does not support such queries. I am looking for a form without the outer SELECT * FROM (subquery), since simpler queries can be converted easily. Is there any command I can use to UNION ALL and order the records from both sub-queries? Thank you!
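In plain MySQL syntax, an ORDER BY written after the last SELECT of a UNION applies to the combined result, so the outer wrapper is not required. A minimal sketch using the same table and alias names as above (the parentheses around each SELECT are optional but make the scope explicit):

    (SELECT t1.id AS entity_id, t2.title, 'query1' AS entity_type
       FROM `table1` t1, `table2` t2
      WHERE t1.id = t2.id)
    UNION ALL
    (SELECT t3.id AS entity_id, t4.title, 'query2' AS entity_type
       FROM `table3` t3, `table4` t4
      WHERE t3.id = t4.id)
    ORDER BY entity_id;

Whether the CMS API in question can express this form is a separate matter; the SQL itself is standard.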
Aram Boyajyan (131 rep)
Nov 11, 2013, 05:54 PM • Last activity: Nov 11, 2013, 06:43 PM
1 votes
0 answers
326 views
Drupal database doesn't work after MySQL restart
I've installed Drupal 7.x on my local XAMPP 1.8.3 server. Everything went fine, but after adding some modules, changing the Drupal theme and restarting the MySQL server, MySQL started reporting errors and stopped working. *mysql_error.log* > 2013-09-02 16:18:46 2544 [Note] Plugin 'FEDERATED'...
I've installed Drupal 7.x on my local XAMPP 1.8.3 server. Everything went fine, but after adding some modules, changing the Drupal theme and restarting the MySQL server, MySQL started reporting errors and stopped working.

*mysql_error.log*

> 2013-09-02 16:18:46 2544 [Note] Plugin 'FEDERATED' is disabled.
> 2013-09-02 16:18:46 3e8 InnoDB: Warning: Using innodb_additional_mem_pool_size is DEPRECATED. This option may be removed in future releases, together with the option innodb_use_sys_malloc and with the InnoDB's internal memory allocator.
> 2013-09-02 16:18:46 2544 [Note] InnoDB: The InnoDB memory heap is disabled
> 2013-09-02 16:18:46 2544 [Note] InnoDB: Mutexes and rw_locks use Windows interlocked functions
> 2013-09-02 16:18:46 2544 [Note] InnoDB: Compressed tables use zlib 1.2.3
> 2013-09-02 16:18:46 2544 [Note] InnoDB: Not using CPU crc32 instructions
> 2013-09-02 16:18:46 2544 [Note] InnoDB: Initializing buffer pool, size = 16.0M
> 2013-09-02 16:18:46 2544 [Note] InnoDB: Completed initialization of buffer pool
> 2013-09-02 16:18:46 2544 [Note] InnoDB: Highest supported file format is Barracuda.
> 2013-09-02 16:18:47 2544 [Note] InnoDB: The log sequence numbers 1600614 and 1600614 in ibdata files do not match the log sequence number 1600644 in the ib_logfiles!
> 2013-09-02 16:18:47 2544 [Note] InnoDB: Database was not shutdown normally!
> 2013-09-02 16:18:47 2544 [Note] InnoDB: Starting crash recovery.
> 2013-09-02 16:18:47 2544 [Note] InnoDB: Reading tablespace information from the .ibd files...
> 2013-09-02 16:18:47 2544 [ERROR] InnoDB: Attempted to open a previously opened tablespace. Previous tablespace drupal/variable uses space ID: 2 at filepath: .\drupal\variable.ibd. Cannot open tablespace mysql/innodb_index_stats which uses space ID: 2 at filepath: .\mysql\innodb_index_stats.ibd
> InnoDB: Error: could not open single-table tablespace file .\mysql\innodb_index_stats.ibd
> InnoDB: We do not continue the crash recovery, because the table may become
> InnoDB: corrupt if we cannot apply the log records in the InnoDB log to it.
> InnoDB: To fix the problem and start mysqld:
> InnoDB: 1) If there is a permission problem in the file and mysqld cannot
> InnoDB: open the file, you should modify the permissions.
> InnoDB: 2) If the table is not needed, or you can restore it from a backup,
> InnoDB: then you can remove the .ibd file, and InnoDB will do a normal
> InnoDB: crash recovery and ignore that table.
> InnoDB: 3) If the file system or the disk is broken, and you cannot remove
> InnoDB: the .ibd file, you can set innodb_force_recovery > 0 in my.cnf
> InnoDB: and force InnoDB to continue crash recovery here.

I tried to fix the problem with the solutions named above, but it didn't work. I turned UAC off (for point 1, the permission problem) and deleted the .ibd file for point 2. As for point 3, I've looked everywhere for the my.cnf but couldn't find one. I don't know where to look for the problem/solution :(
flumingo (111 rep)
Sep 3, 2013, 09:08 AM
1 votes
2 answers
241 views
What is proper database design for a Drupal table with an order field?
I am adding a custom widget to an instance of Drupal 6.x & MySQL 5.5 and came across a problem with updating rows. I have a table of recipe ingredients where multiple ingredients are tied to a single recipe by node id (nid) & version id (vid). The primary key is vid, nid & order. The vid & nid are r...
I am adding a custom widget to an instance of Drupal 6.x & MySQL 5.5 and came across a problem with updating rows. I have a table of recipe ingredients where multiple ingredients are tied to a single recipe by node id (nid) & version id (vid). The primary key is vid, nid & order. The vid & nid are related to the recipe's nid & vid fields. The table schema is:

    +-----------------+------------------+
    | Field           | Type             |
    +-----------------+------------------+
    | vid             | int(10) unsigned |
    | nid             | int(10) unsigned |
    | order           | int(10) unsigned |
    | name            | varchar(255)     |
    | unit_of_measure | varchar(32)      |
    | quantity        | int(10) unsigned |
    +-----------------+------------------+

The problem comes when attempting to reorder the ingredients. For instance:

    +-----+-----+-------+---------+-----------------+----------+
    | vid | nid | order | name    | unit_of_measure | quantity |
    +-----+-----+-------+---------+-----------------+----------+
    |   5 |   1 |     1 | Chicken | Lb              |        1 |
    |   5 |   1 |     2 | Rice    | Cup             |        2 |
    |   5 |   1 |     3 | Thyme   | Tbsp            |        3 |
    +-----+-----+-------+---------+-----------------+----------+

I want to move Thyme to the top of the list, but I can't change the order for Thyme to 1 since that primary key already exists (Chicken). I can't move Chicken down to order 2 because Rice is already there, etc.

My position is that we should add a unique auto-incrementing int field that will be the sole primary key, which will let us reorder the rows without incident, with the possibility that two rows might end up with the same nid, vid & order. My coworker's position is that adding a unique auto-increment int field is bad design, because there should never be two different rows that have the same vid, nid & order. But following this belief, there are two ways to implement a reorder of the rows:

1. Update each row's order with some large number (e.g. 1001, 1002, 1003) so that the original order no longer conflicts, then update each row with the correct order values (1, 2, 3); a sketch of this follows below.
2. Delete each row that has the same nid & vid, then insert all the rows again in the correct order.

From the database's perspective, what is the correct approach?
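For reference, the first approach in the list above (offset, then renumber) can be expressed as two UPDATE passes inside a transaction, which keeps the composite primary key and avoids transient duplicates. A minimal sketch, assuming a hypothetical table name recipe_ingredients and the column names from the schema above; the rows are matched by name here purely for illustration:

    START TRANSACTION;

    -- Pass 1: shift this recipe revision's rows out of the 1..n range
    -- so the renumbering below cannot collide with an existing key.
    UPDATE recipe_ingredients
       SET `order` = `order` + 1000
     WHERE vid = 5 AND nid = 1;

    -- Pass 2: write the final positions (Thyme first, then Chicken, then Rice).
    UPDATE recipe_ingredients SET `order` = 1 WHERE vid = 5 AND nid = 1 AND name = 'Thyme';
    UPDATE recipe_ingredients SET `order` = 2 WHERE vid = 5 AND nid = 1 AND name = 'Chicken';
    UPDATE recipe_ingredients SET `order` = 3 WHERE vid = 5 AND nid = 1 AND name = 'Rice';

    COMMIT;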
Patrick (4329 rep)
May 29, 2013, 08:07 PM • Last activity: Aug 28, 2013, 11:51 AM
1 votes
1 answers
260 views
Why does copied MySQL database have different data than source?
I am trying to migrate database A to database B. I have tried various methods for this, including... 1. Using Navicat, do Tools > Data Transfer, copying structure and data from A to B 2. Using Navicat, delete all tables from B, then drag and drop all tables (structure and data) from A to B 3. Using...
I am trying to migrate database A to database B. I have tried various methods for this, including...

1. Using Navicat, do Tools > Data Transfer, copying structure and data from A to B
2. Using Navicat, delete all tables from B, then drag and drop all tables (structure and data) from A to B
3. Using Sequel Pro, export A to an SQL file, then import the SQL file into B

Option 1 always fails, and I'm not sure why. The error message just says "failed". Options 2 and 3 seem to work fine, but then some tables have slightly different data, which makes absolutely no sense to me. For example, here is a row of data from database A...

    node book 0 438 2255 und 0 foo NULL full_html

...and here is the same row in database B...

    node book 0 438 2255 und 0 bar NULL full_html

The "foo" and "bar" are HTML content for web page #438 at revision #2255. I have no idea how the HTML content for the same exact revision can be different, when database B should be a straight copy of A. Mysteriously, "foo" represents the most recent version of page #438, and "bar" actually reflects the prior version of page #438. How is this even possible?

Also, if any of you DBAs happen to be familiar with Drupal, note that this is for a Drupal site. Incidentally, when I try to restore a backup on this site (a ZIP or SQL file made with the Backup and Migrate module), I get an unhelpful "site encountered an unexpected error" message and nothing else in the error log. The same ZIP and SQL files restore just fine in other test environments, so something about this one site just seems wonky.
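Judging by the columns shown, those rows appear to come from Drupal 7's field_revision_body table, so one way to pin down whether the copy really diverged (rather than the application reading a different row) is to compare checksums of that table on both servers. A minimal sketch; CHECKSUM TABLE and the per-row hash are ordinary MySQL statements, and the table/column names are an assumption based on the Drupal 7 schema:

    -- Run against database A and database B, then compare the results.
    CHECKSUM TABLE field_revision_body;

    -- Or compare the specific revision directly on both sides.
    SELECT entity_id, revision_id, MD5(body_value) AS body_hash
    FROM field_revision_body
    WHERE entity_id = 438 AND revision_id = 2255;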
Dave (11 rep)
May 15, 2013, 05:42 PM • Last activity: May 16, 2013, 01:19 AM