
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
1060 views
Unable to show binary logs: how to remove logs from MySQL?
Server version: 5.1.50-log MySQL Community Server (GPL). Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show binary logs;

##PROBLEM - The data directory is full. I want to free space, but PURGE is not making more space. Any idea what to do to clear the relay logs and binlog files?
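The usual fix is server-side log rotation rather than deleting files by hand; a minimal sketch, assuming binary logging is enabled (dates and file names below are illustrative):

```sql
-- Drop binlogs older than a cutoff
PURGE BINARY LOGS BEFORE '2015-06-01 00:00:00';

-- Or everything up to (not including) a named file
PURGE BINARY LOGS TO 'mysql-bin.000123';

-- Keep the directory from filling up again (value is illustrative)
SET GLOBAL expire_logs_days = 3;
```

On a replica, relay logs are normally purged automatically once applied; RESET SLAVE removes them outright, but only do that when re-initializing replication.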
Gaurav Jain (3 rep)
Jun 24, 2015, 05:34 AM • Last activity: Aug 3, 2025, 09:03 AM
2 votes
2 answers
508 views
MySQL Upgrade on Windows
I have MySQL 5.1 installed on a Windows box. Now I would like to upgrade to the latest MySQL 5.6 version. I am planning to install MySQL 5.6 on the same server with the same port 3306. After installation, I will map the new installation's data directory to the older data directory used by MySQL 5.1. After this, I will upgrade using mysql_upgrade.exe. Will this plan work or not? Also let me know if any precautions need to be taken. Edit: This is a production server and the DB is huge (100 GB), so we cannot use a backup-and-restore approach, which is a time-consuming process.
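For reference, the in-place path described here can work, though the MySQL manual recommends upgrading one release series at a time (5.1 → 5.5 → 5.6) rather than skipping 5.5. A sketch of the final step, with illustrative install paths and service name:

```
REM Windows command prompt; path and service name are illustrative
"C:\Program Files\MySQL\MySQL Server 5.6\bin\mysql_upgrade.exe" -u root -p

REM restart the service so rebuilt system tables are reloaded
net stop MySQL56 && net start MySQL56
```

Taking a cold copy of the old data directory before starting is the main precaution, since an in-place upgrade is not reversible.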
Phanindra (1007 rep)
Jun 17, 2015, 01:49 PM • Last activity: Jul 3, 2025, 11:02 AM
2 votes
2 answers
2438 views
Importing a large mysql dump while replication is running
So we have a simple master/slave MySQL replication running between two CentOS servers. The master has multiple databases, e.g.:
  • Database1
  • Database2
  • Database3
The issue is we have a mysqldump file of a new 60 GB database (Database4). What's the best way to import Database4 without breaking replication? I was thinking we could stop replication, import the mysqldump onto both master and slave, and then restart replication, but I was hoping there is an alternate way that would minimize downtime.
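One commonly used alternative, sketched under the assumption that the importing account has SUPER (the dump path is illustrative): load the dump on each server separately with binary logging disabled for that session, so nothing flows through replication and nothing stops.

```sql
-- On the master, in the importing session only:
SET sql_log_bin = 0;
SOURCE /path/to/database4.sql;  -- mysql client command
SET sql_log_bin = 1;

-- Repeat the same import directly on the slave.
```

Replication keeps running for Database1-3 the whole time; the risk to watch is writes reaching Database4 on the master before the slave copy finishes.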
    user125340 (21 rep)
    May 22, 2017, 09:58 PM • Last activity: Jun 25, 2025, 06:08 PM
    0 votes
    1 answer
    225 views
    MySQL update with inner join taking too long
I have the following two tables:

CREATE TABLE tbl_products_temp (
  cd_id INT(6) UNSIGNED ZEROFILL NOT NULL AUTO_INCREMENT,
  cd_structure MEDIUMBLOB NOT NULL,
  cd_smiles TEXT NULL DEFAULT NULL COLLATE 'latin1_swedish_ci',
  cd_formula VARCHAR(100) NULL DEFAULT NULL COLLATE 'latin1_swedish_ci',
  cd_molweight DOUBLE NULL DEFAULT NULL,
  cd_hash INT(11) NOT NULL,
  cd_timestamp DATETIME NOT NULL,
  cd_pre_calculated TINYINT(1) NOT NULL DEFAULT '0',
  iupac_name VARCHAR(600) NULL DEFAULT NULL COLLATE 'latin1_swedish_ci',
  price_currency VARCHAR(3) NULL DEFAULT NULL COLLATE 'latin1_swedish_ci',
  price1 DECIMAL(10,2) NULL DEFAULT NULL,
  price2 DECIMAL(10,2) NULL DEFAULT NULL,
  price3 DECIMAL(10,2) NULL DEFAULT NULL,
  last_update TIMESTAMP NOT NULL DEFAULT current_timestamp() ON UPDATE current_timestamp(),
  deleted_dt DATETIME NULL DEFAULT NULL,
  PRIMARY KEY (cd_id) USING BTREE,
  INDEX tbl_products_temp_hx (cd_hash) USING BTREE,
  INDEX tbl_products_temp_px (cd_pre_calculated) USING BTREE,
  INDEX idx_iupac_name (iupac_name) USING BTREE,
  INDEX idx_formula (cd_formula) USING BTREE,
  INDEX idx_molweight (cd_molweight) USING BTREE,
  INDEX idx_smiles (cd_smiles(100)) USING BTREE
) COLLATE='latin1_swedish_ci' ENGINE=MyISAM

Table tbl_products_temp2 has an identical definition (only the table name differs).

Table tbl_products_temp has 241633 rows. Table tbl_products_temp2 has 218380 rows. I'm trying the following update query but it's taking forever:

UPDATE tbl_products_temp
INNER JOIN tbl_products_temp2
  ON tbl_products_temp.iupac_name = tbl_products_temp2.iupac_name
SET tbl_products_temp.price1 = tbl_products_temp2.price1,
    tbl_products_temp.price2 = tbl_products_temp2.price2,
    tbl_products_temp.price3 = tbl_products_temp2.price3,
    tbl_products_temp.price_currency = 'EUR'

When I run EXPLAIN with the following:

explain select * from tbl_products_temp inner join tbl_products_temp2 on tbl_products_temp.iupac_name = tbl_products_temp2.iupac_name

I get:

id | select_type | table              | type | possible_keys  | key            | key_len | ref                                | rows   | Extra
---|-------------|--------------------|------|----------------|----------------|---------|------------------------------------|--------|------------
1  | SIMPLE      | tbl_products_temp2 | ALL  | idx_iupac_name | (NULL)         | (NULL)  | (NULL)                             | 218380 |
1  | SIMPLE      | tbl_products_temp  | ref  | idx_iupac_name | idx_iupac_name | 603     | mydb.tbl_products_temp2.iupac_name | 1      | Using where

How can I speed up this update query?
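With MyISAM, the UPDATE holds a table-level lock for its whole duration, and a single statement touching ~240k rows through a 603-byte VARCHAR key can run a very long time. One common mitigation, sketched here with illustrative batch bounds, is to update in primary-key ranges so each statement holds the lock for a shorter stretch:

```sql
-- Run repeatedly, advancing the cd_id window until the max id is covered
UPDATE tbl_products_temp t
INNER JOIN tbl_products_temp2 t2
  ON t.iupac_name = t2.iupac_name
SET t.price1 = t2.price1,
    t.price2 = t2.price2,
    t.price3 = t2.price3,
    t.price_currency = 'EUR'
WHERE t.cd_id BETWEEN 1 AND 20000;  -- bounds are illustrative
```

Progress also becomes observable between batches, which a single multi-hour statement never offers.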
    adam78 (155 rep)
    Nov 7, 2022, 05:19 PM • Last activity: Jun 16, 2025, 03:06 PM
    1 vote
    2 answers
    202 views
    Problem with complex queries after migrating from MySQL 5.1 to MariaDB 10.4
I completed a migration from a MySQL 5.1 master-slave system to a MariaDB 10.4 Galera Cluster. All the databases were copied over since they were on InnoDB file-per-table, and they are working fine. However, some complex queries are VERY slow on the new system, to the point of being completely unusable, and I'm a little bit at a loss... I fixed some of them by forcing an index, but I don't know how to fix others like this one. This is the query:
    SELECT * FROM (
        SELECT * FROM abvalue WHERE deviceid='XXX'
    ) AS abvalue LEFT JOIN (
        SELECT * FROM abperson WHERE deviceid='XXX'
    ) AS abperson ON abvalue.person_id=abperson.person_id
    (I did the query that way because it was the most efficient way back in the day.) This query is a little slow on the old system (about 2 seconds) but it completes. However, on the new system I gave up after 60 seconds... I tried forcing indexes but was unable to fix it. This is the EXPLAIN on MySQL 5.1:
    +----+-------------+------------+------+----------------+----------------+---------+------+-------+-------------+
    | id | select_type | table      | type | possible_keys  | key            | key_len | ref  | rows  | Extra       |
    +----+-------------+------------+------+----------------+----------------+---------+------+-------+-------------+
    |  1 | PRIMARY     | <derived2> | ALL  | NULL           | NULL           | NULL    | NULL | 12428 |             |
    |  1 | PRIMARY     | <derived3> | ALL  | NULL           | NULL           | NULL    | NULL |   694 |             |
    |  3 | DERIVED     | abperson   | ref  | deviceid_index | deviceid_index | 36      |      |   693 | Using where |
    |  2 | DERIVED     | abvalue    | ref  | deviceid_index | deviceid_index | 36      |      | 16484 | Using where |
    +----+-------------+------------+------+----------------+----------------+---------+------+-------+-------------+
    and this is the explain on MariaDB 10.4
    +------+-------------+----------+------------+-----------------------------+-----------------------------+---------+--------------------------+---------+---------------------------------+
    | id   | select_type | table    | type       | possible_keys               | key                         | key_len | ref                      | rows    | Extra                           |
    +------+-------------+----------+------------+-----------------------------+-----------------------------+---------+--------------------------+---------+---------------------------------+
    |    1 | SIMPLE      | abvalue  | ref        | deviceid_index              | deviceid_index              | 36      | const                    | 22780   | Using where                     |
    |    1 | SIMPLE      | abperson | ref|filter | deviceid_index,person_index | person_index|deviceid_index | 5|36    | ownspy.abvalue.person_id | 39 (0%) | Using where; Using rowid filter |
    +------+-------------+----------+------------+-----------------------------+-----------------------------+---------+--------------------------+---------+---------------------------------+
    these are the structures of the tables:
    CREATE TABLE abvalue (
      id bigint(20) NOT NULL AUTO_INCREMENT,
      deviceid char(32) NOT NULL,
      value char(128) DEFAULT NULL,
      type int(11) DEFAULT NULL,
      person_id int(11) DEFAULT NULL,
      PRIMARY KEY (id),
      KEY deviceid_index (deviceid(12)) USING BTREE,
      KEY person_index (person_id) USING BTREE,
      KEY value_index (value(5)) USING BTREE
    ) ENGINE=InnoDB
    
    CREATE TABLE abperson (
      id bigint(20) NOT NULL AUTO_INCREMENT,
      deviceid char(32) NOT NULL,
      first text DEFAULT NULL,
      last text DEFAULT NULL,
      person_id int(11) DEFAULT NULL,
      PRIMARY KEY (id),
      KEY deviceid_index (deviceid(12)) USING BTREE,
      KEY person_index (person_id) USING BTREE
    ) ENGINE=InnoDB
    Any help is really welcomed!
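Two things often worth testing here (sketches, not verified against this schema): flattening the derived tables into a plain join so the optimizer sees both predicates directly, and disabling the rowid-filter optimization that the 10.4 plan shows in its Extra column.

```sql
-- Flattened form of the same query (deviceid filter on abperson moves into ON
-- to preserve the LEFT JOIN semantics of the original derived tables)
SELECT *
FROM abvalue
LEFT JOIN abperson
  ON abperson.person_id = abvalue.person_id
 AND abperson.deviceid = 'XXX'
WHERE abvalue.deviceid = 'XXX';

-- The plan shows "Using rowid filter"; it can be switched off per session
SET SESSION optimizer_switch = 'rowid_filter=off';
```

If the flattened form is fast, the old derived-table trick can simply be retired; it was a workaround for the MySQL 5.1 optimizer.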
    Antonio Calatrava (11 rep)
    Jun 20, 2020, 08:17 AM • Last activity: Jun 13, 2025, 01:06 PM
    0 votes
    1 answer
    263 views
    Remote connecting to Asterisk mysql database from internet issue
I can connect to the Asterisk MySQL database (asteriskcdrdb) and everything is okay. The Asterisk server is located on my LAN, and I can connect to this database from my computer with a MySQL client application. Now I want to connect to this database from another server hosted on the internet. I set up port forwarding on the ADSL modem to reach this database, but I can't connect and get this error:
    [08S01]
    	Communications link failure
    
	The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    java.net.ConnectException: Connection timed out: connect.
I checked the port forwarding configuration and everything is okay, because I have another MySQL database on Windows that works this way. I also ran service iptables stop on the Asterisk server, but I still get the error. What should I do? Do you have any suggestions? Thanks
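Besides the port forward, two server-side settings commonly cause exactly this symptom (a sketch; the account name and password are illustrative): MySQL bound to the loopback interface only, and an account that is not allowed to connect from remote hosts.

```sql
-- 1) In /etc/my.cnf, [mysqld] section: remove or widen
--      bind-address = 127.0.0.1
--    then restart mysqld.

-- 2) Allow an account to connect from remote hosts
--    (narrow the '%' host pattern in production):
GRANT SELECT ON asteriskcdrdb.* TO 'cdruser'@'%' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
```

A quick differential test: if `mysql -h <lan-ip>` works from inside the LAN but the forwarded port times out from outside, the problem is bind-address or the modem, not the grants.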
    Ehsan Ali (101 rep)
    Dec 30, 2021, 01:38 PM • Last activity: May 22, 2025, 09:05 AM
    0 votes
    1 answer
    265 views
    merge multiple fields of csv file while running mysqlimport command
I have a MySQL table like this:

+---------+--------------+------+-----+---------+----------------+
| Field   | Type         | Null | Key | Default | Extra          |
+---------+--------------+------+-----+---------+----------------+
| id      | int(11)      | NO   | PRI | NULL    | auto_increment |
| roll_no | varchar(50)  | NO   | MUL | NULL    |                |
| marks   | varchar(100) | YES  |     | NULL    |                |
| academy | varchar(100) | YES  |     | NULL    |                |
+---------+--------------+------+-----+---------+----------------+

The marks column can have multiple numbers separated by commas, like this:

45,67,78,80,   # First Example
34,56,         # Second Example

I have to run mysqlimport such that the fields after the first field up to the second-to-last field are merged into the marks column. I tried something like this:

mysqlimport --fields-terminated-by=, --columns='roll_no, marks, academy' --ignore --local -u root -pxxxx result results.csv

But the extra fields (after the 3rd one) get truncated. How do I handle such scenarios where the CSV fields are dynamic?
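mysqlimport itself cannot collapse a variable number of fields, so a pre-processing pass is the usual answer. A sketch with awk, assuming the first field is roll_no, the last is academy, and everything between belongs in marks (file, database, and table names are illustrative):

```shell
# Turn "roll,45,67,78,80,academy" into tab-separated "roll<TAB>45,67,78,80<TAB>academy"
awk -F, '{
  marks = ""
  for (i = 2; i < NF; i++) marks = marks (i > 2 ? "," : "") $i
  print $1 "\t" marks "\t" $NF
}' results.csv > result.tsv

# Then import the now-fixed three columns, tab-delimited:
# mysqlimport --fields-terminated-by='\t' --columns='roll_no,marks,academy' \
#   --ignore --local -u root -p mydb result.tsv
```

Tab as the new delimiter avoids colliding with the commas that now live inside the marks value.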
    Anurag Sharma (101 rep)
    Apr 16, 2015, 07:34 AM • Last activity: May 21, 2025, 08:05 PM
    1 vote
    2 answers
    252 views
    Grabbing SQL Dump of Master DB Without Downtime
I'm curious whether downtime will be necessary to grab a SQL dump of my master database. Right now, I'm in the process of rebuilding my one slave. There is actually only one database from the master being replicated onto the slave, and all tables in that database are InnoDB. This is the command I want to run:

mysqldump --master-data --single-transaction --hex-blob dbname | gzip > dbname.sql.gz

I'm running MySQL 5.1 and here is a redacted version of my my.cnf file:

[mysqld]
default-storage-engine=InnoDB
character-set-server=UTF8
lower_case_table_names=1
transaction_isolation=READ-COMMITTED
wait_timeout=86400
interactive_timeout=3600
delayed_insert_timeout=10000
connect_timeout=100000
max_connections=750
max_connect_errors=1000
back_log=50
max_allowed_packet=1G
max_heap_table_size=64M
tmp_table_size=64M
bulk_insert_buffer_size=128M
innodb_buffer_pool_size=10000M
innodb_data_file_path=ibdata1:256M:autoextend
innodb_file_per_table=1
innodb_additional_mem_pool_size=32M
innodb_log_file_size=1G
innodb_log_buffer_size=8M
innodb_flush_method=O_DIRECT
innodb_lock_wait_timeout=240
innodb_flush_log_at_trx_commit=2
innodb_open_files=8192
innodb_support_xa=ON
thread_cache_size=500
expire_logs_days=2
server-id=1
log_bin=1
binlog_format=MIXED
sync_binlog=0

[mysqldump]
max_allowed_packet=128M

Am I good without downtime or not? I'm concerned about a possible read lock being placed on tables.
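For InnoDB-only data this option combination is designed to avoid long locks: the only global lock is brief, held just while the binlog coordinates are read. Conceptually, what --master-data with --single-transaction does at the start of the dump is roughly:

```sql
FLUSH TABLES WITH READ LOCK;            -- brief, for a consistent binlog position
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SHOW MASTER STATUS;
UNLOCK TABLES;                          -- lock released; dump reads the snapshot
```

The caveat is that any MyISAM or MEMORY tables elsewhere on the server are not protected by the snapshot, and a long-running DDL at the moment of FLUSH TABLES can make the brief lock less brief.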
    Jordan Parra (19 rep)
    Jul 14, 2016, 01:12 AM • Last activity: May 20, 2025, 12:06 AM
    0 votes
    2 answers
    318 views
    MySQL server 5.1 trigger problem
I have implemented a trigger, but once I try to update the table it gives me the error:

> Can't update table 'booking' in stored function/trigger because it is already used by the statement which invoked this stored function/trigger

This is the trigger:

DELIMITER //
CREATE TRIGGER CANCEL_BOOK AFTER UPDATE ON BOOKING
FOR EACH ROW
BEGIN
  DELETE FROM BOOKING WHERE NEW.F_ID=NULL AND NEW.H_ID=NULL;
END //
DELIMITER ;

Please help.
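MySQL forbids a trigger from writing to the very table that fired it, so the DELETE has to move out of the trigger. Note also that F_ID=NULL never matches anything; IS NULL is needed. One soft-delete workaround, assuming a spare flag column (cancelled here is hypothetical):

```sql
DELIMITER //
CREATE TRIGGER cancel_book BEFORE UPDATE ON booking
FOR EACH ROW
BEGIN
  IF NEW.f_id IS NULL AND NEW.h_id IS NULL THEN
    SET NEW.cancelled = 1;  -- flag the row instead of deleting it
  END IF;
END //
DELIMITER ;

-- Purge flagged rows later, outside any trigger on booking:
DELETE FROM booking WHERE cancelled = 1;
```

A BEFORE UPDATE trigger may modify NEW values of the current row, which is allowed; modifying other rows of the same table is what the error forbids.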
    pksv (9 rep)
    Dec 2, 2018, 10:45 AM • Last activity: May 15, 2025, 05:12 AM
    3 votes
    3 answers
    6057 views
    Can I find out what version of MySQL from the data files?
I have a very odd situation going on here. I had a Linux box running Ubuntu 8.10 and MySQL server with a Wordpress installation on it, hosting an internal blog for our company. The machine that this was running on has crapped out, but I was able to put the drive into an enclosure, gain access to the files, and recover the MySQL data folder. We are mostly a Windows company and I have very limited Linux experience. Before this machine crashed we were experimenting with moving this blog to a Windows virtual machine running PHP for IIS, and we have an older copy of the Wordpress database up and running on MySQL 5.1. I am currently trying to retrieve the latest data from the Linux MySQL data folder and import it into the current database, but I am having problems. Problems so far:

1) I do not know what version of MySQL was installed on the Linux box.
2) I see ib_logfile0, ib_logfile1, and ibdata1 in the Linux MySQL data folder, but am not sure if it was truly running InnoDB or not, as I also have .frm, .myd, and .myi files in the wordpress folder.

If I can figure out the version of MySQL on the Linux box, I might be able to create a virtual machine running Linux and MySQL so I can copy the data folder and then do an export dump; unless someone else knows of an easier way to get the data into my current database.
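A couple of low-risk places to look before building a VM (paths are the Ubuntu defaults; a sketch, not guaranteed for every install):

```
# If mysql_upgrade was ever run, the data directory records the version
cat /var/lib/mysql/mysql_upgrade_info

# The package manager log usually names the installed mysql-server package
grep -i 'mysql-server' /var/log/dpkg.log*
```

Ubuntu 8.10 shipped MySQL 5.0 by default. Also, the mix of ibdata1 with .frm/.MYD/.MYI files is normal rather than contradictory: the mysql system tables are MyISAM even when user tables are InnoDB, and a default Wordpress install of that era typically used MyISAM tables.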
    Nathan (31 rep)
    May 1, 2013, 06:48 PM • Last activity: May 1, 2025, 08:42 PM
    0 votes
    2 answers
    335 views
    Importing mysqlnd database to MySQL server
Does importing a mysqlnd (5.0.12) database dump into MySQL server (5.1.66) cause errors? We are currently performing this task and encountering errors like the following:
> ERROR 1146 (42S02): Table 'xx-xxx-xxx-xxx' doesn't exist
>
> ERROR 1273 (HY000): Unknown collation: 'utf8mb4_unicode_ci'
>
> ERROR 1115 (42000): Unknown character set: 'utf8mb4'
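The utf8mb4 character set only exists from MySQL 5.5 on, so a 5.1.66 server rejects both it and its collations; the "Table doesn't exist" error then follows from the CREATE TABLE statements that failed. A common workaround, at the cost of losing any 4-byte characters, is rewriting the dump before import (file names are illustrative):

```shell
# Downgrade utf8mb4 references in the dump to plain utf8
sed -e 's/utf8mb4_unicode_ci/utf8_unicode_ci/g' \
    -e 's/utf8mb4/utf8/g' dump.sql > dump-for-5.1.sql
```

Importing into a 5.5+ server instead avoids the data loss entirely, if that is an option.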
    Zakir HC (133 rep)
    Mar 8, 2016, 09:12 AM • Last activity: Apr 23, 2025, 01:06 AM
    2 votes
    1 answer
    1657 views
    MySQL innodb_row_lock_time is more than wall clock time
We recently started graphing the **innodb_row_lock_time** status variable in our monitoring system, and I noticed something I can't understand on a graph for a busy server: between 04:55 PM and 04:57 PM, the value of this counter changed from **488,101,000** to **488,504,000**. The MySQL documentation says:
> innodb_row_lock_time
> The total time spent in acquiring row locks, in milliseconds.
How is it possible that during those 2 minutes the server spent **403,000 ms** acquiring row locks? 2 minutes is only 120,000 ms. The only explanation that comes to mind is that this value is a sum over all threads, but I cannot find any documentation to confirm this theory. The server is MySQL 5.1.
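The guess in the question matches how the counter behaves in practice: it accumulates wait time across all sessions, so N sessions waiting concurrently can advance it up to N times faster than the wall clock. The related counters can be read together:

```sql
SHOW GLOBAL STATUS LIKE 'Innodb_row_lock%';
-- Innodb_row_lock_time divided by Innodb_row_lock_waits
-- gives the average wait per lock acquisition
```

So 403,000 ms over a 120,000 ms window simply means that, on average, a bit over three sessions were waiting on row locks at any moment.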
    bodgix (121 rep)
    Apr 28, 2018, 06:04 PM • Last activity: Apr 9, 2025, 08:09 PM
    1 vote
    1 answer
    831 views
    Replicating from MySQL 5.1 master to 5.6 slave failing because 'INSERT ... VALUES (NOW())' results in 'Error_code: 1062'
I am migrating away from some old MySQL 5.1 servers to some new MySQL 5.6 servers. During this process, I'm creating a new MySQL 5.6 slave from an existing MySQL 5.1 slave, using the procedure in the mysqldump reference guide. For example, if my MySQL 5.1 servers are named 'master1' and 'replica1' and I have a new MySQL 5.6 server named 'replica2', the following should make replica2 a second slave of 'master1':

replica2 % mysqldump --login-path=replica1 --all-databases --dump-slave --include-master-host-port --apply-slave-statements --lock-all-tables --add-drop-database > all.sql
replica2 % mysql < all.sql

And this seems to work well, but replication fails with the following error complaining about duplicate entries for the primary key:

2015-06-12 10:00:00 1234 [ERROR] Slave SQL: Worker 0 failed executing transaction '' at master log mysql-bin.009332, end_log_pos 12341234; Error 'Duplicate entry '8072' for key 'PRIMARY'' on query. Default database: 'DATABASE'. Query: 'INSERT INTO "Member" ("Created") VALUES (NOW())', Error_code: 1062

Can I assume that 'INSERT INTO "Member" ("Created") VALUES (NOW())' is triggering the error here? Can I get replication to work without skipping rows with SET GLOBAL sql_slave_skip_counter = 1;?

Some additional details:
- I'm using classic MySQL replication, and GTIDs are currently disabled.
- The MySQL 5.1 servers are using STATEMENT-based replication, but the new MySQL 5.6 servers are using ROW-based replication.
- I don't own the application code, and I cannot change the SQL.
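Before skipping anything, it can help to see what row the slave already has for the conflicting key, and, since the 5.6 slave applies events, whether idempotent mode clears the backlog. Both are sketches (the id column name is assumed from the error), and idempotent mode should be used with care because it can mask real divergence between master and slave:

```sql
-- The error names key value 8072; inspect it on the slave
SELECT * FROM Member WHERE id = 8072;

-- The slave can be told to ignore duplicate-key and row-not-found conflicts
STOP SLAVE;
SET GLOBAL slave_exec_mode = 'IDEMPOTENT';
START SLAVE;
```

If rows with that key keep appearing, the more likely root cause is that the dump was taken from a position behind the slave's actual state, so events are being replayed twice; re-seeding from a fresh dump with correct coordinates fixes that class of error permanently.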
    Stefan Lasiewski (197 rep)
    Jun 12, 2015, 07:47 PM • Last activity: Feb 17, 2025, 11:02 PM
    0 votes
    1 answer
    2666 views
    Why does max_connections in my.ini not match concurrent connections in Mysql instance config wizard?
MySQL 5.1.x | Windows Server 2003. Can someone please clarify why max_connections in my.ini is much larger than the manual value I specify in the Concurrent Connections dialogue? For example, if I set concurrent connections to 800 in the dialogue window, I see max_connections=1023 in my.ini. Why? Is this normal?
    Mike B (617 rep)
    Sep 16, 2013, 05:58 PM • Last activity: Jan 23, 2025, 02:05 PM
    1 vote
    1 answer
    5534 views
    The OLE DB provider "MSDASQL" for linked server "(null)" reported an error
I created a linked server to a MySQL database server on my SQL Server database server. I tested the connection and it succeeded for the MySQL connection. I can see the databases on the MySQL instance via my linked server, but when I try to run a query with OPENROWSET, it gives me the error message below. I have searched Google many times but couldn't solve my issue. What is the problem? Why do I get this message and how do I solve it?
> The OLE DB provider "MSDASQL" for linked server "(null)" reported an error. The provider reported an unexpected catastrophic failure.
> Msg 7330, Level 16, State 2, Line 25
> Cannot fetch a row from OLE DB provider "MSDASQL" for linked server "(null)".
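When ad-hoc OPENROWSET fails against MySQL via MSDASQL, the pass-through form against the already-defined linked server is usually the first thing to try, since it reuses the connection that already tested OK (linked server, database, and table names below are illustrative):

```sql
-- Pass-through query: MySQL executes the inner SELECT,
-- SQL Server only receives the result rows
SELECT *
FROM OPENQUERY(MYSQL_LINKED, 'SELECT id, name FROM mydb.mytable');
```

The "(null)" in the error also hints that the failing OPENROWSET call was using its own ad-hoc connection string rather than the linked server definition, so the two paths can behave differently even with the same ODBC driver.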
    DBA (69 rep)
    May 26, 2020, 08:35 PM • Last activity: Sep 12, 2024, 05:01 PM
    2 votes
    4 answers
    11824 views
    MySQL: How to avoid case sensitivity in SQL queries when migrate from Windows to Linux?
I have migrated my database from a Windows to a Linux server. In many places, queries were written without attention to case (uppercase/lowercase), so these queries now fail with errors, and functionality based on them breaks. Below is an example query run on Windows and Linux:

Select A.id,A.title from tablename a;

**This query works on Windows but never on Linux.**
- Is there any way to avoid this case sensitivity in MySQL queries, without checking and fixing every query?
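The behavior difference comes from lower_case_table_names, which defaults to 1 on Windows and 0 on Linux. Setting it on the Linux server avoids touching every query, but it only works safely when the table names stored on disk are all lowercase (which they will be if the data came from a Windows server with the default setting):

```ini
# my.cnf on the Linux server
[mysqld]
lower_case_table_names = 1
```

A restart is required, and the value should not be changed back and forth after tables exist, since names are stored on disk according to the setting in force at creation time.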
    ursitesion (2061 rep)
    May 8, 2014, 05:23 AM • Last activity: Jul 7, 2024, 01:23 PM
    1 vote
    3 answers
    453 views
    Need help rewriting view to avoid using subqueries in MySQL 5.1 db
I am working on creating a view in MySQL for an application that tracks form submissions. There is one particular table that I cannot figure out how to write a query for. It has foreign-key relationships with a submissions table and a table which keeps track of fields in all the pdfs used by the application. A basic overview of the table structure and application is as follows:

1. The frm_form_attribute_data table (the table I am struggling with) stores information relating to a form submission and the data from a specific field on that form.
2. Each entry in the frm_form_attribute_data table has a unique id, the id of a submission, the id of a form field from a pdf, and the data that was filled in.
   - An example of the attribute id: if a pdf has a "name" field, then it might have an attribute_id of 1 associated with it that will always be the "name" field on that specific pdf.
3. So for every 1 form submission, there will be multiple entries in the data table, since for a given form there might be ~20-30 different, distinct fields.
4. The fields are differentiated with an attribute_id.

The issue I am having is that I need to create a view which aggregates data based on each submission, and I am struggling to write queries that don't use subqueries to achieve this. The version of MySQL (5.1) that I am working with does not allow subqueries when creating views. Each row in the view should correspond to 1 unique submission_id and contain all the attribute_ids and their data as columns. So an entry for the first submission will have the submission_id, and attributes 1 through x (where x is the number of fields in that specific pdf/form) as column data. Instead of showing the attribute_id, the columns will just use the common name for that attribute (email, name, etc). I can successfully build a query which does this with the following:

SELECT submission_id, attribute_id, is_reporting, email
FROM (
    SELECT form_submission_id, attribute_id, response AS is_reporting
    FROM frm_form_attribute_data
    WHERE attribute_id = 382
) isReporting
INNER JOIN (
    SELECT form_submission_id, attribute_id, response AS email
    FROM frm_form_attribute_data
    WHERE attribute_id = 385
) email ON isReporting.form_submission_id = email.form_submission_id

Each INNER JOIN grabs the data for a specific attribute using a subquery and appends it as a column in the "view". However, subqueries in the FROM and INNER JOIN clauses are not allowed in view definitions in versions of MySQL < 5.7. I saw as an alternative that I can create a view for each subquery, but some of the forms have 30+ fields and I would have to create a separate view for each one.

Is there a way to structure the query without using subqueries? Or will I need to create all the smaller views corresponding to a specific attribute_id? Please bear in mind that I am unable to change the version of MySQL we are running :( There are migrations in progress but that is outside the scope for my role and team.

EXAMPLES of tables:

Table: frm_form_attribute_data

| id | submission_id | attribute_id | response       |
| -- | ------------- | ------------ | -------------- |
| 1  | 20            | 1            | yes            |
| 2  | 20            | 2            | test@email.com |
| 3  | 21            | 1            | no             |
| 4  | 21            | 2            | NULL           |

Table: Desired View

| id | submission_id | isReporting | email          |
| -- | ------------- | ----------- | -------------- |
| 1  | 20            | yes         | test@email.com |
| 2  | 21            | no          | NULL           |
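A pivot via conditional aggregation avoids FROM-clause subqueries entirely and is valid in a MySQL 5.1 view. A sketch using the attribute ids from the question (the view name is hypothetical, and the question uses both submission_id and form_submission_id, so the column name should be adjusted to match the real schema):

```sql
CREATE VIEW submission_summary AS
SELECT form_submission_id,
       MAX(CASE WHEN attribute_id = 382 THEN response END) AS is_reporting,
       MAX(CASE WHEN attribute_id = 385 THEN response END) AS email
FROM frm_form_attribute_data
GROUP BY form_submission_id;
```

One MAX(CASE ...) line per attribute scales to forms with 30+ fields without any helper views, and MAX over a single matching row per group simply returns that row's response.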
    horsepotatoe (19 rep)
    Jun 4, 2024, 09:06 PM • Last activity: Jun 5, 2024, 09:05 PM
    0 votes
    2 answers
    3275 views
    How to replicate particular table in MySQL master-slave configuration?
How do I replicate a particular table in a MySQL master-slave configuration? I have tried replicate-do-table and replicate-wild-do-table in the my.ini file; is there any alternative to replicate without using replicate-do-table and replicate-wild-do-table? I need a binary log statement to replicate a single table.
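For reference, per-table filtering is a replica-side feature: the master's binlog filters (binlog-do-db / binlog-ignore-db) work only at database granularity, so there is no binary-log setting that limits logging to a single table. A minimal replica-side sketch (database and table names are illustrative):

```ini
# my.ini on the slave, [mysqld] section
[mysqld]
replicate-do-table = mydb.important_table
# or, pattern form:
# replicate-wild-do-table = mydb.important%
```

If only one table's data matters, putting it in its own database and using binlog-do-db on the master is the closest master-side equivalent.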
    KIran Kumar Buddala (1 rep)
    May 31, 2019, 10:10 AM • Last activity: May 31, 2023, 12:12 PM
    0 votes
    1 answer
    837 views
    Can I roll back only MySQL?
    I took all database backups before starting patching. We want to update MySQL 5.6.30 to 5.6.32. If we want to roll back to 5.6.30 due to some issues, can we do that? Would we have to roll back the entire server? Or is there any way just to roll back MySQL?
    jnoz (1 rep)
    Dec 7, 2016, 09:24 PM • Last activity: Mar 5, 2023, 11:16 AM
    6 votes
    2 answers
    10876 views
    MySQL ERROR 1137 (HY000) at line 9: Can't reopen table: 'temp_table'
I'm receiving an error ERROR 1137 (HY000) at line 9: Can't reopen table: 'temp_table' when executing a query similar to the following:

USE database_name;

CREATE TEMPORARY TABLE temp_table (status varchar(20));

INSERT INTO temp_table (status)
SELECT status FROM client_contractor_status WHERE type = 'process';

SELECT table1.col1, table1.status,
       (SELECT COUNT(*)
        FROM table2
        RIGHT OUTER JOIN temp_table ON table2.status = temp_table.status
        WHERE table2.col1 = table1.col1) AS counter
FROM table1
RIGHT OUTER JOIN temp_table ON table1.status = temp_table.status

I'm (just about) aware of the limitation that says you can't access a temporary table by two different aliases, however I haven't aliased it at all. Is it being aliased automagically because it's in a sub-query? And if so, how can this be fixed? Thanks.
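The reopen happens because the statement references temp_table twice, once in the outer join and once in the correlated subquery, and a TEMPORARY table cannot be opened more than once within a single statement in MySQL. The standard workaround is a second copy:

```sql
-- Duplicate the temp table so each reference opens a different one
CREATE TEMPORARY TABLE temp_table2 AS SELECT * FROM temp_table;

-- Then use temp_table in the outer query
-- and temp_table2 inside the correlated subquery.
```

Aliases are irrelevant here; the limitation is on opening the same temporary table twice, aliased or not. A plain (non-temporary) work table avoids the restriction too, at the cost of visibility to other sessions.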
    Sean Airey (235 rep)
    Jun 26, 2013, 09:59 AM • Last activity: Feb 2, 2023, 07:37 AM