Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
2819
views
MariaDB why do I get error "Unknown table x in field list" when it does exist (yes, another one!)
*Using mariadb 10.4.13-MariaDB but I get the same error in a copy of the database running under 10.3.*
The important part of the schema shows
/*Database: mamlocal*/
----------------------
/*Table Information*/
---------------------
Name Engine Version Row_format Rows Avg_row_length Data_length Max_data_length Index_length Data_free Auto_increment Create_time Update_time Check_time Collation Checksum Create_options Comment Max_index_length Temporary
------------------------------------ ------ ------- ---------- ------ -------------- ----------- --------------- ------------ --------- -------------- ------------------- ------------------- ---------- ----------------- -------- -------------- ----------------------------------------------------------- ------------------ -----------
...
member InnoDB 10 Dynamic 706 162 114688 0 49152 0 4528 2020-08-17 14:48:55 2021-04-06 10:27:53 (NULL) utf8_general_ci (NULL) 0 N
...
and the table member is created using...
CREATE TABLE member (
member_id int(11) NOT NULL AUTO_INCREMENT,
member_title varchar(10) DEFAULT NULL,
member_fn_1 varchar(50) DEFAULT NULL,
member_sn varchar(50) DEFAULT NULL,
... other fields
PRIMARY KEY (member_id)
)
These statements each produce the expected results.
SELECT * FROM information_schema.tables WHERE table_schema = 'mamlocal' AND table_name = 'member';
SELECT * FROM information_schema.tables WHERE table_name = 'member';
SELECT member_id FROM member
SELECT member_id FROM mamlocal.member
but these produce
Error Code: 1109 Unknown table 'member' in field list
UPDATE member SET member_id = member_id ;
UPDATE mamlocal.member SET member_id = member_id
*(The real update does something useful but I simplified it to try to find out where the error was)*
Can anyone help me identify why?
I have looked at these other questions but still cannot see what is wrong:
Qn1
Qn2
Qn3
Here is an SQLfiddle showing the same code working correctly in MySQL, as I would expect. I have used similar code in many other places to update table member without problems, and using the same code to update another table works OK.
**More experiments showing that table member is there.**
This code...
INSERT INTO member () VALUES ();
UPDATE member SET member_id = member_id ;
...
gives me the message
2 queries executed, 1 success, 1 errors, 0 warnings
Query: insert into member () values ()
1 row(s) affected
Execution Time : 0.002 sec
Transfer Time : 0.001 sec
Total Time : 0.004 sec
-----------------------------------------------------------
Query: UPDATE member SET member_id = member_id
Error Code: 1109
Unknown table 'member' in field list
Execution Time : 0 sec
Transfer Time : 0 sec
Total Time : 0.001 sec
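For what it's worth, error 1109 on UPDATE, while SELECT and INSERT on the same table succeed, is often caused by an UPDATE trigger on the table that references a table which no longer exists. A hedged diagnostic sketch (nothing in the question confirms a trigger is present):

```sql
-- List any triggers attached to mamlocal.member; an UPDATE trigger whose
-- body references a dropped or renamed table would make UPDATE fail with
-- 1109 while SELECT and INSERT still succeed.
SELECT trigger_name, event_manipulation, action_statement
FROM   information_schema.triggers
WHERE  event_object_schema = 'mamlocal'
  AND  event_object_table  = 'member';
```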
user2834566
(209 rep)
Apr 6, 2021, 10:20 AM
• Last activity: Jul 31, 2025, 01:08 PM
2
votes
2
answers
3593
views
How to update primary key table based on foreign key with ON DUPLICATE KEY UPDATE
I have a users table with autoincrement `id` as primary key, and a unique key on `email`. The `email` and `name` fields are from Facebook.
+----+-------+------+
| id | email | name |
+----+-------+------+
And a facebook users table,
+--------+----------+
| userId | fbUserId |
+--------+----------+
The `userId` in the facebook table will be a foreign-key value, referencing `id` in the `users` table. The `fbUserId` is guaranteed to be unique by Facebook. The reason I split these tables up is that I plan to have more social logins in the future, and I'm trying to make my schema future-proof. Each additional table would also have a `userId`, linking it back to `users`.
Right now my insert/update query is this:
BEGIN;
INSERT IGNORE INTO users
(id,email,name)
VALUES (0,?,?);
INSERT IGNORE INTO users_facebook
(userId, fbUserId)
VALUES (LAST_INSERT_ID(), ?);
COMMIT
From what I understand, `ON DUPLICATE KEY UPDATE` only applies to a single row in a single table. What I'd like to do is: if `email` or `name` is different for an existing `fbUserId`, update the changed fields. How would I do this?
Also, any thoughts on my current query? I'm quite new to MySQL.
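A common single-statement pattern for the users side is an upsert that also captures the existing id; a sketch under the schema described above (the `LAST_INSERT_ID(expr)` form makes a later `LAST_INSERT_ID()` return the existing row's id when the duplicate-key branch fires):

```sql
BEGIN;
INSERT INTO users (email, name)
VALUES (?, ?)
ON DUPLICATE KEY UPDATE
    name = VALUES(name),          -- refresh the facebook-supplied name
    id   = LAST_INSERT_ID(id);    -- expose the existing id to the next statement
INSERT IGNORE INTO users_facebook (userId, fbUserId)
VALUES (LAST_INSERT_ID(), ?);
COMMIT;
```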
Eric Guan
(151 rep)
May 31, 2017, 08:41 PM
• Last activity: Jul 30, 2025, 08:07 PM
0
votes
1
answers
3405
views
PostgreSQL update column in one table with value from another, based on key from first table
Sorry about the title, I wasn't sure how best to word this. My situation is this:
I have two tables:
paper_author [paper_id, author_id, author_name]
authors [author_id, author_name]
Right now, the `author_name` column in `paper_author` is `NULL`. I would like to select the author name from the `authors` table and insert it into the `author_name` column in the `paper_author` table. I've written the following query:
UPDATE paper_author pa
SET author_name = (SELECT author_name FROM authors a WHERE a.author_id = pa.author_id);
Which I think will do what I want, but it is taking a very long time to run (days). For reference, the table `paper_author` has ~900M rows and the table `authors` has ~200M rows. `author_id` is the primary key in the `authors` table, and `author_id` has an index in the table `paper_author`. The rows in `paper_author` are not unique on `author_id` (i.e. each `author_id` may appear multiple times, associated with different papers).
Is there a more efficient way to write this? Have I missed something?
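One alternative worth trying is PostgreSQL's `UPDATE ... FROM` join form, which lets the planner choose a hash or merge join instead of running the correlated subquery once per row; a sketch using the tables above:

```sql
-- Set-based form of the same update: the planner can pick an efficient
-- join strategy for the 900M-to-200M match instead of 900M index probes.
UPDATE paper_author pa
SET    author_name = a.author_name
FROM   authors a
WHERE  a.author_id = pa.author_id;
```

On a table this size it may still pay to batch the update by ranges of `paper_id`, to keep individual transactions and table bloat bounded.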
Gabriel Getzie
(1 rep)
Oct 6, 2020, 02:07 AM
• Last activity: Jul 30, 2025, 12:04 AM
0
votes
1
answers
509
views
Simple Update Join much slower than it should be (MYSQL)
This is a simple Update Join that updates only about 100 rows:
Update A INNER JOIN B using(id) SET A.active = 1 WHERE A.date > '2020'
This takes about 30 seconds to run, despite the fact that:
- This query updates the same 100 rows and takes milliseconds to run:
Update A SET active = 1 WHERE date > '2020'
- The join condition is fast; this query does the same join and takes less than a second:
SELECT * FROM A INNER JOIN B using(id) WHERE A.date > '2020'
- The field `active` is not part of any index.
- Table A has an index on (id, date), and table B has an index on id.
I tried putting the WHERE condition in the join (`ON ... AND A.date > '2020'`) but it didn't help. I'm absolutely stumped as to why this takes so long. Any help is appreciated.
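If table B is only used as a filter here (none of its columns are read), rewriting the join as a semi-join sometimes sidesteps a poor update plan; a hedged sketch against the tables described above:

```sql
-- Semi-join form: B rows only gate which A rows get updated, so EXISTS
-- expresses the intent directly and cannot multiply-match A rows.
UPDATE A
SET    active = 1
WHERE  A.date > '2020'
  AND  EXISTS (SELECT 1 FROM B WHERE B.id = A.id);
```

Comparing `EXPLAIN` output for the join and EXISTS forms would show whether MySQL chose a different (and cheaper) access path.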
Tod
(1 rep)
Sep 1, 2020, 05:00 AM
• Last activity: Jul 28, 2025, 07:01 AM
0
votes
1
answers
158
views
Is UPDATE FROM still not in the ANSI standard?
[This article](https://sqlserverfast.com/blog/hugo/2008/03/lets-deprecate-update-from/) fascinated me: it claims that `FROM` clauses in `UPDATE` are not part of the ANSI standard. Unfortunately, it is very old, and I don't care to buy a copy of the latest ANSI SQL standard. Is it still the case that `FROM` clauses in `UPDATE` are not part of the ANSI standard?
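For reference, the standard's own way to express "update one table from another" is `MERGE`, added in SQL:2003; a sketch with hypothetical table and column names:

```sql
-- Standard-conforming alternative to the vendor-specific UPDATE ... FROM
-- (supported by SQL Server, Oracle, DB2, and PostgreSQL 15+, but not MySQL):
MERGE INTO target t
USING source s
   ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET val = s.val;
```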
J. Mini
(1237 rep)
Nov 11, 2023, 09:57 PM
• Last activity: Jul 21, 2025, 11:02 AM
1
votes
2
answers
143
views
How to leave only the rows whose timestamp difference is greater than a parameter
I have a Postgres table with timestamps as entries.
From this table, I would like to calculate a new one where there are no consecutive entries with a timestamp difference shorter than 400 milliseconds. So in the case of the image, from the first 10 rows I would keep only rows [1,5,9].
I tried with joins, but I realised I would need the updated table before calculating the ON clause of later rows, because I would need to know which rows had already been deleted.
Edit:
I tried the following join to at least have an idea of the tokens I would like to delete:
Here, I see that the next token after the first one which follows the condition, is the 5th. So I would like to delete 2,3,4. Then, the next which is 400 ms delayed from the 5th is the 9th, so I would like to delete 6,7,8.
Thanks in advance

select distinct on (s.token) s.token as token1, s.timestamp as tm1, s2.token as token2, s2.timestamp as tm2
from temporal.samples s
join temporal.samples s2
on s2.timestamp > s.timestamp + interval '400000 microseconds'
Giving this result:

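One way to express "keep a row only if it comes at least 400 ms after the last kept row" without pre-deleting anything is a recursive CTE that walks the table in time order. A sketch (PostgreSQL), assuming the table is `temporal.samples(token, "timestamp")` as in the query above:

```sql
WITH RECURSIVE kept AS (
    (SELECT token, "timestamp"            -- seed: the earliest row is always kept
     FROM   temporal.samples
     ORDER  BY "timestamp"
     LIMIT  1)
  UNION ALL
    SELECT nxt.token, nxt."timestamp"
    FROM   kept k
    CROSS  JOIN LATERAL (                 -- first row > 400 ms after the last kept one
        SELECT token, "timestamp"
        FROM   temporal.samples
        WHERE  "timestamp" > k."timestamp" + interval '400 milliseconds'
        ORDER  BY "timestamp"
        LIMIT  1
    ) nxt
)
SELECT * FROM kept;
```

Because each step picks the next row relative to the previously *kept* row, this avoids the problem described above of needing to know which rows were already deleted.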
eddie_jung
(11 rep)
Jun 10, 2023, 10:56 AM
• Last activity: Jul 20, 2025, 08:04 PM
2
votes
1
answers
161
views
find 16,000 words and replace with ##
I need to do a find-and-replace, but rather than finding one word to replace with another, I have about 16k words. As an example: table 1 has two words, apple and orange; table 2 has the contents of a book. I want to replace all occurrences of the words apple and orange in table 2 with ##. Is there a query that does this? My actual table 1 has about 16k words and table 2 has 8 million words. Most of the examples on Google suggest I type the words to find, but that is not possible when I have 16k words. I am using MySQL Workbench and am an absolute beginner.
(from comment) table 1 has 16k names in a column - table 2 has emails and text in columns - i want to be able to censor out all the names in table 2 using the names in table one as a reference
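A hedged sketch of one MySQL approach, using hypothetical names `words(word)` for the name list and `book(content)` for the text. A multi-table UPDATE updates each matched row only once per statement, so the statement has to be re-run until it reports 0 rows affected:

```sql
-- Each pass replaces all occurrences of one matching word per row;
-- repeat the statement until ROW_COUNT() = 0 to apply every word.
UPDATE book b
JOIN   words w
  ON   b.content LIKE CONCAT('%', w.word, '%')
SET    b.content = REPLACE(b.content, w.word, '##');
```

Note that plain `LIKE '%word%'` also matches substrings inside longer words; word-boundary matching would need `REGEXP` or application-side processing.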
MH731Z
(21 rep)
Dec 15, 2018, 09:33 PM
• Last activity: Jul 17, 2025, 12:06 PM
1
votes
1
answers
1672
views
UPDATE random rows
I'm working on creating a more efficient way to UPDATE random rows in a ticket allocation website we have.
Currently, we pre-populate the tickets table with ticket numbers in sequential order, i.e. 1, 2, 3, etc. Each ticket number is a row.
When a customer then places their order we use the following SQL query to reserve their random tickets.
UPDATE tickets
SET order_item_id = X
WHERE lottery_id = X
AND order_item_id IS NULL
ORDER BY RAND()
LIMIT n
To begin with, the query above worked fine, but as the table has grown we're now experiencing performance issues.
The concept I have in mind is to change the population of the tickets table so that it is populated in a random order, and then order by the primary id key in the tickets table instead, like so:
UPDATE tickets
SET order_item_id = X
WHERE lottery_id = X
AND order_item_id IS NULL
ORDER BY id
LIMIT n
My question is, how much more efficient is this method compared to the RAND() function or is there any better ways of doing what I'm trying to achieve?
**UPDATE**
Because the ticket numbers are already pre-populated in the tickets table at random when a competition is launched, I'm thinking the below could be a better solution:
UPDATE tickets
SET order_item_id = X
WHERE lottery_id = '12102'
AND order_item_id IS NULL
AND id IN (SELECT id
FROM tickets
WHERE lottery_id = '12102'
AND order_item_id IS NULL
)
ORDER BY id
LIMIT 3
Example of tickets table after tickets have been pre populated.
| id | ticket_number | order_item_id
| --- | --- |---|
| 163895 | 1 | NULL
| 163896 | 500 | NULL
| 163897 | 1221 | NULL
| 163898 | 20 | NULL
| 163899 | 403 | NULL
| 163900 | 8650 | NULL
| 163901 | 2456 | NULL
Ideally, it would be good if the first person who ordered 1 ticket got ticket number 1, and then if the next customer orders 2 tickets, they get 500 and 1221.
**ANOTHER UPDATE**
After some experimenting I found this query to work quite well, but would welcome some thoughts!
UPDATE tickets
SET order_item_id = '120'
WHERE lottery_id = '12094'
AND order_item_id IS NULL
AND id >= (SELECT RAND())
LIMIT 3
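Note that `RAND()` by itself returns a value in [0, 1), so `id >= (SELECT RAND())` matches essentially every free row and the LIMIT just takes the first ones. A hedged sketch of picking a genuinely random pivot id instead (the extra derived table is needed because MySQL otherwise forbids referencing the update target in a subquery):

```sql
UPDATE tickets
SET    order_item_id = 120
WHERE  lottery_id = 12094
  AND  order_item_id IS NULL
  AND  id >= (SELECT pivot
              FROM (SELECT MIN(id) + FLOOR(RAND() * (MAX(id) - MIN(id))) AS pivot
                    FROM   tickets
                    WHERE  lottery_id = 12094
                      AND  order_item_id IS NULL) t)
LIMIT  3;
```

If the pivot lands near the top of the id range, fewer than 3 rows may match, so the application should check the affected-row count and retry.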
Shaun Lippitt
(11 rep)
Mar 4, 2021, 01:16 PM
• Last activity: Jul 11, 2025, 06:01 PM
0
votes
1
answers
180
views
Is it more efficient to do SELECT and compare in Java or DELETE and INSERT
I am hitting a **REST API** to get data from a service. I transform this data and store it in a database. I will have to do this on some interval, every 15 minutes, and then make sure this database has the latest information.
I am doing this in a Java program. I am wondering if it would be better, after I have queried all the data, to do:
1. **SELECT** statements and compare against the transformed data, then do **UPDATEs** (**DELETE** all records associated with what changed and **INSERT** new ones)
**OR**
2. **DELETE ALL** and **INSERT ALL** every time.
Option 1 has the potential for far fewer write transactions, but it guarantees a SELECT on all records because we are comparing, and probably not many UPDATEs since I don't expect the data to change much. Its downside is doing comparisons on all records to detect a change.
I am planning on doing this using Spring Boot, the JPA layer, and possibly Postgres.
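A third option in PostgreSQL is a set-based upsert of the fresh snapshot, which writes only the rows that actually changed; a sketch with hypothetical tables `items(id, payload)` and a staging table holding the transformed API data:

```sql
-- Upsert the snapshot, skipping rows whose data is unchanged:
INSERT INTO items (id, payload)
SELECT id, payload FROM staging_items
ON CONFLICT (id) DO UPDATE
SET    payload = EXCLUDED.payload
WHERE  items.payload IS DISTINCT FROM EXCLUDED.payload;

-- Remove rows the API no longer returns:
DELETE FROM items i
WHERE  NOT EXISTS (SELECT 1 FROM staging_items s WHERE s.id = i.id);
```

This pushes the comparison into the database, so the Java side only has to bulk-load the staging table each interval.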
user137760
(1 rep)
Nov 1, 2017, 06:02 PM
• Last activity: Jul 10, 2025, 01:16 PM
0
votes
1
answers
88
views
Conditional update
I would like to create a conditional update on an audit table, such that if the update on the main table succeeds, then the update on the audit table is performed.
Right now I have this
CREATE DEFINER=`root`@`localhost` PROCEDURE `updateParent`(
    IN inId INT(10),
    IN inName varchar(75),
    IN inEmail varchar(50),
    IN inPhone varchar(8),
    IN inUsername varchar(50)
)
BEGIN
    SET AUTOCOMMIT = 0;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    START TRANSACTION;
    UPDATE parent
    SET    nameOfPerson = inName,
           email        = inEmail,
           phonenr      = inPhone
    WHERE  id = inId;
    COMMIT;
    SET @rowcount = ROW_COUNT();
    IF @rowcount > 0 THEN
        INSERT INTO parent_h
            (parent_id, nameOfPerson, email, phonenr, opts, event, username)
        SELECT id, nameOfPerson, email, phonenr, ts, "U", inUsername
        FROM   parent
        WHERE  id = inId;
    END IF;
    COMMIT;
END
Occasionally the update runs but zero rows are changed (no data differed); in that case I do not want the audit-table insert to be performed.
Are there any disadvantages to doing it this way, and are there any better ways? This is a web application.
UPDATE:
I tried with an AFTER UPDATE trigger. It seems like MySQL runs the trigger even if zero rows are changed on the table. That means I need to test each field to determine whether any actual changes were made.
CREATE DEFINER=`root`@`localhost` TRIGGER `parent_AFTER_UPDATE`
AFTER UPDATE ON `parent`
FOR EACH ROW
BEGIN
    IF (   OLD.id           <> NEW.id
        OR OLD.nameOfPerson <> NEW.nameOfPerson
        OR OLD.email        <> NEW.email
        OR OLD.phonenr      <> NEW.phonenr
       )
    THEN
        INSERT INTO parent_h
            (parent_id, nameOfPerson, email, phonenr, opts, event, username)
        VALUES
            (NEW.id, NEW.nameOfPerson, NEW.email, NEW.phonenr, NEW.ts, 'U', inUsername);
    END IF;
END
An additional issue is that I need to add the userid of the principal performing the update, which is not available in the table update.
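Two details worth noting for the trigger route. First, `<>` is never true when either side is NULL, so nullable columns are safer compared with MySQL's NULL-safe operator `<=>`. Second, the acting user can be passed into the trigger through a session variable set by the application before the UPDATE. A sketch (the variable name `@audit_user` is hypothetical):

```sql
-- In the application or procedure, before the UPDATE:
SET @audit_user = 'alice';

-- In the trigger body: NULL-safe change detection plus the session variable.
IF NOT (OLD.nameOfPerson <=> NEW.nameOfPerson)
   OR NOT (OLD.email     <=> NEW.email)
   OR NOT (OLD.phonenr   <=> NEW.phonenr) THEN
    INSERT INTO parent_h
        (parent_id, nameOfPerson, email, phonenr, opts, event, username)
    VALUES
        (NEW.id, NEW.nameOfPerson, NEW.email, NEW.phonenr, NEW.ts, 'U', @audit_user);
END IF;
```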
tcelvis
(1 rep)
Jun 16, 2024, 05:28 PM
• Last activity: Jul 7, 2025, 07:12 AM
0
votes
2
answers
162
views
Update a table with data from another table
I have a WordPress database in which I want to take the user_email from **Table2** and import it into contact_email in **Table1**, matching on user_login. user_login and user_id hold the same value. Nothing I have tried has worked. Any thoughts?
Table1
-----------------------------------
user_id|contact_email |contact_name
=======================================
123 |test@test.com |deft
124 |test3@test.com|deft3
Table2 (User table)
-----------------------------------
user_login|user_email    |display_name
=======================================
123       |test@test.com |deft
124       |test3@test.com|deft3
I have tried:
UPDATE Table1
SET contact_email = (SELECT Table2.user_email
                     FROM Table2
                     WHERE Table2.user_login = user_id)
WHERE EXISTS (SELECT Table2.user_email
              FROM Table2
              WHERE Table2.user_login = user_id);
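The correlated-subquery form should work as long as the unqualified `user_id` resolves to `Table1.user_id`; MySQL's multi-table UPDATE expresses the same thing more directly. A sketch with the tables shown:

```sql
-- Join the two tables on the shared key and copy the email across;
-- rows of Table1 with no match in Table2 are simply left untouched.
UPDATE Table1 t1
JOIN   Table2 t2 ON t2.user_login = t1.user_id
SET    t1.contact_email = t2.user_email;
```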
Deft Tech
(1 rep)
Mar 30, 2018, 05:36 PM
• Last activity: Jul 7, 2025, 07:00 AM
1
votes
1
answers
175
views
An SQL update query works fine unless it is invoked by an SSRS subscription. Why?
A stored procedure that updates data in production data tables was written by a very talented staff member (with developer-only rights) and works fine when executed in SSMS. However, the same SP does not run when it is invoked by an SSRS subscription. The subscription was created by and is owned by a manager with System Administrator rights. If this problem is caused by SSRS not recognizing the proper credentials (whatever they may be), how can we change the subscription so that it works? If it's not caused by incorrect authentication or authorization, what might the cause be?
Thank you for your responses; they are very much appreciated.
The "very talented" reference is a (not-so-obvious) joke and self-reference. SSMS is version 2014, the SQL DB server is 2005, and the SSRS server is...2003.
We use the subscription to perform a weekly update to customer records, changing/increasing their credit limit and terms if they are in good standing.
The SP and report work fine when invoked directly by either the author or the manager (administrator). But the subscription, created by the manager, does not perform any of the updates in the report's SP.
So the question might be, "will a subscription created by the developer work?"
Or, "does the authentication to the report's data-source need to be changed?"
Or, "how can the SP be changed to permit its invocation from the SSRS subscription?"
Or, "if the subscription is using the credentials of the SSRS service, how do we change it?"
Rocket J
(11 rep)
Mar 27, 2017, 04:24 PM
• Last activity: Jul 3, 2025, 08:00 PM
0
votes
1
answers
782
views
How to optimize UPDATE with a nested SELECT subquery?
I wrote a complicated UPDATE query, and it works, but it looks menacing. Here's what I'm trying to do:
In each topic user 'Bob123' posted anonymously. When you post anonymously in a topic, you get a unique anonymous index for that topic.
Say I want to merge two topics together. Bob123 has a different anon index in both topics, so his unique anon index wouldn't be unique. I only have two pieces of data to work with: $topic_id, the topic id you are merging into, and $post_id_list, all the post ids that got merged over.
I want to update all anonymous_index entries per each distinct poster_id's post in that topic. This anonymous_index needs to be the original index they had in the topic before the other topic was merged into it.
The first SELECT query first selects the anon indices of the moved posts.
The outer SELECT query gets the first non-merged post's anon index (if it is > 0) of those merged posters in the topic, and selects a merged anon index from the first query.
Then, I update it. Wherever the anon index of those posters in that topic doesn't equal the old index, I update it.
Is there something simple that I'm missing here? I don't like the fact that I have a subquery in a subquery.
At first I was using HAVING MIN(anonymous_index) <> MAX(anonymous_index)
along with AND post_id NOT IN ($merged_post_list)
to select the poster id list that needed to be updated and an unmerged anon index, but it returned 0 rows. If the merged post comes BEFORE all original posts (and has a larger anon index), then the minimum anon index matches the maximum index for that poster. So making another subquery fixed this...
$merged_post_list = implode(',', $post_id_list);
...
UPDATE " . POSTS_TABLE . " AS p
INNER JOIN ( SELECT p.post_id, p.anonymous_index AS old_index,
merged.poster_id, merged.anonymous_index AS new_index
FROM " . POSTS_TABLE . " AS p,
( SELECT poster_id, anonymous_index
FROM " . POSTS_TABLE . "
WHERE post_id IN ($merged_post_list)
AND topic_id = $topic_id
AND anonymous_index > 0
) AS merged
WHERE p.post_id NOT IN ($merged_post_list)
AND p.topic_id = $topic_id
AND p.anonymous_index > 0
AND p.poster_id = merged.poster_id
GROUP BY merged.poster_id
) AS postdata
SET p.anonymous_index = postdata.old_index
WHERE p.topic_id = $topic_id
AND anonymous_index > 0
AND anonymous_index <> postdata.old_index
AND p.poster_id = postdata.poster_id
post_id is the primary key; poster_id and topic_id are also indexed.
Here's some sample behavior:
Before merge:
| post_id | poster_id | anonymous_index |
| 11 | 3 | 2 |
| 12 | 22 | 1 |
| 14 | 22 | 1 |
| 15 | 3 | 2 |
After merge:
| post_id | poster_id | anonymous_index |
| 10 | 22 | 4 |
| 11 | 3 | 2 |
| 12 | 22 | 1 |
| 13 | 3 | 4 |
| 14 | 22 | 1 |
| 15 | 3 | 2 |
| 16 | 22 | 4 |
After UPDATE (the above query):
| post_id | poster_id | anonymous_index |
| 10 | 22 | 1 |
| 11 | 3 | 2 |
| 12 | 22 | 1 |
| 13 | 3 | 2 |
| 14 | 22 | 1 |
| 15 | 3 | 2 |
| 16 | 22 | 1 |
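A possible simplification, hedged: since every non-merged post a poster has in the topic carries the same original index, that index can be derived once per poster in a single derived table and joined back (the literal ids below stand in for $topic_id and $merged_post_list):

```sql
-- One derived table instead of nested subqueries: MIN() over the
-- poster's non-merged posts in the topic is their original index.
UPDATE phpbb_posts p
JOIN  (SELECT poster_id, MIN(anonymous_index) AS old_index
       FROM   phpbb_posts
       WHERE  topic_id = 123                 -- $topic_id
         AND  post_id NOT IN (10, 13, 16)    -- $merged_post_list
         AND  anonymous_index > 0
       GROUP  BY poster_id) o
  ON  o.poster_id = p.poster_id
SET   p.anonymous_index = o.old_index
WHERE p.topic_id = 123
  AND p.anonymous_index > 0;
```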
EDIT: I've made the following index and an alternative SELECT query to avoid having two subqueries. How would these fare?
(topic_id, poster_id, anonymous_index, post_id)
SELECT merged.poster_id, merged.anonymous_index AS new_index,
old.post_id, old.anonymous_index AS old_index
FROM " . POSTS_TABLE . " AS merged,
" . POSTS_TABLE . " AS old
WHERE merged.post_id IN ($post_list)
AND merged.anonymous_index > 0
AND merged.anonymous_index <> old.anonymous_index
AND old.topic_id = $topic_id
AND old.post_id NOT IN ($post_list)
AND old.anonymous_index > 0
AND old.poster_id = merged.poster_id
GROUP BY merged.poster_id
ORDER BY NULL
EDIT AGAIN:
Here is my schema:
-- Table structure for table `phpbb_posts`
--
CREATE TABLE `phpbb_posts` (
  `post_id` int(10) UNSIGNED NOT NULL,
  `topic_id` int(10) UNSIGNED NOT NULL DEFAULT '0',
  `forum_id` mediumint(8) UNSIGNED NOT NULL DEFAULT '0',
  `poster_id` int(10) UNSIGNED NOT NULL DEFAULT '0',
  `icon_id` mediumint(8) UNSIGNED NOT NULL DEFAULT '0',
  `poster_ip` varchar(40) COLLATE utf8_bin NOT NULL DEFAULT '',
  `post_time` int(11) UNSIGNED NOT NULL DEFAULT '0',
  `post_reported` tinyint(1) UNSIGNED NOT NULL DEFAULT '0',
  `enable_bbcode` tinyint(1) UNSIGNED NOT NULL DEFAULT '1',
  `enable_smilies` tinyint(1) UNSIGNED NOT NULL DEFAULT '1',
  `enable_magic_url` tinyint(1) UNSIGNED NOT NULL DEFAULT '1',
  `enable_sig` tinyint(1) UNSIGNED NOT NULL DEFAULT '1',
  `post_username` varchar(255) COLLATE utf8_bin NOT NULL DEFAULT '',
  `post_subject` varchar(255) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `post_text` mediumtext COLLATE utf8_bin NOT NULL,
  `post_checksum` varchar(32) COLLATE utf8_bin NOT NULL DEFAULT '',
  `post_attachment` tinyint(1) UNSIGNED NOT NULL DEFAULT '0',
  `bbcode_bitfield` varchar(255) COLLATE utf8_bin NOT NULL DEFAULT '',
  `bbcode_uid` varchar(8) COLLATE utf8_bin NOT NULL DEFAULT '',
  `post_postcount` tinyint(1) UNSIGNED NOT NULL DEFAULT '1',
  `post_edit_time` int(11) UNSIGNED NOT NULL DEFAULT '0',
  `post_edit_reason` varchar(255) COLLATE utf8_bin NOT NULL DEFAULT '',
  `post_edit_user` int(10) UNSIGNED NOT NULL DEFAULT '0',
  `post_edit_count` smallint(4) UNSIGNED NOT NULL DEFAULT '0',
  `post_edit_locked` tinyint(1) UNSIGNED NOT NULL DEFAULT '0',
  `post_visibility` tinyint(3) NOT NULL DEFAULT '0',
  `post_delete_time` int(11) UNSIGNED NOT NULL DEFAULT '0',
  `post_delete_reason` varchar(255) COLLATE utf8_bin NOT NULL DEFAULT '',
  `post_delete_user` int(10) UNSIGNED NOT NULL DEFAULT '0',
  `sfs_reported` tinyint(1) UNSIGNED NOT NULL DEFAULT '0',
  `parent_id` int(10) UNSIGNED DEFAULT '0',
  `post_depth` int(3) UNSIGNED NOT NULL DEFAULT '0',
  `is_anonymous` tinyint(1) UNSIGNED NOT NULL DEFAULT '0',
  `anonymous_index` mediumint(8) UNSIGNED NOT NULL DEFAULT '0'
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
--
-- Indexes for dumped tables
--
--
-- Indexes for table phpbb_posts
--
ALTER TABLE `phpbb_posts`
  ADD PRIMARY KEY (`post_id`),
  ADD KEY `forum_id` (`forum_id`),
  ADD KEY `topic_id` (`topic_id`),
  ADD KEY `poster_ip` (`poster_ip`),
  ADD KEY `poster_id` (`poster_id`),
  ADD KEY `post_username` (`post_username`),
  ADD KEY `tid_post_time` (`topic_id`,`post_time`),
  ADD KEY `post_visibility` (`post_visibility`),
  ADD KEY `parent_id` (`parent_id`);
--
-- AUTO_INCREMENT for dumped tables
--
--
-- AUTO_INCREMENT for table phpbb_posts
--
ALTER TABLE `phpbb_posts`
  MODIFY `post_id` int(10) UNSIGNED NOT NULL AUTO_INCREMENT;
COMMIT;
Thrash Tech
(1 rep)
Jan 16, 2019, 01:49 AM
• Last activity: Jul 3, 2025, 10:04 AM
2
votes
1
answers
179
views
Regularly updating table with huge amount of rows
I have a table containing (in addition to some metadata) a score and a boolean flag: CREATE TABLE scores ( score_id SERIAL NOT NULL CONSTRAINT score_pkey PRIMARY KEY, user_id INTEGER NOT NULL, score DOUBLE PRECISION NOT NULL, flag BOOLEAN NOT NULL ); There are multiple indexes for querying the data:...
I have a table containing (in addition to some metadata) a score and a boolean flag:
CREATE TABLE scores
(
score_id SERIAL NOT NULL
CONSTRAINT score_pkey
PRIMARY KEY,
user_id INTEGER NOT NULL,
score DOUBLE PRECISION NOT NULL,
flag BOOLEAN NOT NULL
);
There are multiple indexes for querying the data:
CREATE UNIQUE INDEX score_pkey ON scores (score_id);
CREATE INDEX ix_scores_user_id ON scores (user_id);
CREATE INDEX ix_scores_score_desc ON scores (score desc);
CREATE INDEX ix_scores_flag ON scores (score_id) WHERE (flag = true);
The table currently contains around 120 million rows and, for most of them, the `score` value is updated once a day. The `flag` column may be toggled (only from the default value `false` to `true`, never back to `false`) at any point of the day, independently of the score updates.
Scores are generally only updated for some users (around half to one third currently) and only where the flag is still `false`.
The scores are calculated on a worker machine and updated in batches (ca. 3000 at a time) in a single update query. There is no way to calculate the data on the database server. The query looks like this:
UPDATE scores as s
SET
score = tmp.score
FROM (
VALUES (32373477, 0.5566874822853446), (32373478, 0.5243741268418393)
) AS tmp(id, score)
WHERE tmp.id = s.score_id
The huge number of updates causes some issues with our database: the updates require a lot of disk I/O (index updates and row rewrites) and increase the queue depth, thus slowing down other queries.
My primary goal is to reduce the I/O writes, speed up the update process and reduce load on the database.
What are my options here?
It is possible for me to move the `flag` column to another table and do a complete rewrite of the `scores` table. Might this be more performant than updating only a selected subset of rows (~one third is updated each day)?
Alternatively, is writing all changed rows to a separate, index-less table and then updating the primary table in-database a preferable solution?
Does anybody have some experience with and solutions for a similar kind of problem they are able to share?
The database server is a m4.large (2 cores, 8 GiB memory, 250 GiB storage, 750 IOPS) PostgreSQL 9.6.6 instance.
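One commonly suggested variant, sketched under the schema above: bulk-load each batch into a temporary table and apply it with a single join, skipping rows whose score did not change. This reduces round-trips and avoids rewriting unchanged rows; whether it cuts I/O enough on 750 IOPS depends on the workload:

```sql
-- Staging table lives only for the session; COPY is the fastest loader.
CREATE TEMP TABLE score_batch (
    score_id int PRIMARY KEY,
    score    double precision
);
COPY score_batch FROM STDIN;          -- bulk-load the day's scores

UPDATE scores s
SET    score = b.score
FROM   score_batch b
WHERE  b.score_id = s.score_id
  AND  s.score IS DISTINCT FROM b.score;   -- avoid no-op row rewrites
```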
Birne94
(371 rep)
Nov 1, 2018, 10:46 AM
• Last activity: Jul 1, 2025, 10:07 PM
2
votes
2
answers
40486
views
Data too long for column MYSQL Error
I've been poring over threads regarding this issue, but ours seems unique in that it's not an issue with the query but with the row itself; or at least I couldn't find a similar topic addressing how to fix the row.
A simple query like this:
> update customer set customer_name = 'Health Net of CA' where customer_id = '15484';
Results in:
ERROR 1406 (22001): Data too long for column 'customer_name' at row 1
Checking the character length:
mysql> select char_length(customer_name) from customer where customer_id = '15484';
+----------------------------+
| char_length(customer_name) |
+----------------------------+
| 54 |
+----------------------------+
1 row in set (0.00 sec)
Describe shows:
| customer_name | varchar(255) | YES | | NULL | |
This database was populated using an import. I'm fairly sure strict mode was on, but I didn't handle the import myself so I can't say for certain. This table has 39 columns, mostly varchar(50) or tinyint, so it shouldn't be an issue with the row being too big.
Any suggestions on fixing these bad rows?
Update:
SHOW CREATE TABLE customer;

CREATE TABLE `customer` (
  `customer_id` int(11) NOT NULL AUTO_INCREMENT,
  `customer_name` varchar(255) DEFAULT NULL,
  `customer_code` varchar(12) DEFAULT NULL,
  `customer_type` text,
  `bill_type` text,
  `attention` varchar(100) DEFAULT NULL COMMENT 'Attention to who we deal biz with',
  `address_id` int(11) DEFAULT NULL,
  `b_bad_debt` tinyint(1) DEFAULT '0',
  `b_fee_approval` tinyint(1) DEFAULT NULL COMMENT 'boolean flag for Fee Approval 1=set, 0=unset',
  `approval_amount` decimal(5,2) DEFAULT NULL,
  `notification` varchar(45) DEFAULT NULL COMMENT 'notified customer by email / fax or ftp',
  `b_tax_exempt` tinyint(1) DEFAULT '0' COMMENT 'Tax Exempt Flag',
  `sales_tax_number` varchar(20) DEFAULT NULL COMMENT 'sales tax/ permit no.',
  `b_prepay` tinyint(1) DEFAULT NULL,
  `create_date` datetime DEFAULT NULL,
  `last_updated` timestamp NULL DEFAULT NULL,
  `active` tinyint(4) DEFAULT '1',
  `created_by` varchar(45) DEFAULT NULL,
  `updated_by` varchar(45) DEFAULT NULL,
  `email` varchar(45) DEFAULT NULL,
  `email_extra` mediumtext COMMENT 'extra emails as per creation',
  `state_tax_code` varchar(2) DEFAULT NULL COMMENT 'this is as CA if CA state and refer to TAX table',
  `email_verified_by` varchar(45) DEFAULT 'NA',
  `fax_verified_by` varchar(45) DEFAULT 'NA',
  `b_always_send` tinyint(1) DEFAULT '0' COMMENT 'there is customer that we need always send',
  `b_project_customer` tinyint(1) DEFAULT '0',
  `b_exception_list` tinyint(1) DEFAULT '0',
  `b_has_inslist` tinyint(1) DEFAULT '0',
  `customer_passwrd` varchar(255) DEFAULT NULL,
  `delivery_opt` varchar(45) DEFAULT NULL,
  `max_fax_pages` int(11) DEFAULT NULL,
  `phone` varchar(20) DEFAULT NULL,
  `fax` varchar(20) DEFAULT NULL,
  `phone_extra` mediumtext,
  `phone_second` varchar(20) DEFAULT NULL,
  `b_multi_suites` tinyint(1) DEFAULT '0',
  `suite_list` varchar(255) DEFAULT NULL,
  `b_portal_download` tinyint(1) DEFAULT '0',
  `b_no_download` tinyint(1) DEFAULT '0',
  PRIMARY KEY (`customer_id`),
  KEY `customer_idx` (`customer_code`,`customer_name`)
) ENGINE=InnoDB AUTO_INCREMENT=18870 DEFAULT CHARSET=utf8
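Since the column is a utf8 varchar(255) and MySQL's VARCHAR(n) limit counts characters rather than bytes, a 54-character name should fit comfortably. One way to hunt for rows whose values genuinely exceed their declared widths is a client-side scan over the fetched rows; this is only a sketch, and the column/limit map passed in is illustrative, not derived from the question.

```python
def find_oversized(rows, limits):
    # Diagnostic sketch: rows are dicts as fetched from the customer table,
    # limits maps column name -> declared VARCHAR width in characters.
    # Returns (customer_id, column, length) for each value over its width.
    bad = []
    for row in rows:
        for col, max_chars in limits.items():
            value = row.get(col)
            if isinstance(value, str) and len(value) > max_chars:
                bad.append((row.get('customer_id'), col, len(value)))
    return bad

sample = [{'customer_id': 15484,
           'customer_name': 'Health Net of CA',
           'customer_code': 'X' * 20}]
oversized = find_oversized(sample, {'customer_name': 255, 'customer_code': 12})
```

If nothing turns up oversized, the next suspect is usually the session `sql_mode` (strict vs. non-strict) differing between the import and the failing connection.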
RyanH
(153 rep)
Jul 29, 2016, 05:09 PM
• Last activity: Jun 26, 2025, 07:40 AM
2
votes
1
answers
1658
views
Trigger on 2 tables : issue with INSERT and UPDATE
This is related to my previous question: https://dba.stackexchange.com/questions/213913/trigger-to-cascade-updates-to-a-second-table. In my Postgres 9.6 DB, I had 2 triggers on `emprise` (formerly `table1`), supposed to insert or update some fields in `metadonnees` (formerly `table2`). Merged like t...
This is related to my previous question:
https://dba.stackexchange.com/questions/213913/trigger-to-cascade-updates-to-a-second-table .
In my Postgres 9.6 DB, I had 2 triggers on emprise (formerly table1), supposed to insert or update some fields in metadonnees (formerly table2). They are now merged like this:
CREATE OR REPLACE FUNCTION activite.tbl_emprise_metadonnees()
RETURNS trigger AS
$BODY$
begin
IF (TG_OP = 'INSERT') THEN
insert into activite.metadonnees (gid, date_publication, contributor, code_tranche)
VALUES (new.gid, current_date, current_user, NEW.code_tranche)
;
return new ;
elseif (TG_OP = 'DELETE') THEN
DELETE from activite.metadonnees
where gid = old.gid ;
return old ;
elsif (TG_OP = 'UPDATE') THEN
UPDATE activite.metadonnees
SET (contributor, code_tranche) = (current_user, NEW.code_tranche)
where metadonnees.gid = new.gid
;
return new ;
end if ;
end;
$BODY$
LANGUAGE plpgsql VOLATILE COST 100;
CREATE TRIGGER trg_tbl_emprise_metadonnees
AFTER INSERT OR DELETE OR UPDATE
ON activite.emprise
FOR EACH ROW
EXECUTE PROCEDURE activite.tbl_emprise_metadonnees();
Table definitions:
CREATE TABLE activite.emprise (
  gid integer NOT NULL PRIMARY KEY DEFAULT nextval('activite.emprise_gid_seq'::regclass),
  surface numeric(9,2),
  annee integer,
  ro character varying(50),
  numope character varying(12),
  typope character varying(25),
  geom geometry(MultiPolygon,2154),
  typemp character varying(30),
  nomope character varying(80),
  numoa character varying(20),
  numprescr character varying(25),
  tranche integer DEFAULT 1,
  code_tranche character varying(15),
  producteur character varying(15),
  notice_rapport character varying(50)
);

CREATE TABLE activite.metadonnees (
  gid integer NOT NULL PRIMARY KEY,
  date_publication date,
  "date_création" date,
  "généalogie" character varying(250),
  "résolution_spatiale" character varying(5),
  responsable character varying(10),
  restrictions character varying(100),
  source character varying(15),
  creator character varying(50),
  publisher character varying(80),
  identifier character varying(50),
  title character varying(80),
  subject character varying,
  code_tranche character varying(15),
  contributor character varying,
  dates character varying,
  type_donnees character,
  format character varying,
  language character varying,
  coverage character varying
);
With the AFTER INSERT OR DELETE OR UPDATE trigger, when a record is inserted into emprise, the field contributor is overwritten with the current_user value for all records.

If the trigger is limited to AFTER INSERT OR DELETE, then insertion behaves as desired: contributor is set only for the record concerned in emprise, not replaced for all records.

In both cases, contributor and code_tranche are updated if, after insertion, the code_tranche value is changed in emprise. I conclude that UPDATE is the cause of this dysfunction, but I am unable to find why.
Leehan
(205 rep)
Aug 2, 2018, 09:01 PM
• Last activity: Jun 24, 2025, 08:04 AM
0
votes
1
answers
225
views
MySQL update with inner join taking too long
I have the following two tables CREATE TABLE `tbl_products_temp` ( `cd_id` INT(6) UNSIGNED ZEROFILL NOT NULL AUTO_INCREMENT, `cd_structure` MEDIUMBLOB NOT NULL, `cd_smiles` TEXT NULL DEFAULT NULL COLLATE 'latin1_swedish_ci', `cd_formula` VARCHAR(100) NULL DEFAULT NULL COLLATE 'latin1_swedish_ci', `c...
I have the following two tables
CREATE TABLE `tbl_products_temp` (
  `cd_id` INT(6) UNSIGNED ZEROFILL NOT NULL AUTO_INCREMENT,
  `cd_structure` MEDIUMBLOB NOT NULL,
  `cd_smiles` TEXT NULL DEFAULT NULL COLLATE 'latin1_swedish_ci',
  `cd_formula` VARCHAR(100) NULL DEFAULT NULL COLLATE 'latin1_swedish_ci',
  `cd_molweight` DOUBLE NULL DEFAULT NULL,
  `cd_hash` INT(11) NOT NULL,
  `cd_timestamp` DATETIME NOT NULL,
  `cd_pre_calculated` TINYINT(1) NOT NULL DEFAULT '0',
  `iupac_name` VARCHAR(600) NULL DEFAULT NULL COLLATE 'latin1_swedish_ci',
  `price_currency` VARCHAR(3) NULL DEFAULT NULL COLLATE 'latin1_swedish_ci',
  `price1` DECIMAL(10,2) NULL DEFAULT NULL,
  `price2` DECIMAL(10,2) NULL DEFAULT NULL,
  `price3` DECIMAL(10,2) NULL DEFAULT NULL,
  `last_update` TIMESTAMP NOT NULL DEFAULT current_timestamp() ON UPDATE current_timestamp(),
  `deleted_dt` DATETIME NULL DEFAULT NULL,
  PRIMARY KEY (`cd_id`) USING BTREE,
  INDEX `tbl_products_temp_hx` (`cd_hash`) USING BTREE,
  INDEX `tbl_products_temp_px` (`cd_pre_calculated`) USING BTREE,
  INDEX `idx_iupac_name` (`iupac_name`) USING BTREE,
  INDEX `idx_formula` (`cd_formula`) USING BTREE,
  INDEX `idx_molweight` (`cd_molweight`) USING BTREE,
  INDEX `idx_smiles` (`cd_smiles`(100)) USING BTREE
)
COLLATE='latin1_swedish_ci'
ENGINE=MyISAM
CREATE TABLE `tbl_products_temp2` (
  `cd_id` INT(6) UNSIGNED ZEROFILL NOT NULL AUTO_INCREMENT,
  `cd_structure` MEDIUMBLOB NOT NULL,
  `cd_smiles` TEXT NULL DEFAULT NULL COLLATE 'latin1_swedish_ci',
  `cd_formula` VARCHAR(100) NULL DEFAULT NULL COLLATE 'latin1_swedish_ci',
  `cd_molweight` DOUBLE NULL DEFAULT NULL,
  `cd_hash` INT(11) NOT NULL,
  `cd_timestamp` DATETIME NOT NULL,
  `cd_pre_calculated` TINYINT(1) NOT NULL DEFAULT '0',
  `iupac_name` VARCHAR(600) NULL DEFAULT NULL COLLATE 'latin1_swedish_ci',
  `price_currency` VARCHAR(3) NULL DEFAULT NULL COLLATE 'latin1_swedish_ci',
  `price1` DECIMAL(10,2) NULL DEFAULT NULL,
  `price2` DECIMAL(10,2) NULL DEFAULT NULL,
  `price3` DECIMAL(10,2) NULL DEFAULT NULL,
  `last_update` TIMESTAMP NOT NULL DEFAULT current_timestamp() ON UPDATE current_timestamp(),
  `deleted_dt` DATETIME NULL DEFAULT NULL,
  PRIMARY KEY (`cd_id`) USING BTREE,
  INDEX `tbl_products_temp_hx` (`cd_hash`) USING BTREE,
  INDEX `tbl_products_temp_px` (`cd_pre_calculated`) USING BTREE,
  INDEX `idx_iupac_name` (`iupac_name`) USING BTREE,
  INDEX `idx_formula` (`cd_formula`) USING BTREE,
  INDEX `idx_molweight` (`cd_molweight`) USING BTREE,
  INDEX `idx_smiles` (`cd_smiles`(100)) USING BTREE
)
COLLATE='latin1_swedish_ci'
ENGINE=MyISAM
Table tbl_products_temp has 241633 rows.
Table tbl_products_temp2 has 218380 rows.
I'm trying the following update query, but it's taking forever.
UPDATE tbl_products_temp
INNER JOIN tbl_products_temp2
ON tbl_products_temp.iupac_name = tbl_products_temp2.iupac_name
SET tbl_products_temp.price1 = tbl_products_temp2.price1,
tbl_products_temp.price2 = tbl_products_temp2.price2,
tbl_products_temp.price3 = tbl_products_temp2.price3,
tbl_products_temp.price_currency = 'EUR'
When I run explain using the following I get:
explain select * from tbl_products_temp
inner join tbl_products_temp2
on tbl_products_temp.iupac_name = tbl_products_temp2.iupac_name
id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
--------------------------------------------------------------------------------------------------------------------------------------------
1 | SIMPLE | tbl_products_temp2 | ALL | idx_iupac_name | (NULL) | (NULL) | (NULL) | 218380 |
1 | SIMPLE | tbl_products_temp | ref | idx_iupac_name | idx_iupac_name | 603 | mydb.tbl_products_temp2.iupac_name | 1 | Using where
How can I speed up this update query?
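Logically, the update-join above is just a hash lookup keyed on iupac_name. A minimal in-memory model of what the statement does per row (the row dicts here are hypothetical stand-ins for the two tables) can help reason about where the time goes:

```python
def apply_price_update(temp1_rows, temp2_rows):
    # In-memory equivalent of UPDATE ... INNER JOIN ON iupac_name:
    # build a lookup keyed on iupac_name, then copy the prices across.
    prices = {r['iupac_name']: (r['price1'], r['price2'], r['price3'])
              for r in temp2_rows}
    for row in temp1_rows:
        match = prices.get(row['iupac_name'])
        if match is not None:
            row['price1'], row['price2'], row['price3'] = match
            row['price_currency'] = 'EUR'
    return temp1_rows

t1 = [{'iupac_name': 'ethanol', 'price1': None, 'price2': None,
       'price3': None, 'price_currency': None}]
t2 = [{'iupac_name': 'ethanol', 'price1': 9.5, 'price2': 8.0, 'price3': 7.5}]
updated = apply_price_update(t1, t2)
```

The per-row work is trivial, so with ~240k rows the cost on the server side is dominated by comparing 600-character keys and rewriting each matched row, which is where tuning effort (key length, batching) would likely pay off.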
adam78
(155 rep)
Nov 7, 2022, 05:19 PM
• Last activity: Jun 16, 2025, 03:06 PM
0
votes
1
answers
206
views
Postgres preventing update on "events" table but insert "warning" event
I'm facing a bit of a problem on some restrictions I'm trying to implement on my postgresql database. My dilemma is as follows: I have an event table that logs basically everything that happens on my app, it can't be updated or deleted directly from the app (only inserts work), but I also need to pr...
I'm facing a bit of a problem on some restrictions I'm trying to implement on my postgresql database.
My dilemma is as follows: I have an events table that logs basically everything that happens in my app. It can't be updated or deleted directly from the app (only inserts work), but I also need to prevent anyone from updating or deleting it with a manual query (when connecting from DBeaver, for example). I know I can do this by revoking permissions on the table. The problem is that, at the same time, I need to insert into that table an event recording that someone tried to manually update it, and once I revoke the permissions that becomes impossible.
I created a trigger as follows; the restriction works, but it's not inserting anything into my table. Could anyone help me out?
--Create event on event table update attempt
CREATE OR REPLACE FUNCTION protect_events_on_update() RETURNS TRIGGER AS
$BODY$
DECLARE
username VARCHAR;
BEGIN
-- Get special variable values
SELECT current_user INTO username;
INSERT INTO events (uuid,description) VALUES (someUUID, username || 'tried to modify the table');
RETURN NEW;
END;
$BODY$
language plpgsql;
CREATE TRIGGER protect_events_on_update_trigg BEFORE UPDATE ON events FOR EACH row EXECUTE PROCEDURE protect_events_on_update();
REVOKE ALL PRIVILEGES ON TABLE events FROM user;
GRANT INSERT ON TABLE events TO user;
GRANT SELECT ON TABLE events TO user;
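For reference, the intended guard behavior can be modelled outside the database. In PostgreSQL, a BEFORE row trigger that returns NULL silently skips the row operation, which is what the "log the attempt, then block it" pattern relies on; this sketch mirrors that (the names are illustrative):

```python
def protect_events_on_update(events_log, user):
    # Mirrors the intended BEFORE UPDATE trigger: record the attempt, then
    # cancel the change. Returning None plays the role of RETURN NULL, which
    # in a PostgreSQL BEFORE row trigger suppresses the operation.
    events_log.append(f'{user} tried to modify the table')
    return None  # the update itself is blocked

log = []
result = protect_events_on_update(log, 'intruder')
```

Note that the trigger in the question returns NEW, so even when it fires, the update it is meant to prevent would still go through.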
Omaruchan
(101 rep)
Sep 14, 2021, 11:58 PM
• Last activity: Jun 14, 2025, 10:03 PM
3
votes
1
answers
21077
views
MySQL - Truncated incorrect datetime value on update statement
I have an application which produces log files. These log files include a `Timestamp` field in the format `2015-08-25T09:35:01 UTC`. (there will be approximately 60 logs produced at 0.25 GB per day). I need to import these log files into MySQL for analysis. But I have a problem converting the `Times...
I have an application which produces log files. These log files include a Timestamp field in the format 2015-08-25T09:35:01 UTC.

(There will be approximately 60 logs produced at 0.25 GB per day.)

I need to import these log files into MySQL for analysis, but I have a problem converting the Timestamp to DateTime.
example:
CREATE TABLE test1 (
  `TIMESTAMP` varchar(25) DEFAULT NULL,
  `EVENT_TIME` datetime DEFAULT NULL
);

INSERT INTO test1 (`TIMESTAMP`)
VALUES
  ('2015-08-25T09:35:01 UTC'),
  ('2015-08-25T09:36:01 UTC'),
  ('2015-08-25T09:37:01 UTC'),
  ('2015-08-25T09:38:01 UTC'),
  ('2015-08-25T09:39:01 UTC');
So far so good. I can now run a SELECT query to get the datetime:

SELECT CAST(`TIMESTAMP` AS datetime) FROM test1;

But if I try to update the table with the datetime format, I get an error:

UPDATE test1 SET `EVENT_TIME` = CAST(`TIMESTAMP` AS datetime);

Error Code: 1292. Truncated incorrect datetime value: '2015-08-25T09:35:01 UTC'

Is there a way to do this? I really need the datetime field in the database so that I don't have to do the CAST every time I run a query.
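The conversion the CAST is choking on can be done explicitly by stripping the literal ' UTC' suffix before parsing. A client-side sketch (e.g. for a preprocessing script before import) looks like this; the server-side equivalent in MySQL is STR_TO_DATE with a format string whose literal characters match the suffix:

```python
from datetime import datetime

def parse_log_timestamp(raw: str) -> datetime:
    # Drop the literal ' UTC' suffix, then parse the ISO-like remainder.
    return datetime.strptime(raw.removesuffix(' UTC'), '%Y-%m-%dT%H:%M:%S')

event_time = parse_log_timestamp('2015-08-25T09:35:01 UTC')
```

The same idea in SQL would be along the lines of STR_TO_DATE(`TIMESTAMP`, '%Y-%m-%dT%H:%i:%S UTC'), since non-specifier characters in the format string are matched literally.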
I was also looking to partition the table by date, as there will be a lot of data produced, and so I only want to keep the minimum amount of data, and then drop the oldest partitions once I am done.
IGGt
(2276 rep)
Sep 1, 2015, 09:57 AM
• Last activity: Jun 13, 2025, 12:03 PM
0
votes
1
answers
235
views
SQL SERVER: Header Blocker Across Multiple Databases with wait type CXCONSUMER
We have an instance of SQL Server which has multiple databases. A process in one database seems to be blocking a process in another database. When I look in activity monitor I can see a header blocker (A one in the header blocker column). This seems to be blocking other processes in different databa...
We have an instance of SQL Server with multiple databases. A process in one database seems to be blocking a process in another database. When I look in Activity Monitor I can see a header blocker (a 1 in the header blocker column), which seems to be blocking other processes in different databases; I can see their IDs in the Blocked By column when I select one from the drop-down. Am I correct that this is cross-database blocking? I didn't think that was possible. They are all running exactly the same stored procedure, but each database has its own copy, and they only do updates and inserts within their own databases.
e.g.
UPDATE SCA
SET SCA.date_last_read_only = TDR.date_seen
FROM [dbo].[SINGLE_CERT_ACC] SCA
INNER JOIN #TMP_DELTA_READONLY TDR
ON SCA.id = TDR.id
SET @RecsUpdated = @RecsUpdated + @@ROWCOUNT
SQLMIKE
(437 rep)
Feb 14, 2020, 05:04 PM
• Last activity: Jun 7, 2025, 01:00 PM
Showing page 1 of 20 total questions