Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
2
answers
150
views
Postgres trigger check if insert affected column
I have a table where some columns have to be "protected", meaning they should not be written to by most users. I want to enforce this with a trigger function (there is additional logic behind the decision, not relevant here).
My problem is that I am not able to find out whether a user **attempted to insert into** the **protected column**. My first idea was to check the value of the `NEW` variable. The problem: columns can have default values of all kinds of types, which would be included in `NEW`, so a simple check for `NULL` (which could also be set by the user to overwrite a default value) will not do the trick.
What I would like to avoid:
- encoding the default value of the column into the code of the trigger function, as this reduces maintainability; since I'm automatically generating the code for the trigger function and the column could have any data type, the code generation could get pretty complex.
- querying `INFORMATION_SCHEMA` to load the default of the column; doing this for every insert seems pretty wasteful to me.
One possible solution that I see:
- making an `IMMUTABLE` function that accesses `INFORMATION_SCHEMA` to load the default value for the column. As far as I understand, Postgres should only make the query once, not for every insert, since the result of the function call is cached.
Are there any better solutions? Is there any way to directly check whether an `INSERT` affected a specific column?
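For reference, a minimal sketch of the `IMMUTABLE` lookup idea described above (table and column names are hypothetical). One caveat: PostgreSQL keeps no general result cache for `IMMUTABLE` functions; it only folds the call to a constant at plan time when the arguments are constants, which happens to be the case here:

```sql
-- Hypothetical sketch: wrap the catalog lookup in an IMMUTABLE function so the
-- planner can fold it to a constant instead of querying on every insert.
CREATE FUNCTION column_default_of(p_table text, p_column text)
RETURNS text
LANGUAGE sql IMMUTABLE AS
$$
    SELECT column_default
    FROM information_schema.columns
    WHERE table_name = p_table
      AND column_name = p_column;
$$;

-- Inside the trigger function (protected_col is a placeholder name):
-- IF NEW.protected_col::text IS DISTINCT FROM column_default_of('mytable', 'protected_col') THEN
--     RAISE EXCEPTION 'writes to protected_col are not allowed';
-- END IF;
```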
J. Dietz
(1 rep)
Aug 15, 2024, 11:13 AM
• Last activity: Jul 24, 2025, 06:17 PM
0
votes
2
answers
429
views
Trigger to enforce unique constraint on text column (Geometry) to resolve INSERT conflicts
I have a simple table with a geometry column like so:
```sql
CREATE TABLE geomtable
(
    gid        serial,
    geom       geometry,
    trip_count integer
);
```
I want a unique constraint on the geometry column, such that if the same geometry is inserted again, the `ON CONFLICT` clause should add the new trip_count to the existing row.
So my insert query looks something like this:
```sql
INSERT INTO geomtable (geom, trip_count) VALUES (, 123)
ON CONFLICT ON CONSTRAINT
DO UPDATE SET trip_count = geomtable.trip_count + EXCLUDED.trip_count;
```
I tried the approach to enforcing an equality constraint as described here, like so:
```sql
CREATE FUNCTION equality_constraint_func(
    id INT,
    gm GEOMETRY
)
RETURNS boolean AS
$$
SELECT NOT EXISTS (
    SELECT 1
    FROM geomtable AS a
    WHERE a.gid <> id
      AND a.geom && gm
      AND ST_Equals(a.geom, gm)
);
$$
LANGUAGE sql;
```
And adding a `CHECK` constraint like so:

```sql
ALTER TABLE geomtable
    ADD CONSTRAINT equality_constraint
    CHECK (equality_constraint_func(gid, geom));
```
**However, this does not help me perform the DO UPDATE part of conflict resolution.** It just raises an error on a duplicate geometry. I of course can't create a primary key on my geom column, as I get the `index row requires 42632 bytes, maximum size is 8191` error; I get a similar error when creating a unique index on the geom column. I tried creating a unique constraint using the above syntax by replacing CHECK with UNIQUE, but I get a syntax error. How would I go about writing a trigger to enforce the unique constraint?
Specifically, how do I handle the conflict case? Let the insert happen as usual and then perform a delete, etc.?
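One commonly suggested workaround (an assumption worth verifying, not from the question): since the geometry itself is too large to index uniquely, put the unique index on a digest of the geometry, which is small, and target that expression in `ON CONFLICT`. Note this enforces byte-level equality of the stored geometry, which is stricter than `ST_Equals`:

```sql
-- Hypothetical sketch: enforce uniqueness on an md5 digest of the geometry bytes.
CREATE UNIQUE INDEX geomtable_geom_hash_key
    ON geomtable (md5(ST_AsBinary(geom)));

INSERT INTO geomtable (geom, trip_count)
VALUES ('SRID=4326;POINT(1 2)'::geometry, 123)
ON CONFLICT (md5(ST_AsBinary(geom)))
DO UPDATE SET trip_count = geomtable.trip_count + EXCLUDED.trip_count;
```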
My version information is as under:
```
PostgreSQL 12.4 (Ubuntu 12.4-1.pgdg20.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.3.0-10ubuntu2) 9.3.0, 64-bit
POSTGIS="3.0.1 ec2a9aa" [EXTENSION] PGSQL="120" GEOS="3.8.0-CAPI-1.13.1 " PROJ="6.3.1" LIBXML="2.9.10" LIBJSON="0.13.1" LIBPROTOBUF="1.3.3" WAGYU="0.4.3 (Internal)"
```
Chintan Pathak
(143 rep)
Aug 23, 2020, 08:11 PM
• Last activity: Jul 23, 2025, 02:56 PM
1
votes
1
answers
2767
views
Insert multiple rows into a table with id from other table if not exists insert to other table
I have done similar task where I can insert a row into a table if data doesn't exists:
```sql
WITH
user_exists AS (
    SELECT id FROM users WHERE username = '%s'
),
user_new AS (
    INSERT INTO users (username)
    SELECT w.username FROM (VALUES ('%s')) w(username)
    WHERE NOT EXISTS
        (SELECT 1 FROM users u WHERE u.username = w.username)
    RETURNING id
)
INSERT INTO feedbacks (static_row, userid)
SELECT
    'static_data',
    (SELECT id FROM user_exists UNION ALL SELECT id FROM user_new) AS userid
```
The above works well when we insert a single new row into the feedbacks table: if the user doesn't exist, it inserts a row into the users table and returns the id, which is used for inserting into the feedbacks table.
But now my use case is that I have to insert multiple rows into the feedbacks table, something like this:
```
user_variable = ['a', 'b', ...]
```

```sql
INSERT INTO feedbacks (static_row, userid)
VALUES
('sample_data', (SELECT id FROM users WHERE username = 'a')),
('sample_data', (SELECT id FROM users WHERE username = 'b')),
('sample_data', (SELECT id FROM users WHERE username = 'c'))
```
For the above case, how can we insert a new row into the users table if username = 'b' doesn't exist?
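A hedged sketch of how the single-row pattern above can be generalized (column names taken from the question; `wanted` is a made-up CTE name). Because the outer query's snapshot of `users` does not see rows inserted by the CTE in the same statement, the id has to be taken from either the `created` CTE or the pre-existing `users` row:

```sql
-- Hypothetical sketch: first create any missing users, then insert all feedbacks.
WITH wanted(username) AS (
    VALUES ('a'), ('b'), ('c')
),
created AS (
    INSERT INTO users (username)
    SELECT w.username
    FROM wanted w
    WHERE NOT EXISTS (SELECT 1 FROM users u WHERE u.username = w.username)
    RETURNING id, username
)
INSERT INTO feedbacks (static_row, userid)
SELECT 'sample_data', COALESCE(c.id, u.id)
FROM wanted w
LEFT JOIN created c ON c.username = w.username
LEFT JOIN users   u ON u.username = w.username;
```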
undefined
(151 rep)
Feb 4, 2021, 11:18 AM
• Last activity: Jul 21, 2025, 09:02 PM
0
votes
1
answers
142
views
Postgres FK referencing composite PK INSERT VIOLATION
I am trying to insert into a table (mdata) which has a composite foreign key referencing the primary key of another table (measurement); unfortunately I get this error:
```
ERROR: insert or update on table "mdata" violates foreign key constraint "FK_mdata"
DETAIL: Key (time, measurement_id)=(2022-07-18 12:35:03.31052, 1) is not present in table "measurement".
SQL state: 23503
```
Note that the foreign key data definitely exists in the referenced table. Below are my two tables:
```sql
CREATE TABLE IF NOT EXISTS public.mdata
(
    id bigint NOT NULL GENERATED BY DEFAULT AS IDENTITY
        ( INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 9223372036854775807 CACHE 1 ),
    value jsonb NOT NULL,
    measurement_id bigint NOT NULL,
    "time" timestamp without time zone NOT NULL,
    CONSTRAINT "PK_medata" PRIMARY KEY (id),
    CONSTRAINT "FK_mdata" FOREIGN KEY ("time", measurement_id)
        REFERENCES public.measurement ("time", id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE CASCADE
)
```
and
```sql
CREATE TABLE IF NOT EXISTS public.measurement
(
    id bigint NOT NULL GENERATED BY DEFAULT AS IDENTITY
        ( INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 9223372036854775807 CACHE 1 ),
    "time" timestamp without time zone NOT NULL,
    value real NOT NULL,
    CONSTRAINT "PK_measurement" PRIMARY KEY ("time", id)
)
```
The problem is that I don't see the issue, because I am sure that what I am inserting into mdata (measurement_id and "time") definitely exists in the measurement table. Could someone give me a hint about the problem?
I am trying to insert like this:
```sql
INSERT INTO public.mdata (id, value, measurement_id, "time")
VALUES (8, '{}', 1, '2022-07-18 12:35:03.31052');
```
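One diagnostic worth running (an assumption, not from the question): a frequent cause of this exact error is a parent timestamp that matches only to its displayed precision. Checking for the literal full-precision value directly confirms whether the parent row really exists:

```sql
-- Does the parent row exist with exactly this id and timestamp,
-- down to the microsecond?
SELECT *
FROM public.measurement
WHERE id = 1
  AND "time" = '2022-07-18 12:35:03.31052';
```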
Raziel
(101 rep)
Jul 19, 2022, 10:34 AM
• Last activity: Jul 21, 2025, 08:08 AM
0
votes
1
answers
153
views
MySQL Insert/Update across 3 tables with 1m+ rows
To start with, I know nothing of database design, so I apologise if this seems obvious to others. I have been researching up to 3NF over the last few weeks, and I think I have a layout that works.
I have a database with 1m+ rows, currently organised as follows:

```
Table: MasterTable
Columns: ID, FirstName, LastName, PetName, PetAge
```
I would like to split it as follows:
```
Table: People
PersonID (PK), FirstName, LastName

Table: Pets
PetID (PK), PetName, PetAge

Table: Records
RecordID (PK), MasterTable.ID, People.PersonID, Pets.PetID
```
PKs in all cases auto-increment so that more records can be added later.
The People and Pets tables have been populated using:

```sql
INSERT INTO Pets(PetName, PetAge)
SELECT PetName, PetAge
FROM MasterTable
WHERE 1;

INSERT INTO People(FirstName, LastName)
SELECT FirstName, LastName
FROM MasterTable
WHERE 1;

INSERT INTO Records(ID)
SELECT ID
FROM MasterTable
WHERE 1;
```
So I have three tables. When I try to create the Records table, I can't get anything to work.
I have tried:
```sql
INSERT INTO Records(PersonID, PetID, ID)
SELECT People.PersonID, Pets.PetID, MasterTable.ID
FROM MasterTable
LEFT JOIN People ON MasterTable.FirstName = People.FirstName AND People.LastName = MasterTable.LastName
LEFT JOIN Pets ON Pets.PetName = MasterTable.PetName AND Pets.PetAge = MasterTable.PetAge
WHERE 1
```
I think the WHERE clause might be the problem. I have tried

```sql
WHERE Pets.PetName = MasterTable.PetName
```

and almost every other kind of WHERE I can think of.
I have a few questions I'd really appreciate some help with as I'm going out of my mind here.
1) Does the order of the LEFT JOIN clauses matter? Does it matter which table is specified first and which last?
2) I initially tried INNER JOIN, but I figure it's just going to join more columns than necessary; is that right?
3) If I am inserting FirstName and LastName, I can't match on FirstName and LastName, right? As in, create the FirstName/LastName entries and then use that ID to match in the next join?
It seems simple enough to split this into three tables, assign a PK to each, and then create a final table where PKs relate to PKs, but apparently it's not.
When I add `LIMIT 5`, the SELECT returns the correct info. Without the LIMIT clause, all my attempts have run for over 24 hours and not finished; either they have been stuck copying *everything* to temp tables, or they have just reported "selecting data" as the status.
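For what it's worth, on 1m+ rows a join like the attempted INSERT above usually needs indexes on the join columns to finish in reasonable time. A hedged sketch using the table names from the question (the index names are made up):

```sql
-- Hypothetical sketch: without indexes like these, each LEFT JOIN scans the
-- whole lookup table once per MasterTable row.
CREATE INDEX idx_people_name   ON People (FirstName, LastName);
CREATE INDEX idx_pets_name_age ON Pets (PetName, PetAge);
```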
Can someone please help?
Sorry if something doesn't make sense, I'll clarify as I go.
ernoh
(1 rep)
Sep 3, 2020, 09:07 PM
• Last activity: Jul 18, 2025, 10:04 AM
0
votes
1
answers
149
views
Why insert statements get slow when having too many indexes?
I am seeing slowness in my insert statements when working with MongoDB, and when I did a quick investigation, I thought this might happen because the table has 11 indexes. I know that when I insert one record into a table with indexes, the database has to update all related indexes before informing me that the insertion is complete, but my question is:
**Why will the database only inform me that the insert/update statement is complete after all indexes are also inserted or updated? Why isn't inserting into the main table enough, with the indexes updated later (on a different thread, for example)?**
I hope my question is clear enough; help please :)
سجاد أحمد
(1 rep)
Oct 2, 2021, 09:37 AM
• Last activity: Jul 16, 2025, 03:09 AM
0
votes
1
answers
180
views
Is it more efficient to do SELECT and compare in Java or DELETE and INSERT
I am hitting a **REST API** to get data from a service. I transform this data and store it in a database. I will have to do this on some interval, 15 minutes, and then make sure this database has latest information.
I am doing this in a Java program. I am wondering if it would be better, after I have queried all data, to do:
1. **SELECT** statements, compare against the transformed data, and do **UPDATEs** (**DELETE** all records associated with what changed and **INSERT** new ones)
**OR**
2. **DELETE ALL** and **INSERT ALL** every time.
Option 1 potentially has far fewer write transactions, at the cost of a guaranteed SELECT on all records (since we are comparing), but probably not many UPDATEs, since I don't expect the data to change much. Its downside is doing comparisons on all records to detect a change.
I am planning on doing this using Spring Boot, the JPA layer, and possibly Postgres.
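Since Postgres is mentioned, a third pattern worth weighing (a sketch under assumed table and column names, not from the question) is a native upsert, which avoids both the application-side comparison and the blanket delete:

```sql
-- Hypothetical sketch: one statement per batch; rows whose payload is
-- unchanged are left untouched by the WHERE clause, so no needless writes.
INSERT INTO readings (id, payload)
VALUES (1, 'transformed-data-1'),
       (2, 'transformed-data-2')
ON CONFLICT (id)
DO UPDATE SET payload = EXCLUDED.payload
WHERE readings.payload IS DISTINCT FROM EXCLUDED.payload;
```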
user137760
(1 rep)
Nov 1, 2017, 06:02 PM
• Last activity: Jul 10, 2025, 01:16 PM
0
votes
3
answers
166
views
How to insert/combine values from multiple tables?
I have multiple tables (12 for each month) that have 3 columns each ("code", "test_name", "value")
Example:
```
January
code | test_name | value
0001 | name1     | 17
0002 | name2     | 45
0005 | name5     | 12

February
code | test_name | value
0001 | name1     | 3
0002 | name2     | 7
0004 | name4     | 13

March
code | test_name | value
0001 | name1     | 6
0006 | name6     | 32
0007 | name7     | 41
```
How do I merge, insert, or combine them in SQL into one table that would look like this:

```
Year
code | test_name | January | February | March | ...
0001 | name1     | 17      | 3        | 6
0002 | name2     | 45      | 7        | 0
0003 | name3     | 0       | 0        | 0
0004 | name4     | 0       | 13       | 0
0005 | name5     | 12      | 0        | 0
0006 | name6     | 0       | 0        | 32
0007 | name7     | 0       | 0        | 41
```
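A hedged sketch of one standard way to do this (monthly table names are assumed; extend the pattern to all twelve months): stack the tables with `UNION ALL`, then pivot with conditional aggregation so missing months become 0:

```sql
-- Hypothetical sketch: combine the monthly tables, then pivot into one row
-- per (code, test_name) with a column per month.
CREATE TABLE year_summary AS
SELECT code,
       test_name,
       SUM(CASE WHEN mon = 'Jan' THEN value ELSE 0 END) AS January,
       SUM(CASE WHEN mon = 'Feb' THEN value ELSE 0 END) AS February,
       SUM(CASE WHEN mon = 'Mar' THEN value ELSE 0 END) AS March
FROM (
    SELECT code, test_name, value, 'Jan' AS mon FROM january
    UNION ALL
    SELECT code, test_name, value, 'Feb' FROM february
    UNION ALL
    SELECT code, test_name, value, 'Mar' FROM march
) AS m
GROUP BY code, test_name;
```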
Mladen G
(1 rep)
Mar 15, 2016, 07:31 PM
• Last activity: Jul 6, 2025, 02:06 PM
0
votes
1
answers
179
views
Migrate columns from table A to table B, and update A with a reference to B
I have a table A with some columns that need to be migrated to a new table B. But I also need to create a column in table A to reference the ids of the new rows in table B. I've tried doing it this way, but it's not working:
```sql
WITH cte AS (
    INSERT INTO B (c1)
    SELECT c1
    FROM A
    RETURNING id, A.id AS "aId"
)
UPDATE A
SET bId = cte.id
FROM cte
INNER JOIN A ON cte."aId" = A.id;
```
This is the error message:

```
ERROR: missing FROM-clause entry for table A
LINE 5: RETURNING id, A.id as "aId"
                      ^
SQL state: 42P01
Character: 168
```
I'm assuming this is because Postgres doesn't allow selecting from the source table in `RETURNING`; it only allows returning columns from the inserted row.
So how do I accomplish this? I know it can probably work with a `for` loop, but is there a cleaner way? I also tried updating table A directly from the results of the insertion, but that didn't work, as Postgres doesn't allow using an `INSERT` statement in a subquery of an update.
Warix3
(1 rep)
Jun 26, 2023, 07:04 PM
• Last activity: Jul 4, 2025, 11:04 PM
0
votes
3
answers
763
views
How do I close log files that are open in OracleServiceORCL after inputting data into Oracle?
I inserted data into a table called `cell_local_stage1` with this procedure (I was able to delete the CSV files, but not the log files; I didn't know I could even open these log files in Oracle. Sorry for such a basic question):
```sql
CREATE TABLE cell_local_stage1
(
    construct_id NUMBER(10),
    n_term VARCHAR2 (50),
    enz_name VARCHAR2 (50),
    c_term VARCHAR2 (50),
    cpp VARCHAR2 (50),
    mutations VARCHAR2 (50),
    g_batch VARCHAR2 (50),
    p_batch VARCHAR2 (50),
    c_batch VARCHAR2 (50),
    emptycol VARCHAR2 (50),
    ind_var_number NUMBER,
    study_id VARCHAR2 (50),
    char_id NUMBER,
    concentration NUMBER (6, 2),
    valid NUMBER (1) DEFAULT 1,
    endpoint1 NUMBER (1),
    method1 VARCHAR2 (50),
    methodv1 VARCHAR2 (50)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY data_to_input
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        SKIP 1
        BADFILE bad_files:'badflie_insert_into_cell_local_stage1_from_cell_local.bad'
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        MISSING FIELD VALUES ARE NULL
    )
    LOCATION ('CELL_LOCAL.CSV')
) REJECT LIMIT UNLIMITED;
```
I got a log file that has this information:

```
LOG file opened at 08/17/20 22:00:47

Total Number of Files=1

Data File: CELL_LOCAL.CSV

Log File: CELL_LOCAL_STAGE1_4908_22980.log
```

And when I try to delete that file, a message says that the file is still open:

ellie-lumen
(271 rep)
Aug 18, 2020, 03:35 PM
• Last activity: Jun 20, 2025, 06:16 PM
0
votes
2
answers
239
views
Inserting a list of emails into multiple rows on the same column
I have a list of emails of staff members and would like to:
1. insert them into multiple rows in the same column (a single column, just for emails);
2. the same as above, but replacing the old emails with the new ones.
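A minimal sketch of both cases (the table and column names here are assumptions, since the question doesn't give a schema):

```sql
-- 1. Hypothetical sketch: insert each email as its own row in one column.
INSERT INTO staff (email)
VALUES ('alice@example.com'),
       ('bob@example.com'),
       ('carol@example.com');

-- 2. Hypothetical sketch: replace an old address with a new one.
UPDATE staff
SET email = 'alice.new@example.com'
WHERE email = 'alice@example.com';
```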
Julia
(1 rep)
Feb 20, 2017, 12:13 PM
• Last activity: Jun 9, 2025, 04:04 AM
26
votes
4
answers
39777
views
How to limit maximum number of rows in a table to just 1
I have a configuration table in my SQL Server database and this table should only ever have one row. To help future developers understand this I'd like to prevent more than one row of data being added. I have opted to use a trigger for this, as below...
```sql
ALTER TRIGGER OnlyOneConfigRow
ON [dbo].[Configuration]
INSTEAD OF INSERT
AS
BEGIN
    DECLARE @HasZeroRows BIT;

    SELECT @HasZeroRows = CASE
                              WHEN COUNT(Id) = 0 THEN 1
                              ELSE 0
                          END
    FROM [dbo].[Configuration];

    IF EXISTS (SELECT [Id] FROM inserted) AND @HasZeroRows = 0
    BEGIN
        RAISERROR ('You should not add more than one row into the config table. ', 16, 1)
    END
END
```
This does not throw an error, but it is also not allowing the first row to go in.
Also, is there a more effective / more self-explanatory way of limiting the number of rows that can be inserted into a table to just 1? Am I missing any built-in SQL Server feature?
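Note that an `INSTEAD OF INSERT` trigger replaces the insert entirely, so its body must itself perform the `INSERT` for any row to go in, which explains why the first row never appears. A commonly used declarative alternative (the constraint names below are made up) is a constant column that is both `CHECK`ed and `UNIQUE`, making a second row impossible without any trigger:

```sql
-- Hypothetical sketch: every row must have Lock = 'X', and Lock must be
-- unique, so at most one row can ever exist.
ALTER TABLE [dbo].[Configuration]
ADD [Lock] CHAR(1) NOT NULL
    CONSTRAINT DF_Configuration_Lock DEFAULT 'X'
    CONSTRAINT CK_Configuration_Lock CHECK ([Lock] = 'X')
    CONSTRAINT UQ_Configuration_Lock UNIQUE;
```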
Dib
(447 rep)
Jun 21, 2016, 08:25 AM
• Last activity: Jun 6, 2025, 05:11 PM
0
votes
1
answers
97
views
DROP -> SELECT INTO V.S. TRUNCATE -> INSERT INTO
I have multiple temporary tables stored in the ```tempdb``` schema for an SSIS pipeline that runs daily. The pipeline extracts data from multiple tables and stores it in the temp tables, which the data of the temp tables is later used to store in a different database. My question is about the perfor...
I have multiple temporary tables stored in the
schema for an SSIS pipeline that runs daily. The pipeline extracts data from multiple tables and stores it in the temp tables, which the data of the temp tables is later used to store in a different database.
My question is about the performance of the creation and deletion of the temp table, I want to know which approach is more optimal. Which of the following approaches is the better option? And what are the disadvantages and advantages of each one?
1-
temp table if it doesn't already exist -->
temp table from previous data before inserting new data -->
the new data into temp table.
2-
temp table if it exists --> use the statement INTO
to insert data directly without creating a temp table in a separate statement.
edit: the tables in the tempdb are created like this TABLE tempdb..Table1
Zaid Allawanseh
(3 rep)
Jun 2, 2025, 07:01 AM
• Last activity: Jun 2, 2025, 02:00 PM
0
votes
1
answers
265
views
How to insert one record after every log record in PostgreSQL
I have a PostgreSQL table that stores log data, where each row is one log and the fields are `logname, start, end`. I want to visualize the logs, and due to a limitation of Grafana, the visualization tool I chose, I feel the need to insert one new record with a timestamp one second after each log, say with logname 'Idle'. For better explanation:
| Start | Logname | End |
| -------- | ---------- |---|
| 10:00:00 | log-in |10:00:01|
| 10:30:00 | call |11:00:00|
| 10:55:00 | some action (still in call) | 11:05:00|
Then I want to manually add logname 'Idle' one second after each available logs
| Start | Logname | End |
| -------- | ---------- |---|
| 10:00:00 | log-in |10:00:01|
|**10:30:02 (1 second after the end time of the previous log)**|**idle**|**10:30:03**|
| 10:30:00 | call |11:00:00|
|10:55:00| some action|11:05:00|
|**11:05:01 (1 second after)**|**idle**|**11:05:01**|
I think of manually calculating each row and then inserting them, but how should I do it as a PostgreSQL query?
**Update**
Thanks to Akina & dwhitemv, I see it's illogical to insert a new record after *every* available record. Instead, the question should be: **insert a new record into a time range only if there are no available or overlapping records there.** Aside from inserting, a new problem is how to check whether there are records in the time range. I updated the tables above. I feel this makes it complicated, since `NOT EXISTS` & `ON CONFLICT` can't really check if records exist between 11:00 and 11:05.
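A hedged sketch of the gap-filling idea (the table name `logs` and exact column names are assumptions from the description; this simple version orders by start and does not fully handle overlapping intervals):

```sql
-- Insert an 'Idle' row for every gap between one log's end and the next
-- log's start, using a window function to find each log's successor.
INSERT INTO logs (logname, start, "end")
SELECT 'Idle',
       "end" + interval '1 second',
       next_start - interval '1 second'
FROM (
    SELECT "end",
           LEAD(start) OVER (ORDER BY start) AS next_start
    FROM logs
) AS g
WHERE next_start > "end" + interval '2 seconds';
```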
Lacie
(101 rep)
Nov 2, 2022, 10:08 PM
• Last activity: May 20, 2025, 06:06 AM
0
votes
1
answers
315
views
Multiple inserts using select row and variable combination with one SQL statement
I'm working with SQL Server 2008 R2 and need to find a specific row in a table that has three fields, then insert multiple rows into the same table, with each insert using two values from the found row while the third value is different on each insert. Can this be done? Here is a non-functioning SQL statement that, I hope, shows what I am trying to do:
```sql
INSERT INTO Routings_Doors_Options
(HeaderID, Option2, Option1)
VALUES (Routings_Doors_Options_1.HeaderID, Routings_Doors_Options_1.Option2, 'OGGA1'),
       (Routings_Doors_Options_1.HeaderID, Routings_Doors_Options_1.Option2, 'OGGA2'),
       (Routings_Doors_Options_1.HeaderID, Routings_Doors_Options_1.Option2, 'OGGA3'),
       (Routings_Doors_Options_1.HeaderID, Routings_Doors_Options_1.Option2, 'OGGA4'),
       (Routings_Doors_Options_1.HeaderID, Routings_Doors_Options_1.Option2, 'OGGA6'),
       (Routings_Doors_Options_1.HeaderID, Routings_Doors_Options_1.Option2, 'OGGA7'),
       (Routings_Doors_Options_1.HeaderID, Routings_Doors_Options_1.Option2, 'OGGA8')
SELECT HeaderID, Option2
FROM Routings_Doors_Options AS Routings_Doors_Options_1
WHERE (Option1 = 'OGGA')
```
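A working form of this idea (a sketch built from the names in the question): use `INSERT ... SELECT` and cross-join the found row against a `VALUES` list of the variable third values, which SQL Server 2008 supports, so one statement emits one row per value:

```sql
-- Cross-join the matched row(s) with the list of Option1 values.
INSERT INTO Routings_Doors_Options (HeaderID, Option2, Option1)
SELECT r.HeaderID, r.Option2, v.Option1
FROM Routings_Doors_Options AS r
CROSS JOIN (VALUES ('OGGA1'), ('OGGA2'), ('OGGA3'), ('OGGA4'),
                   ('OGGA6'), ('OGGA7'), ('OGGA8')) AS v(Option1)
WHERE r.Option1 = 'OGGA';
```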
Randy S
(1 rep)
Apr 18, 2019, 04:57 PM
• Last activity: May 16, 2025, 06:06 AM
0
votes
1
answers
289
views
What is the best way to check if a value in a table has changed when writing to a database?
Let me provide you with an example. Let's say I have an `html_tag` table with two columns, `tag_type` and `tag_value`. For example, let's say I have a record with the value "link" under the `tag_type` field and "https://dba.stackexchange.com" under the `tag_value` field.
Now, how would I approach this for auditing purposes? I want to `INSERT` into this table only when the `tag_value` corresponding to a `tag_type` has changed, because it would not make sense to add redundant records to this table.
Is this a database design problem I'm having, or is there advanced SQL I'm unaware of that would accomplish this? Or, rather, would the Java I am getting these values from initially be the place to handle such a thing? Example of how I would do this in Java (pseudocode):

```
var newTag = extract the tag's value from my webpage;
var tag_value = SELECT tag_value FROM html_tag WHERE tag_type = 'link';
if (new_tag != tag_value) then (INSERT INTO html_tag) ...
```
Should I handle this at the database level, or at the Java level, as directly above?
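For comparison, the check-and-insert can be done in a single SQL statement at the database level; a hedged sketch against the two columns described above (this assumes "has changed" means "no identical row already stored for this tag_type"):

```sql
-- Insert the new value only if it differs from what is already stored.
INSERT INTO html_tag (tag_type, tag_value)
SELECT 'link', 'https://dba.stackexchange.com'
WHERE NOT EXISTS (
    SELECT 1
    FROM html_tag
    WHERE tag_type = 'link'
      AND tag_value = 'https://dba.stackexchange.com'
);
```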
brennvo
(91 rep)
Nov 6, 2018, 03:04 PM
• Last activity: May 16, 2025, 12:07 AM
0
votes
1
answers
323
views
unable to add data to table because related record is required
I have an Access database at work that I am trying to insert into, but I keep getting:
> 'You cannot add or change a record because a related record is required in table 'Projects'.'
I'm running this query:

```sql
INSERT INTO Tasks (Assigned, Project, Description) VALUES (@assign, @project, @description)
```

with this code in C# (using OleDb commands and connections, which work fine for other queries), on this structure:

```csharp
// the assign id is already known and is of type integer.
string query = "SELECT Project_ID FROM Projects WHERE Project_Name = @project";
OleDbConnection con = new OleDbConnection(con_string);
OleDbCommand cmd = new OleDbCommand(query, con);
cmd.Parameters.AddWithValue("@project", projectComboBox.Text);
con.Open();
OleDbDataReader reader = cmd.ExecuteReader();
while (reader.Read())
{
    project_id = Convert.ToInt16(reader);
    Console.WriteLine(project_id);
}
con.Close();

Console.WriteLine("submit: " + project_id + " " + employee_id + " "
    + descriptionTextBox.Text + " "
    + monthCalendar1.SelectionStart.ToString("MM/DD/YYYY"));
Console.WriteLine(monthCalendar1.SelectionStart);

query = "INSERT INTO Tasks (Assigned,Project,Description) VALUES (@assign,@project,@description)";
con = new OleDbConnection(con_string);
cmd = new OleDbCommand(query, con);
cmd.Parameters.AddWithValue("@project", project_id);
cmd.Parameters.AddWithValue("@assign", employee_id);
cmd.Parameters.AddWithValue("@description", descriptionTextBox.Text.ToString());
//cmd.Parameters.AddWithValue("@deadline", monthCalendar1.SelectionStart);
con.Open();
cmd.ExecuteNonQuery();
con.Close();
this.Close();
```

I have tried looking at other examples of this problem and I don't understand why I'm getting this error. @project has a valid id number of the primary key for a Project, @assign has a valid employee id, and @description is a string of text. Thanks for any help.
Chris Johnson
(1 rep)
Aug 6, 2018, 03:30 PM
• Last activity: Apr 25, 2025, 07:07 AM
0
votes
1
answers
352
views
In Oracle SQL 19c, how can I INSERT a new record into a supertype and its related subtypes in one statement?
I'm designing a property management database that will hold - OWNER, CUSTOMER, and STAFF information.
As these use many of the same attributes, I intend to create tables for PERSON, ADDRESS and CONTACT.
The primary key for PERSON (`personID`) will be auto-incremented and used as the foreign key for ADDRESS, CONTACT, OWNER, CUSTOMER, and STAFF.
When it comes to adding a new record, is there a way to INSERT into PERSON, ADDRESS, CONTACT, and OWNER/CUSTOMER/STAFF at the same time using a simple statement? Or is there another way to retrieve the generated `personID` and use it to insert into the other tables?
I'm testing on Oracle SQL Live 19c.
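One standard Oracle approach is a small PL/SQL block that captures the generated key with `RETURNING INTO` and reuses it for the child tables; a sketch with assumed column names (only the table names come from the question):

```sql
-- Hypothetical sketch: insert the supertype row, capture its generated key,
-- then insert the subtype rows in the same transaction.
DECLARE
    v_person_id PERSON.personID%TYPE;
BEGIN
    INSERT INTO PERSON (firstName, lastName)
    VALUES ('Ada', 'Lovelace')
    RETURNING personID INTO v_person_id;

    INSERT INTO ADDRESS (personID, street) VALUES (v_person_id, '1 Main St');
    INSERT INTO CONTACT (personID, email)  VALUES (v_person_id, 'ada@example.com');
    INSERT INTO OWNER   (personID)         VALUES (v_person_id);

    COMMIT;
END;
/
```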
Gospel77
(1 rep)
Jun 14, 2022, 03:54 PM
• Last activity: Apr 21, 2025, 08:05 PM
1
votes
1
answers
1074
views
MySQL 5.6 Bug Amazon RDS with replica. AFTER INSERT TRIGGER is not working. Solutions?
I have `MasterDB` and `ReplicaDB` in an Amazon RDS environment. I need to move one table (`MYTable`) from the replica to another, external DB (`ExternalDB`).
My solution (which does not work) was to add an `AFTER` trigger listening for `UPDATE`s and `INSERT`s on MYTable (adding that trigger only on ReplicaDB) and copy everything to MyTableLog, then poll MyTableLog (and remove already-processed records).
**Problem**: it looks like the RDS replica is not firing the `AFTER INSERT` event (it is only firing `AFTER UPDATE`). However, I tested the solution on 5.7 and it worked.
Any ideas? Is it a bug in MySQL? Any solution?
**UPDATES:**
1. I'm adding a new trigger on `SlaveDB` (it is not a trigger replicated from `MasterDB`).
2. It is working in 5.7 -> 5.7; the issue is in 5.6 -> 5.6 MySQL.
3. I'm adding TWO triggers (both work in 5.7, but only one works in 5.6).
4. Update trigger (works in 5.6 and 5.7):

```sql
CREATE TRIGGER after_mytable_update
AFTER UPDATE ON mytable
FOR EACH ROW
BEGIN
    INSERT INTO mytable_log
    SET is_new = 0, is_processed = 0;
END
```
5. Insert trigger (DOES NOT WORK IN MYSQL 5.6):

```sql
CREATE TRIGGER after_mytable_insert
AFTER INSERT ON mytable
FOR EACH ROW
BEGIN
    INSERT INTO mytable_log
    SET is_new = 1, is_processed = 0;
END
```
**As simple as it is, the `INSERT` trigger IS NOT being executed in MySQL 5.6.**
(From comment:)
CREATE TABLE mytable_log_replica (
replica_id int(11) unsigned NOT NULL AUTO_INCREMENT,
is_new int(11) DEFAULT '0',
is_processed int(11) DEFAULT '0',
id int(11) unsigned NOT NULL,
stamp datetime DEFAULT NULL,
user_id int(11) DEFAULT NULL,
name varchar(64) DEFAULT NULL,
address varchar(64) DEFAULT NULL,
transaction_status varchar(64) DEFAULT NULL,
ip varchar(64) DEFAULT NULL,
cb_code varchar(16) DEFAULT NULL PRIMARY KEY (replica_id)
) ENGINE=InnoDB AUTO_INCREMENT=240878 DEFAULT CHARSET=latin1
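A related diagnostic (again an assumption, not something confirmed in the question): MySQL only fires replica-side triggers for events that are replicated as statements; row-based events are applied directly to the table data and bypass triggers. If these inserts are logged in row format (e.g. under a `ROW` or `MIXED` `binlog_format`) while the updates arrive as statements, that would match the observed behavior.

```sql
-- Sketch: check on the master how changes are written to the binlog.
-- On RDS this variable is set via the DB parameter group, not SET GLOBAL.
SHOW VARIABLES LIKE 'binlog_format';
```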
Karel
(11 rep)
Jan 31, 2018, 04:09 PM
• Last activity: Apr 20, 2025, 05:10 PM
2
votes
2
answers
65
views
MariaDB deadlock with transactional outbox pattern
I have a deadlock in 10.6.21-MariaDB-ubu2004. Here's my schema: ``` CREATE TABLE INT_CHANNEL_MESSAGE ( MESSAGE_ID CHAR(36) NOT NULL, GROUP_KEY CHAR(36) NOT NULL, CREATED_DATE BIGINT NOT NULL, MESSAGE_PRIORITY BIGINT, MESSAGE_SEQUENCE BIGINT NOT NULL AUTO_INCREMENT UNIQUE, MESSAGE_BYTES BLOB, REGION...
I have a deadlock in 10.6.21-MariaDB-ubu2004.
Here's my schema:
CREATE TABLE INT_CHANNEL_MESSAGE
(
MESSAGE_ID CHAR(36) NOT NULL,
GROUP_KEY CHAR(36) NOT NULL,
CREATED_DATE BIGINT NOT NULL,
MESSAGE_PRIORITY BIGINT,
MESSAGE_SEQUENCE BIGINT NOT NULL AUTO_INCREMENT UNIQUE,
MESSAGE_BYTES BLOB,
REGION VARCHAR(100) NOT NULL,
PRIMARY KEY (REGION, GROUP_KEY, CREATED_DATE, MESSAGE_SEQUENCE)
) ENGINE = InnoDB;
CREATE INDEX INT_CHANNEL_MSG_DELETE_IDX ON INT_CHANNEL_MESSAGE (REGION, GROUP_KEY, MESSAGE_ID);
Initially, I have a single row in the table. Let's assume the values in each row are unique; only `REGION` and `GROUP_KEY` are fixed.
Transaction #1 inserts two rows using separate `INSERT` statements:
INSERT into INT_CHANNEL_MESSAGE(
MESSAGE_ID,
GROUP_KEY,
REGION,
CREATED_DATE,
MESSAGE_PRIORITY,
MESSAGE_BYTES)
values (?, ?, ?, ?, ?, ?)
The first insert is executed, then the transaction hangs. The isolation level for Transaction #1 is `REPEATABLE_READ` (although I also tried `READ_COMMITTED`).
Transaction #2 starts immediately after the first insert of Transaction #1 is executed (it is triggered by the application). Its isolation level is set to `READ_COMMITTED`. The initial row is selected for update, then the transaction hangs on the `DELETE` call:
SELECT INT_CHANNEL_MESSAGE.MESSAGE_ID, INT_CHANNEL_MESSAGE.MESSAGE_BYTES
from INT_CHANNEL_MESSAGE
where INT_CHANNEL_MESSAGE.GROUP_KEY = ? and INT_CHANNEL_MESSAGE.REGION = ?
order by CREATED_DATE, MESSAGE_SEQUENCE LIMIT 1 FOR UPDATE SKIP LOCKED
DELETE from INT_CHANNEL_MESSAGE where MESSAGE_ID=? and GROUP_KEY=? and REGION=?
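A sketch of one variation I would test (assumption: the application can carry two extra key columns between the two statements): have the `SELECT ... FOR UPDATE SKIP LOCKED` return the full primary key and delete by that key, so the `DELETE` goes straight to the clustered-index record the transaction already locked instead of locating the row again.

```sql
-- Sketch: claim the row and remember its full primary key ...
SELECT MESSAGE_ID, MESSAGE_BYTES, CREATED_DATE, MESSAGE_SEQUENCE
FROM INT_CHANNEL_MESSAGE
WHERE GROUP_KEY = ? AND REGION = ?
ORDER BY CREATED_DATE, MESSAGE_SEQUENCE
LIMIT 1 FOR UPDATE SKIP LOCKED;

-- ... then delete by the primary key (REGION, GROUP_KEY, CREATED_DATE, MESSAGE_SEQUENCE).
DELETE FROM INT_CHANNEL_MESSAGE
WHERE REGION = ? AND GROUP_KEY = ?
  AND CREATED_DATE = ? AND MESSAGE_SEQUENCE = ?;
```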
`SHOW ENGINE INNODB STATUS` output:
| InnoDB | |
=====================================
2025-04-04 16:06:44 0x7fc6241b3700 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 21 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 0 srv_active, 0 srv_shutdown, 2376 srv_idle
srv_master_thread log flush and writes: 2376
----------
SEMAPHORES
----------
------------
TRANSACTIONS
------------
Trx id counter 1646
Purge done for trx's n:o < 1646 undo n:o < 0 state: running
History list length 2
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 1643, ACTIVE 31 sec
2 lock struct(s), heap size 1128, 1 row lock(s), undo log entries 1
MariaDB thread id 76, OS thread handle 140489014748928, query id 8535 172.21.0.1 nbs
---TRANSACTION 1640, ACTIVE 31 sec fetching rows
mysql tables in use 1, locked 1
LOCK WAIT 3 lock struct(s), heap size 1128, 3 row lock(s), undo log entries 1
MariaDB thread id 75, OS thread handle 140488995890944, query id 8537 172.21.0.1 nbs Updating
SET STATEMENT SQL_SELECT_LIMIT=1 FOR DELETE from INT_CHANNEL_MESSAGE where MESSAGE_ID='ce0ce618-2430-0b4c-727b-7250e5388f15' and GROUP_KEY='cb18446f-633c-3a46-b5ac-95ab539126d1' and REGION='DEFAULT'
------- TRX HAS BEEN WAITING 31293172 us FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 55 page no 3 n bits 320 index PRIMARY of table nbs_biometric
.INT_CHANNEL_MESSAGE
trx id 1640 lock_mode X locks rec but not gap waiting
Record lock, heap no 3 PHYSICAL RECORD: n_fields 9; compact format; info bits 0
0: len 7; hex 44454641554c54; asc DEFAULT;;
1: len 30; hex 63623138343436662d363333632d336134362d623561632d393561623533; asc cb18446f-633c-3a46-b5ac-95ab53; (total 36 bytes);
2: len 8; hex 80000196018d82b2; asc ;;
3: len 8; hex 8000000000000012; asc ;;
4: len 6; hex 00000000066b; asc k;;
5: len 7; hex bf000001410110; asc A ;;
6: len 30; hex 63623533326436662d343362352d393164352d636561612d623965616434; asc cb532d6f-43b5-91d5-ceaa-b9ead4; (total 36 bytes);
7: SQL NULL;
8: len 30; hex aced0005737200346f72672e737072696e676672616d65776f726b2e6d65; asc sr 4org.springframework.me; (total 1252 bytes);
------------------
---TRANSACTION (0x7fc6388d7180), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
--------
FILE I/O
--------
Pending flushes (fsync) log: 0; buffer pool: 0
166 OS file reads, 332 OS file writes, 594 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 0, seg size 2, 0 merges
merged operations:
insert 0, delete mark 0, delete 0
discarded operations:
insert 0, delete mark 0, delete 0
0.00 hash searches/s, 0.00 non-hash searches/s
---
LOG
---
Log sequence number 891776
Log flushed up to 891776
Pages flushed up to 42676
Last checkpoint at 42664
0 pending log flushes, 0 pending chkp writes
334 log i/o's done, 0.00 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total large memory allocated 167772160
Dictionary memory allocated 931712
Buffer pool size 8112
Free buffers 7349
Database pages 763
Old database pages 261
Modified db pages 621
Percent of dirty pages(LRU & free pages): 7.654
Max dirty pages percent: 90.000
Pending reads 0
Pending writes: LRU 0, flush list 0
Pages made young 0, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 152, created 611, written 0
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 763, unzip_LRU len: 0
I/O sum:cur, unzip sum:cur
--------------
ROW OPERATIONS
--------------
0 read views open inside InnoDB
Process ID=0, Main thread ID=0, state: sleeping
Number of rows inserted 63, updated 0, deleted 18, read 1182
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
Number of system rows inserted 0, updated 0, deleted 0, read 0
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================
|
The deadlock does not happen when Transaction #1 inserts a single row. How can I prevent it?
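Another direction worth testing (a sketch; the assumption that MariaDB 10.6 accepts `ORDER BY`/`LIMIT` together with `RETURNING` on a single-table `DELETE` should be verified, and note that `SKIP LOCKED` itself is only available on `SELECT`): collapse the claim-and-remove into one statement, so the consumer never holds a row lock across two round trips.

```sql
-- Sketch: claim and remove the oldest message in a single statement.
DELETE FROM INT_CHANNEL_MESSAGE
WHERE REGION = ? AND GROUP_KEY = ?
ORDER BY CREATED_DATE, MESSAGE_SEQUENCE
LIMIT 1
RETURNING MESSAGE_ID, MESSAGE_BYTES;
```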
I'm attaching application logs for wider context:
2025-04-04 18:06:13,426 [tx1]: TX START
2025-04-04 18:06:13,426 [tx1]: Executing prepared SQL update
2025-04-04 18:06:13,426 [tx1]: Executing prepared SQL statement [INSERT into INT_CHANNEL_MESSAGE(
MESSAGE_ID,
GROUP_KEY,
REGION,
CREATED_DATE,
MESSAGE_PRIORITY,
MESSAGE_BYTES)
values (?, ?, ?, ?, ?, ?)
]
2025-04-04 18:06:13,427 [tx2]: Executing prepared SQL query
2025-04-04 18:06:13,427 [tx2]: Executing prepared SQL statement [ SELECT INT_CHANNEL_MESSAGE.MESSAGE_ID, INT_CHANNEL_MESSAGE.MESSAGE_BYTES
from INT_CHANNEL_MESSAGE
where INT_CHANNEL_MESSAGE.GROUP_KEY = ? and INT_CHANNEL_MESSAGE.REGION = ?
order by CREATED_DATE, MESSAGE_SEQUENCE LIMIT 1 FOR UPDATE SKIP LOCKED]
2025-04-04 18:06:13,429 [tx2]: Executing prepared SQL update
2025-04-04 18:06:13,429 [tx2]: Executing prepared SQL statement [DELETE from INT_CHANNEL_MESSAGE where MESSAGE_ID=? and GROUP_KEY=? and REGION=?]
2025-04-04 18:07:03,430 [tx2]: Error: 1205-HY000: Lock wait timeout exceeded; try restarting transaction
2025-04-04 18:07:03,432 [tx2]: Extracted SQL state class 'HY' from value 'HY000'
2025-04-04 18:07:03,434 [tx1]: Executing prepared SQL update
2025-04-04 18:07:03,434 [tx1]: Executing prepared SQL statement [INSERT into INT_CHANNEL_MESSAGE(
MESSAGE_ID,
GROUP_KEY,
REGION,
CREATED_DATE,
MESSAGE_PRIORITY,
MESSAGE_BYTES)
values (?, ?, ?, ?, ?, ?)
]
2025-04-04 18:07:03,435 [tx2]: Resetting isolation level of JDBC Connection [HikariProxyConnection@1449683964 wrapping org.mariadb.jdbc.Connection@5b1420f9] to 4
2025-04-04 18:07:03,436 [tx1]: TX END
Tomek Stankowski
(41 rep)
Apr 4, 2025, 05:07 PM
• Last activity: Apr 8, 2025, 02:57 PM
Showing page 1 of 20 total questions