Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
2
answers
480
views
Using secondary id as foreign key to other table
So I have a Customer table that is linked to several other tables. For instance, a Configuration table is linked to the Customer table.
Instead of using the Customer Id as the foreign key in the Configuration table, I would like to introduce a secondary id called Reference. This Reference has a GUID-like value, and I would like to use it as the foreign key in the Configuration table. The reason is that I don't want to use the Id of the Customer table: we use an integer data type for it, and I don't want to expose it.
What could be the possible complications of implementing this approach?
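For illustration, a minimal sketch of the shape being described, with SQL Server-flavoured syntax and assumed names/types (not from the original post): the Reference column must carry a UNIQUE constraint before it can be a foreign key target.

    -- Sketch only: Customer keeps its internal integer key, but child tables
    -- reference the public GUID-like column instead.
    CREATE TABLE Customer (
        Id        INT IDENTITY PRIMARY KEY,
        Reference UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() UNIQUE
    );

    CREATE TABLE Configuration (
        Id                INT IDENTITY PRIMARY KEY,
        CustomerReference UNIQUEIDENTIFIER NOT NULL
            REFERENCES Customer (Reference)
    );

One commonly noted complication: the wider, random GUID makes the foreign key index larger and joins somewhat slower than the integer key would be.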
rpmansion
(123 rep)
Aug 10, 2020, 01:27 PM
• Last activity: Aug 6, 2025, 11:02 AM
2
votes
1
answers
641
views
Same primary key for multiple tables vs separate primary keys
Let's assume we have an employee table as below:

**employee**

id (PK) | name | date_of_birth | age | created_at

We have a scenario where we need to add some metadata for each employee, and not every employee will have this metadata, so we create a new table as below:

**employee_metadata**

id (PK) | employee_id (PK) | ... | created_at

There is a 1:1 mapping between the two tables.

What would be the better design practice for the primary key of the employee_metadata table, 1 or 2?

1) A separate auto-incrementing id
2) Using employee_id of the employee table
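A hedged sketch of option 2 (MySQL-flavoured syntax, column list abbreviated), where the child table borrows the parent's key; the primary key doubles as the foreign key, which also enforces the 1:1 mapping:

    CREATE TABLE employee_metadata (
        employee_id INT NOT NULL,
        -- ... metadata columns ...
        created_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (employee_id),
        FOREIGN KEY (employee_id) REFERENCES employee (id)
    );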
Dushyant Sabharwal
(401 rep)
Jan 29, 2023, 10:15 AM
• Last activity: Jul 31, 2025, 07:06 PM
0
votes
5
answers
162
views
Log table without primary key
I have a SQL Server log table that includes a column "Id".
This is an identity column, but not a primary key; it is not even indexed.
This would have been set up from some tutorial for adding DB logging, and I'm not sure if the PK was missed or intentionally left out.
Given that it's for logging, the only concern is write performance, not read.
I would have thought that if it's an identity column, it has to check the data anyway to insert at the correct number, so should it be set up as a PK for that reason?
Is there any benefit to leaving it with no PK?
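If the identity column were promoted to a key, a minimal sketch would be (table name assumed):

    -- An ever-increasing clustered key keeps inserts at the end of the
    -- B-tree, so write performance on an append-only log typically stays good.
    ALTER TABLE dbo.AppLog
    ADD CONSTRAINT PK_AppLog PRIMARY KEY CLUSTERED (Id);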
F Dev
(1 rep)
Sep 12, 2024, 11:15 PM
• Last activity: Jul 16, 2025, 06:01 AM
1
votes
1
answers
170
views
Keys of a relation
I have the relation:
> R = { (ABCDEF) , (A B F -> C, B C -> F, F -> A, F -> B, D E -> E, E -> D)}
The keys of this relation are:
> {BCE}, {EF}
But I don't get why. Why not only {EF}? It has fewer attributes than {BCE}.
Another example, relation:
> R2 = {(ABCDEF), (A -> B C , B -> D , E -> F)}
and the key is:
> {AE}
Why only one? Why not:
> {AE}, {ABE}
ABE also determines all the other attributes, so why is it not a key?
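For reference, a key must determine every attribute *and* be minimal. A sketch of the attribute-closure check for {EF} in the first relation, using only the dependencies listed above:

    \{E,F\}^{+}:\; F \to A,\; F \to B \;\Rightarrow\; \{A,B,E,F\};\quad
    ABF \to C \;\Rightarrow\; \{A,B,C,E,F\};\quad
    E \to D \;\Rightarrow\; \{A,B,C,D,E,F\}

{BCE} and {EF} are incomparable sets, so both can be candidate keys at once; {ABE} in the second example determines everything but strictly contains the key {AE}, so it is only a superkey, not a key.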
Ariel Grabijas
(125 rep)
Jan 19, 2014, 04:29 PM
• Last activity: Jul 14, 2025, 07:02 PM
1
votes
1
answers
168
views
Mysql auto insert record in primary table if not exist
Is it possible to auto-insert a record into the primary table if the record does not exist when adding a foreign key?
For example, assume these tables:
- user(id, name, age)
- topic(id, name)
- post(userId, topicId, text, createdAt, updatedAt)
Now I am pulling posts from some source and saving the records in the post table. But sometimes the data being returned contains a userId or a topicId that is not yet in my database. So every time, I would have to check whether the user and topic records exist, save them if not, and only then would my post record be valid and saved.
I want to be able to save the post even if its related user or topic does not exist, and have an empty row added to those tables with the ids that have been stored in the post table.
Example:
Current User Table
+----+------+-----+
| id | name | age |
+----+------+-----+
| 15 | Paul | 26 |
+----+------+-----+
| 56 | John | 31 |
+----+------+-----+
current Topic Table
+----+----------+
| id | name |
+----+----------+
| 5 | Business |
+----+----------+
| 12 | General |
+----+----------+
current Post Table:
+--------+---------+----------------+-------------+-------------+
| userId | topicId | text | createdAt | updatedAt |
+--------+---------+----------------+-------------+-------------+
| 15 | 12 | blah blah blah | *timestamp* | *timestamp* |
+--------+---------+----------------+-------------+-------------+
| 56 | 5 | lorem ipsum... | *timestamp* | *timestamp* |
+--------+---------+----------------+-------------+-------------+
So then I fetch posts from some source and get a new one: "This is a new topic", posted by a user with id 72 in a topic with id 2. The source only returns the id; to obtain the rest of the user's details, I would have to make another request to their API.
Post Table after:
+--------+---------+---------------------+-------------+-------------+
| userId | topicId | text | createdAt | updatedAt |
+--------+---------+---------------------+-------------+-------------+
| 15 | 12 | blah blah blah | *timestamp* | *timestamp* |
+--------+---------+---------------------+-------------+-------------+
| 56 | 5 | lorem ipsum... | *timestamp* | *timestamp* |
+--------+---------+---------------------+-------------+-------------+
| 72 | 2 | This is a new topic | *timestamp* | *timestamp* |
+--------+---------+---------------------+-------------+-------------+
User Table After:
+----+------+-----+
| id | name | age |
+----+------+-----+
| 15 | Paul | 26 |
+----+------+-----+
| 56 | John | 31 |
+----+------+-----+
| 72 | | |
+----+------+-----+
Topic Table after
+----+------------+
| id | name |
+----+------------+
| 2 | |
+----+------------+
| 5 | Business |
+----+------------+
| 12 | General |
+----+------------+
So now that I have this, I can make my request to their API and look up the data for the user with id 72 and the topic with id 2.
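MySQL foreign keys cannot create the missing parent row automatically, so a common workaround is to insert placeholder parents first; a hedged sketch (it assumes user.name/age and topic.name are nullable so that empty rows are allowed):

    -- Create placeholder parents only if they are missing, then save the post.
    INSERT IGNORE INTO user (id) VALUES (72);
    INSERT IGNORE INTO topic (id) VALUES (2);

    INSERT INTO post (userId, topicId, text, createdAt, updatedAt)
    VALUES (72, 2, 'This is a new topic', NOW(), NOW());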
user2707590
(111 rep)
Jul 20, 2016, 07:52 AM
• Last activity: Jul 9, 2025, 05:02 PM
3
votes
1
answers
188
views
Primary key addition for a table which contains no data is too slow
We are using MySQL. We are running 10 ALTER queries: 3 primary key deletions, 4 column additions, and 3 primary key additions. These ALTER queries run against three tables that don't contain any data, yet the system took 15s to run them. Usually the system doesn't take this long for an ALTER query, and for tables containing no data I am not sure why it takes this much time.
I am not sure whether it is because of the primary key addition and deletion, a wrong MySQL configuration, or the machine itself.
The system is slow only during this migration; before and after it, the system behaves normally.
Can anyone help me with this one?
Below are the queries; I was running three sets like this:

    ALTER TABLE Table1 DROP PRIMARY KEY;
    ALTER TABLE Table1 ADD Column3 INT(10) DEFAULT '1';
    ALTER TABLE Table1 ADD PRIMARY KEY Table_PK(Column1, Column2, Column3);
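One hedged observation: the three statements can be combined into a single ALTER, which lets MySQL rebuild the (empty) table once instead of three times; a sketch:

    ALTER TABLE Table1
        DROP PRIMARY KEY,
        ADD Column3 INT(10) DEFAULT '1',
        ADD PRIMARY KEY (Column1, Column2, Column3);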
user636856
(53 rep)
Jan 9, 2016, 03:10 PM
• Last activity: Jun 24, 2025, 10:04 PM
1
votes
1
answers
189
views
Column containing keys from columns of another table
I have two CSV files, countries and birth_rate_stat, with the following data:

| ISO | ISO3 | ISO_CODE | FIPS | Display Name | Currency Name | Phone_Code | Region Code |
|-----|------|----------|------|--------------|---------------|------------|-------------|
| AF | AFG | 4 | AF | Afghanistan | Afghani | 93 | Asia |
| AX | ALA | 248 | | Aland Islands | Euro | 340 | Europe |

| country_code | country_name | year | birth_rate |
|--------------|--------------|------|------------|
| SI | Slovenia | 2036 | 7.59 |
| SI | Slovenia | 2022 | 7.52 |
and I created two tables:

    CREATE TABLE countries (
      iso varchar(2) NOT NULL,
      iso3 varchar(3) DEFAULT NULL,
      iso_code int(11) NOT NULL DEFAULT 0,
      fips varchar(2) DEFAULT NULL UNIQUE,
      display_name varchar(255) DEFAULT NULL,
      currency_name varchar(255) DEFAULT NULL,
      phone_code int(11) DEFAULT NULL,
      region_code int(11) DEFAULT NULL,
      PRIMARY KEY (iso)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    CREATE TABLE birth_rate_stat (
      iso varchar(2) NOT NULL,
      year int(11) NOT NULL,
      crude_birth_rate float NOT NULL,
      PRIMARY KEY (iso, year),
      CONSTRAINT crude_birth_rate_ibfk_1
        FOREIGN KEY (iso) REFERENCES countries (iso)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
The problem I face while trying to connect the birth_rate_stat table with the countries table is that the country_code column in birth_rate_stat contains entries from both the ISO and the FIPS columns of the countries table. That means I can't match on iso as a primary key, nor on fips, as fips contains NULL values. I tried to create a country_id column which would distinguish the countries, but then I get the error "Cannot add or update a child row: a foreign key constraint fails", which makes sense since that id doesn't exist in birth_rate_stat.
Do you have some suggestions as to how I could connect the two tables?
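One hedged approach: load the CSV into a staging table first (birth_rate_raw is a hypothetical name), resolve country_code against either column, and insert only the canonical iso; a sketch:

    -- Map each raw country_code to the canonical iso, whether the code was
    -- an ISO code or a FIPS code, then populate the constrained table.
    INSERT INTO birth_rate_stat (iso, year, crude_birth_rate)
    SELECT c.iso, r.year, r.birth_rate
    FROM birth_rate_raw AS r
    JOIN countries AS c
      ON c.iso = r.country_code
      OR c.fips = r.country_code;

Rows whose code matches neither column are silently dropped by the inner join, so an anti-join check afterwards may be worthwhile.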
Red Tornado
(11 rep)
Apr 17, 2023, 02:59 PM
• Last activity: Jun 20, 2025, 06:01 PM
1
votes
1
answers
233
views
Should I be using a non-clustered primary key on multiple columns?
I have a very large table that will only continue to get larger. It contains hourly interval meter usage data for around 22,000 meters.
So, every day 22,000 meters * 24 hours = 528,000 records are created. The current plan is to load new interval data into the table on a monthly basis.
One problem I'm having is that there's really no unique identifier, so I'm not sure how best to set up the primary key.
Here's an example of data for 24 hours for two meters:
|ServiceLocation|MeterNumber|IntervalDay|IntervalHour|Demand|
|-----|-----|----------|-|----|
|111111|22222|2013-01-21|1|0.01|
|111111|22222|2013-01-21|2|0.01|
|111111|22222|2013-01-21|3|0.01|
|111111|22222|2013-01-21|4|0.01|
|111111|22222|2013-01-21|5|0.01|
|111111|22222|2013-01-21|6|0.01|
|111111|22222|2013-01-21|7|0.02|
|111111|22222|2013-01-21|8|0.02|
|111111|22222|2013-01-21|9|0.03|
|111111|22222|2013-01-21|10|0.03|
|111111|22222|2013-01-21|11|0.03|
|111111|22222|2013-01-21|12|0.04|
|111111|22222|2013-01-21|13|6.55|
|111111|22222|2013-01-21|14|6.39|
|111111|22222|2013-01-21|15|7.70|
|111111|22222|2013-01-21|16|8.52|
|111111|22222|2013-01-21|17|8.85|
|111111|22222|2013-01-21|18|6.88|
|111111|22222|2013-01-21|19|5.90|
|111111|22222|2013-01-21|20|5.90|
|111111|22222|2013-01-21|21|5.90|
|111111|22222|2013-01-21|22|6.06|
|111111|22222|2013-01-21|23|5.40|
|111111|22222|2013-01-21|24|5.73|
|555555|33333|2013-01-21|1|0.01|
|555555|33333|2013-01-21|2|0.01|
|555555|33333|2013-01-21|3|0.01|
|555555|33333|2013-01-21|4|0.01|
|555555|33333|2013-01-21|5|0.01|
|555555|33333|2013-01-21|6|0.01|
|555555|33333|2013-01-21|7|0.02|
|555555|33333|2013-01-21|8|0.02|
|555555|33333|2013-01-21|9|0.03|
|555555|33333|2013-01-21|10|0.03|
|555555|33333|2013-01-21|11|0.03|
|555555|33333|2013-01-21|12|0.04|
|555555|33333|2013-01-21|13|6.55|
|555555|33333|2013-01-21|14|6.39|
|555555|33333|2013-01-21|15|7.70|
|555555|33333|2013-01-21|16|8.52|
|555555|33333|2013-01-21|17|8.85|
|555555|33333|2013-01-21|18|6.88|
|555555|33333|2013-01-21|19|5.90|
|555555|33333|2013-01-21|20|5.90|
|555555|33333|2013-01-21|21|5.90|
|555555|33333|2013-01-21|22|6.06|
|555555|33333|2013-01-21|23|5.40|
|555555|33333|2013-01-21|24|5.73|
And an example query that we would execute is:
SELECT
IntervalDay,
SUM(Demand)
FROM
LoadData
WHERE
MeterNumber = '33333' AND
IntervalDay >= '2013-01-21' AND
IntervalDay < '2013-01-22'
GROUP BY
IntervalDay
And a lot of times we'd aggregate using the IntervalHour field too, or perhaps not specify a MeterNumber in the WHERE clause at all to get usage across all meters.
I'm having trouble figuring out what type of primary key I should have for this large table.
It currently has a non-clustered primary key on the ServiceLocation, MeterNumber, IntervalDay, and IntervalHour fields. Does this make sense? There is no clustered index at this time.
The ServiceLocation is the ID for the physical property, so it will never change for a location. However, the MeterNumber can change (e.g., a meter fails and needs to be replaced), but there will only ever be one MeterNumber at a ServiceLocation.
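A hedged sketch of one option implied by the question: recreate the key as a clustered index whose leading columns match the example query's predicates (constraint name assumed):

    -- Cluster on the columns the queries filter by; MeterNumber + IntervalDay
    -- matches the WHERE clause, and the trailing columns keep the key unique.
    ALTER TABLE dbo.LoadData DROP CONSTRAINT PK_LoadData;

    ALTER TABLE dbo.LoadData
    ADD CONSTRAINT PK_LoadData
        PRIMARY KEY CLUSTERED (MeterNumber, IntervalDay, IntervalHour, ServiceLocation);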
anon
(11 rep)
May 11, 2022, 05:33 PM
• Last activity: Jun 20, 2025, 11:00 AM
2
votes
1
answers
278
views
Change Tracking and Primary keys
I have a SQL Server database with Change Tracking enabled on a handful of tables, and it's used by another application in the cloud to sync data between both databases.
When an existing customer record in SQL Server needs to be synced to the cloud, for the first time, it's important that the order in which data is synced is done correctly as it would not make sense to sync the address data first without its pre-existing customer record (in the cloud), so I've written code to sync data for existing customers in a specific order so as to not cause sync errors at the destination cloud data app.
I don't know which columns within the eight tables need to be synced, so I've just been updating one column, the primary key, and upon checking the system data captured by Change Tracking, I've noticed that ALL columns within a table show as "updated".
This is good because it has saved me a lot of scripting, I mean some of the tables are wide.
I assumed that this is the case because a clustered-index PK *is* the data, so updating the PK to the same value would cause all columns to update too.
Am I right to assume this?
Interestingly, if I update the value of a PK that is NOT clustered, it has the same effect, although the table does have a clustered index; it's just not the PK.
I was expecting the PK to be the only column showing as updated, but I am not complaining; I just need to verify that updating the PK is a shortcut to scripting updates for all columns.
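A hedged way to verify what Change Tracking actually flags (placeholder table/column names): SYS_CHANGE_COLUMNS is only populated when the table was enabled WITH (TRACK_COLUMNS_UPDATED = ON); when it is NULL, consumers must treat all columns as changed, which would also explain seeing every column as "updated".

    -- Inspect the per-column change mask for rows changed since version 0.
    DECLARE @last_sync_version bigint = 0;

    SELECT ct.SYS_CHANGE_OPERATION,
           ct.SYS_CHANGE_COLUMNS,
           CHANGE_TRACKING_IS_COLUMN_IN_MASK(
               COLUMNPROPERTY(OBJECT_ID('dbo.Customer'), 'Address', 'ColumnId'),
               ct.SYS_CHANGE_COLUMNS) AS AddressChanged
    FROM CHANGETABLE(CHANGES dbo.Customer, @last_sync_version) AS ct;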
MindDData
(123 rep)
Jun 18, 2025, 06:43 AM
• Last activity: Jun 18, 2025, 08:34 AM
3
votes
1
answers
8396
views
PostgreSQL SELECT primary key as "serial" or "bigserial"
I've meshed together a way to determine the data_type, as in the data_type you use in the syntax when creating a new table, based off of the PostgreSQL wiki page.
If there is something wrong with my query, I need to know *what*, in a given scenario, would throw it off: specifically, a query or queries I can run against a purely test database/table (modifying that database/table as needed) in order to test for any false positives.
SELECT pg_attribute.attname,
format_type(pg_attribute.atttypid, pg_attribute.atttypmod),
CASE
WHEN format_type(pg_attribute.atttypid, pg_attribute.atttypmod)='bigint' THEN 'bigserial'
WHEN format_type(pg_attribute.atttypid, pg_attribute.atttypmod)='integer' THEN 'serial'
END AS type
FROM pg_index, pg_class, pg_attribute
WHERE pg_class.oid = 'delete2'::regclass
AND indrelid = pg_class.oid
AND pg_attribute.attrelid = pg_class.oid
AND pg_attribute.attnum = any(pg_index.indkey)
AND indisprimary;
Here is a table with a primary key that does not return the primary key with this query:
CREATE TABLE delete_key_bigserial (
test1 integer,
id bigserial NOT NULL,
col1 text,
col2 text,
test2 integer
);
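A hedged alternative check: serial/bigserial expand to an integer/bigint column that owns a sequence, and pg_get_serial_sequence() exposes that ownership; a sketch against the table above:

    -- List each column and the sequence it owns, if any; a non-NULL owned
    -- sequence plus an integer/bigint type is what serial/bigserial create.
    SELECT a.attname,
           format_type(a.atttypid, a.atttypmod) AS data_type,
           pg_get_serial_sequence('delete_key_bigserial', a.attname) AS owned_sequence
    FROM pg_attribute AS a
    WHERE a.attrelid = 'delete_key_bigserial'::regclass
      AND a.attnum > 0
      AND NOT a.attisdropped;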
John
(769 rep)
Jan 29, 2015, 02:30 PM
• Last activity: May 30, 2025, 12:44 AM
0
votes
1
answers
319
views
Returning auto-generated primary key after SQL insertion
I am trying to submit data to a database and then have it return the value of the user_id (primary key) created by the insertion, but I can't get it to return the value. The same query works just fine in phpMyAdmin. I know this should be simple, but I just can't get it. Here is my code (I do have values to connect to the database; I just didn't include them for privacy):
    $dbc = @mysqli_connect(DB_HOST, DB_USER, DB_PASSWORD, DB_SITE)
        OR die('Could not connect to MySQL: ' . mysqli_connect_error());

    $nameQuery = "INSERT INTO nameReunion
        (username, password, email, firstName, middleName, lastName, maidenName, lastUpdate)
        VALUES
        ('$username', '$password', '$email', '$fname', '$miname', '$lname', '$maname', CURDATE());";

    $r = @mysqli_query($dbc, $nameQuery);
    if (!$r) {
        echo "The database was not updated.";
    } else {
        echo 'The connection was successful';
    }

    $getUserID = "SELECT user_id FROM nameReunion WHERE username='$username';";
    $r2 = @mysqli_query($dbc, $getUserID);
    echo $r2;
    echo $getUserID;

Please help. I have spent wayyy too much time on something I fear should be very simple.
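For what it's worth, MySQL itself exposes the auto-generated key of the last INSERT on the same connection (this is what PHP's mysqli_insert_id() reads), so re-querying by username isn't necessary; a sketch:

    -- Returns the AUTO_INCREMENT value generated by the most recent INSERT
    -- on this connection.
    SELECT LAST_INSERT_ID();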
CR_BU
(17 rep)
Apr 5, 2015, 02:54 AM
• Last activity: May 13, 2025, 07:03 PM
0
votes
1
answers
286
views
Database Design - multiple tables
I am trying to build a database that has all of our employees, their skills, and the level of their skills. The idea is that a manager could go in, type in a skill, and relevant employees are returned.
I am finding that I am getting stuck on PK's and FK's - and really how to go about it altogether.
So far, I believe I need to create 3 tables (Employees, Skills that will be separated into 4 general areas, and Skill Level which will be 5 different options).
With that, I am at a loss for how I can connect them in a way that each employee can be attached with multiple skills that also shows at what level they're at.
Not necessarily looking for someone to do the work for me, but if someone can clarify whether or not this logic will work and how to get on track - I probably can research the sql code for it.
Thanks!
---EDIT---
Someone raised a good point: a many-to-many relationship seems like the best fit, but the skills table is something I am not understanding.
If I create a table/entity [Skills] that has the four attributes (financial, technology, creative, other) with specifics listed in each column (i.e. financial would have accounting, recon, etc.), would the skill level be another table?
OR
Would there be separate tables for the four skill areas, with the specifics listed as columns and the skill levels under each?
I'm going crazy trying to figure out the best way to go about this.
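A hedged sketch of the usual many-to-many shape for this (names and types are assumptions): skills live in one table with a category column, and a junction table carries the level, so each employee can have many skills, each at its own level.

    CREATE TABLE employee (
        id   INT AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(255) NOT NULL
    );

    CREATE TABLE skill (
        id       INT AUTO_INCREMENT PRIMARY KEY,
        name     VARCHAR(255) NOT NULL,
        category VARCHAR(50) NOT NULL  -- financial, technology, creative, other
    );

    CREATE TABLE employee_skill (
        employee_id INT NOT NULL,
        skill_id    INT NOT NULL,
        skill_level TINYINT NOT NULL,  -- 1 through 5
        PRIMARY KEY (employee_id, skill_id),
        FOREIGN KEY (employee_id) REFERENCES employee (id),
        FOREIGN KEY (skill_id) REFERENCES skill (id)
    );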
Chanel Tapper
(1 rep)
Jan 9, 2019, 06:59 PM
• Last activity: May 10, 2025, 11:05 AM
1
votes
1
answers
69
views
For two oracle tables is it possible to keep their primary keys in separate integer ranges?
For two Oracle database tables in the same schema,
is it possible to keep their primary keys totally separate, so that an integer representing a primary key would appear in **at most one of these tables**?
If so, how is this done?
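One hedged way (assuming Oracle 12c or later for the DEFAULT ... NEXTVAL syntax): give each table its own sequence and interleave the values so the two ranges can never collide.

    -- Table A draws odd numbers, table B draws even numbers, so any integer
    -- can appear as a primary key in at most one of the two tables.
    CREATE SEQUENCE seq_table_a START WITH 1 INCREMENT BY 2;
    CREATE SEQUENCE seq_table_b START WITH 2 INCREMENT BY 2;

    CREATE TABLE table_a (id NUMBER DEFAULT seq_table_a.NEXTVAL PRIMARY KEY);
    CREATE TABLE table_b (id NUMBER DEFAULT seq_table_b.NEXTVAL PRIMARY KEY);

Sharing one sequence between both tables achieves the same guarantee.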
JosephDoggie
(203 rep)
Apr 10, 2025, 06:55 PM
• Last activity: May 7, 2025, 01:02 PM
2
votes
1
answers
831
views
Mysql - Table structure to hold history log without any primary key
I have a MySQL table that holds log data for some products; this data is deleted and re-inserted by a service every 5 minutes.
This is the table:

| id | product_id | dev_stage_id | production_order | log | log_type | date |

Right now id is the auto-increment primary key, but it is in fact useless, as we never use it. Every other column is non-unique, so we cannot form a different composite primary key.
As I said before, a service deletes all rows by product_id and dev_stage_id every 5 minutes, and then new data is created. This means id gets very big very quickly.
The data is later used for display using a simple query like:

    SELECT * FROM table WHERE product_id = 1231 AND dev_stage_id = 233

Is it OK in our case to just remove the id column and have no primary key, or should we have a different structure?
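A hedged sketch of the alternative (product_log is a placeholder table name): if id goes away, a composite index matching the delete/select pattern is still wanted; note that InnoDB synthesizes a hidden clustered key when no primary key exists.

    ALTER TABLE product_log DROP COLUMN id;
    ALTER TABLE product_log ADD INDEX idx_product_stage (product_id, dev_stage_id);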
SilentRage47
(121 rep)
May 26, 2022, 09:54 AM
• Last activity: May 7, 2025, 12:08 AM
2
votes
3
answers
6697
views
Does a composite primary key break the first normal form?
Does having a composite/compound key (using multiple columns/attributes) as the primary key break first normal form? (I am assuming that this is handled by the DBMS, so no single attribute containing the composite key exists in the table/relation itself, and the relation therefore still complies with first normal form, but is this correct?) Thanks.
Daniel Valland
(425 rep)
Nov 29, 2015, 08:00 PM
• Last activity: Apr 19, 2025, 08:50 PM
1
votes
1
answers
1047
views
Duplicate rows with primary key constraint in PostgreSQL 10.6
I have a table that has had CONSTRAINT table_pkey PRIMARY KEY (id) since setup.
However, I recently found around 100 duplicate rows; I ran

    select count(*) from (select count(*) from table group by id having count(*) > 1) as t1

as per this old stack post. However, I'm running Postgres 10.6, which should be far past that bug.
Any ideas?
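Duplicates coexisting with a primary key usually point at index corruption rather than constraint logic, so a hedged first check is the index itself:

    -- Is the PK index still marked valid in the catalog?
    SELECT i.indisvalid, i.indisready
    FROM pg_index AS i
    WHERE i.indexrelid = 'table_pkey'::regclass;

    -- Rebuilding will fail while duplicates are present, which at least
    -- confirms the index no longer reflects the data.
    REINDEX INDEX table_pkey;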
mckennab
(111 rep)
Jul 3, 2019, 11:33 PM
• Last activity: Apr 18, 2025, 04:05 AM
1
votes
1
answers
5305
views
Updating table basing on composite primary keys from derived table
I have two tables, TableA and TableB, defined as follows:
TableA has 4 columns: id1 (primary key), id2 (primary key), code, enable.
TableB has 3 columns: id (primary key), code, loc.
TableB.code corresponds to TableA.code.
I would like to update TableA.enable for the entries whose TableB.loc equals 'foo', in MySQL. Under general conditions this should do the work:
    UPDATE TableA
    JOIN TableB ON TableA.code = TableB.code
    SET TableA.enable = 1
    WHERE TableB.loc = 'foo'
But in MySQL Workbench I got the following error (since TableB.loc is not a primary key):
> Error Code: 1175. You are using safe update mode and you tried to
> update a table without a WHERE that uses a KEY column. To disable
> safe mode, toggle the option in Preferences -> SQL Editor and
> reconnect.
So I attempted to nest a query that fetches the two primary key values from TableA, and update TableA with it:
    UPDATE TableA
    SET TableA.enable = 1
    WHERE EXISTS (
        SELECT * FROM (
            SELECT TableA.id1 AS id1, TableA.id2 AS id2
            FROM TableA
            LEFT JOIN TableB ON TableA.code = TableB.code
            WHERE TableB.loc = 'foo'
        ) t
        WHERE t.id1 = TableA.id1
          AND t.id2 = TableA.id2
    );
But it throws the same error, so I tried to see if I could use the primary key columns in my WHERE statement. I also tried this query:
    UPDATE TableA
    SET TableA.enable = 1
    WHERE (
        (TableA.id1, TableA.id2) = (
            SELECT dummy.id1, dummy.id2
            FROM (
                SELECT TableA.id1 AS id1, TableA.id2 AS id2
                FROM TableA
                LEFT JOIN TableB ON TableA.code = TableB.code
                WHERE TableB.loc = 'foo'
            ) AS dummy
        )
    );
But it also throws error 1175.
On the other hand, I tried something like this:
    UPDATE TableA
    SET TableA.enable = 1
    WHERE (
        TableA.id1 = (
            SELECT dummy.id1
            FROM (
                SELECT TableA.id1 AS id1, TableA.id2 AS id2
                FROM TableA
                LEFT JOIN TableB ON TableA.code = TableB.code
                WHERE TableB.loc = 'foo'
            ) AS dummy
        ) AND
        TableA.id2 = (
            SELECT dummy.id2
            FROM (
                SELECT TableA.id1 AS id1, TableA.id2 AS id2
                FROM TableA
                LEFT JOIN TableB ON TableA.code = TableB.code
                WHERE TableB.loc = 'foo'
            ) AS dummy
        )
    );
It works, but repeating the subquery for dummy twice makes the code very clumsy. **It is obvious to me that I can disable safe update mode to perform this query**, but I just hope to learn whether there are any methods for matching the rows using the two primary keys (TableA.id1 and TableA.id2) without repeating the dummy derived table and **without disabling safe update mode**.
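A hedged variation on the first attempt: materialize the matching key pairs once in a derived table and join back on TableA's primary key columns; since the join condition now references the key, safe update mode may accept it (untested sketch):

    UPDATE TableA
    JOIN (
        SELECT a.id1, a.id2
        FROM TableA AS a
        JOIN TableB AS b ON a.code = b.code
        WHERE b.loc = 'foo'
    ) AS t ON t.id1 = TableA.id1 AND t.id2 = TableA.id2
    SET TableA.enable = 1;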
HYChan
(9 rep)
Sep 19, 2019, 10:02 AM
• Last activity: Apr 14, 2025, 04:11 AM
34
votes
4
answers
31455
views
Retrieving all PK and FK
I have a big database that I need to extract all primary keys and foreign keys from each table.
I have pgAdmin III.
Is there a way to do this automatically and not go over each table manually?
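A hedged sketch of one automatic route: the standard information_schema views hold this, so a single query can list every PK and FK with its columns (public schema assumed):

    -- One row per constraint column, for all primary and foreign keys.
    SELECT tc.table_name,
           tc.constraint_type,
           tc.constraint_name,
           kcu.column_name
    FROM information_schema.table_constraints AS tc
    JOIN information_schema.key_column_usage AS kcu
      ON kcu.constraint_name   = tc.constraint_name
     AND kcu.constraint_schema = tc.constraint_schema
    WHERE tc.constraint_type IN ('PRIMARY KEY', 'FOREIGN KEY')
      AND tc.table_schema = 'public'
    ORDER BY tc.table_name, tc.constraint_name, kcu.ordinal_position;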
Nick Ginanto
(1009 rep)
Mar 19, 2013, 07:24 AM
• Last activity: Apr 3, 2025, 10:15 PM
1
votes
1
answers
412
views
Ensure same tenant in a Many to Many relationship
I am trying to draft a very simple data model for a multi-tenant database.
So far I have only 3 tables:
- **company**: pretty much my tenant
- **account**: is part of one company and zero or more groups
- **account_group**: same as account: one company and zero or more accounts
Here is the SQL:
CREATE TABLE company (
name TEXT NOT NULL,
PRIMARY KEY (name)
);
CREATE TABLE account (
email_address TEXT NOT NULL,
company_name TEXT NOT NULL,
PRIMARY KEY (email_address),
FOREIGN KEY (company_name) REFERENCES company (name)
);
CREATE TABLE account_group (
name TEXT NOT NULL,
company_name TEXT NOT NULL,
PRIMARY KEY (name, company_name),
FOREIGN KEY (company_name) REFERENCES company (name)
);
CREATE TABLE account_group_membership (
account_group_name TEXT NOT NULL,
account_group_company_name TEXT NOT NULL,
account_email_address TEXT NOT NULL,
PRIMARY KEY (account_group_name, account_group_company_name, account_email_address),
FOREIGN KEY (account_group_name, account_group_company_name) REFERENCES account_group (name, company_name),
FOREIGN KEY (account_email_address) REFERENCES account (email_address)
);
In this specific case, there is no constraint between the Account Company and Group Company.
After doing some searching, I found this question and tried to add the following to my account_group_membership table:
CREATE TABLE account_group_membership (
account_group_name TEXT NOT NULL,
account_group_company_name TEXT NOT NULL,
account_email_address TEXT NOT NULL,
account_company_name TEXT NOT NULL,
PRIMARY KEY (account_group_name, account_group_company_name, account_email_address),
FOREIGN KEY (account_group_name, account_group_company_name) REFERENCES account_group (name, company_name),
FOREIGN KEY (account_email_address, account_company_name) REFERENCES account (email_address, company_name),
CONSTRAINT same_company CHECK (account_group_company_name = account_company_name)
);
Unfortunately this doesn't work, since the company_name column of the account table is not unique, so (email_address, company_name) is not a declared key that the composite foreign key can reference.
I cannot:
1. Make the column company_name unique, since multiple accounts can belong to the same company.
2. Use email_address and company_name as the primary key for account: the email_address should be unique in the whole table. Also, account will mostly be queried by email_address and only from time to time by company_name.
3. See any other solution, actually...
So, is there any simple way to do this (simple as in easy to understand and maintain)?
Alternatively, does having this problem mean that there is something wrong with the design?
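A hedged sketch of a common resolution: a composite foreign key only needs a UNIQUE constraint on its target, not a primary key, so a redundant-looking superkey on account removes the need for the CHECK:

    -- (email_address, company_name) is trivially unique because email_address
    -- already is; declaring it lets a composite FK carry the tenant along.
    ALTER TABLE account
        ADD CONSTRAINT account_email_company_uq
        UNIQUE (email_address, company_name);

    -- account_group_membership can then declare, instead of the CHECK:
    --   FOREIGN KEY (account_email_address, account_group_company_name)
    --     REFERENCES account (email_address, company_name)
    -- forcing the member's company to match the group's company.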
ITChap
(229 rep)
Apr 6, 2017, 04:44 AM
• Last activity: Mar 19, 2025, 06:36 AM
4
votes
1
answers
1485
views
AWS Aurora Mysql seemingly not picking PRIMARY or secondary index
Relevant System Info:
Aurora Mysql 8.0.mysql_aurora.3.03
rg6.xl instances (1 writer, 2 read replicas)
Total size: 5.5TB (all databases combined, or just looking at the most recent snapshot)
I have been working on migrating MariaDB databases off the deprecated TokuDB engine to RDS Aurora MySQL. With some fine tuning of the parameters in RDS, there has been one behavior I cannot understand.
**Scenario**
The main database houses tables created by each year, so let's use TABLE_2023 as an example. This table has two indexes: the PRIMARY index (ID, DATE) and a secondary index (DATE). A select count(*) on this year's table brings back 1,809,962,901 rows.

If I take a query like the one below and run EXPLAIN on it, the output shows NULL for the key column. I found this odd, since running the same query against the existing mariadb server outputs one of the indexes in that column. Even if I force the query to use an index:

    SELECT * FROM TABLE_2023 FORCE INDEX(PRIMARY)
    WHERE ID = '3'
      AND TIME >= 202301010000
      AND TIME <= 202304170000

EXPLAIN ANALYZE still reports a table scan:

    Filter: ((TABLE_2023.TIME >= 202304170000) and (TABLE_2023.TIME <= ...))
    Table scan on TABLE_2023 [Using parallel query (6 columns)] (cost=404.85 rows=2296760704) (actual time=18271.459..244721.615 rows=1822 loops=1) [parallel query actual (total time=244722.063, rows returned=1822)]

**CREATE TABLE**
    CREATE TABLE TABLE_2023 (
      SOMETHING_ID mediumint NOT NULL,
      TIME bigint unsigned NOT NULL,
      COLUMN3 tinyint DEFAULT NULL,
      COLUMN4 smallint DEFAULT NULL,
      COLUMN5 smallint DEFAULT NULL,
      JSON varchar(2048) CHARACTER SET latin1 COLLATE latin1_swedish_ci DEFAULT NULL,
      PRIMARY KEY (SOMETHING_ID, TIME),
      KEY TIME (TIME)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
**API Behaviors**
Users of the API can request a range of data. However, the query behavior is almost always asking the database for time-series data for either a single ID or multiple IDs over the same time range. Currently, the code behind the API uses "TIME >= '' AND TIME <= ''" for a time range, not BETWEEN. Nor does it use "ID = '1'" for a single station, but rather IN. I have recommended the developers change the code to use "=" when one station is requested, to avoid the table scan.
**Final Thoughts**
Can we get away with forcing the index we want the query to use? Of course. But the behavior we are seeing, with Aurora MySQL unable to pick the index without being forced, bothers me, especially when the same queries against our current database environment use an index regardless of being forced. The only time I am able to get an index used is with simple queries:
EXPLAIN select * from TABLE_2023 where SOMETHING_ID in (1206); --index PRIMARY "used"
EXPLAIN select * from TABLE_2023 where SOMETHING_ID = '1206'; --index PRIMARY "used"
EXPLAIN SELECT * FROM TABLE_2023 where TIME = '202304170000'; --index TIME "used"
**Update as of 05/01/2023**
I dropped the year table (2023) and reimported all the data from our source database using DMS (again). After this was done, I added the secondary index (TIME), since DMS will usually only grab the primary index. Once that finished, I took the same example query, and my EXPLAIN results still show no index usage.

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html

I came across this document when reviewing how I was loading data into tables. It turns out AWS recommends dropping the primary index when doing a full table load. Going to give this a try: load the data, then re-add just the PRIMARY index.

**Update as of 05/02/2023**

After creating a new table without the PRIMARY index and secondary index, I loaded the data into the table via DMS. Once completed, I waited 6+ hours for the PRIMARY key index to build. Unfortunately, the behavior still exists. However, I have a scheduled call with AWS reps that I hope will produce an answer for this behavior. Once an answer is vetted, I will post it here.
**Update as of 05/05/2023**
After my meeting with the AWS account managers, they will be relaying the technical information to their engineers to take a look at. In the meantime, I conducted a few more tests:
*Scenario 1:*
Create table with primary key index only.
Load 38,000 rows, run EXPLAIN with the example query: primary index would have been used.

*Scenario 2:*
Create table with primary key index only.
Load 23 million rows, run EXPLAIN with the example query: primary index identified, but not used.

*Scenario 3:*
Drop primary index in the table from Scenario 2.
Re-add primary index, run EXPLAIN with the example query: primary index identified, but not used.

*Scenario 4:*
Create table without primary key index.
Load 23 million rows, add primary key index, run EXPLAIN with the example query: primary index identified, but not used.

*Scenario 5:*
Create table without primary key index.
Load 38,000 rows, add primary key, run EXPLAIN with the example query: primary index identified and used.

*Scenario 6:*
Create table without primary key index.
Load around 12 million rows, add primary key, run EXPLAIN with the example query: primary index identified, but not used.
**Update as of 05/08/2023**
Tried the suggested answer; same behavior. I ended up doing the following, but AWS Aurora MySQL behaved the same:

    ALTER TABLE TABLE_2023
      ADD PRIMARY KEY (SOMETHING_ID, TIME),
      ADD UNIQUE KEY TIME_SOMETHING_ID (TIME, SOMETHING_ID);
**Update as of 05/10/2023**
As suggested by Rolando, I dropped the TIME index and added the following:

    ALTER TABLE TABLE_2023
      ADD UNIQUE INDEX TIME_SOMETHING_ID (TIME, SOMETHING_ID);

I then ran the example query from the beginning of this post. Here are the results:

**EXPLAIN**

[screenshot of EXPLAIN output]

**CREATE TABLE Results**

[screenshot of SHOW CREATE TABLE output]

**Query Results**

1822 rows in set (52.513 sec)

So, again, same behavior. Before I reverse the primary key order (as I did before), I am waiting to see a response from Rolando for any further suggestions.

**Update as of 05/16/2023**

I have another meeting planned with AWS, but this time with an Aurora MySQL specialist. Before the meeting, they suggested turning off aurora_parallel_query.

Upon doing this, I ran the same example query and was shocked to see the results. The EXPLAIN results showed:

|id|select_type|table|partitions|type|possible_keys|key|key_len|ref|rows|filtered|Extra|
|--|-----------|-----|----------|----|-------------|---|-------|---|----|--------|-----|
|1|SIMPLE|TABLE_2023||range|PRIMARY,time_Something_ID|PRIMARY|11||1823|100.0|Using where|

And the query itself, when run, completed extremely quickly.

However, before I write this off as the answer, I am curious why this is the solution. Amazon markets Aurora Parallel Query as a benefit of moving to Aurora, so my use case must simply not benefit from it. I will post the details of the meeting here when I have them.

**Update as of 05/25/2023**

Same behavior with 3.03.1. Sending AWS a snapshot of our TABLE_2023 with a bug report.

**Update as of 06/12/2023**

AWS internally identified the "bug" and the fix is set to be released in public versions 3.04.0 and 3.03.2. These are projected to come out at the end of this quarter or the beginning of the next.
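For reference, a hedged sketch of the session-level toggle described in the 05/16 update (assuming the switch is the aurora_parallel_query session variable, as that update suggests):

    -- Disable Aurora parallel query for this session only, then re-check
    -- the optimizer's index choice for the example query.
    SET SESSION aurora_parallel_query = OFF;
    EXPLAIN SELECT * FROM TABLE_2023
    WHERE SOMETHING_ID = '1206'
      AND TIME >= 202301010000
      AND TIME <= 202304170000;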
Randoneering
(135 rep)
Apr 27, 2023, 09:34 PM
• Last activity: Mar 18, 2025, 07:03 PM
Showing page 1 of 20 total questions