Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
2
answers
166
views
Does putting foreign keys in another strong entity equal bad design and what is that table?
I see so many tables designed like this: https://www.quackit.com/sql_server/sql_server_2016/tutorial/create_a_relationship_in_sql_server_2016.cfm Where the foreign key is dropped into another table which in my mind is a str...
I see so many tables designed like this:
https://www.quackit.com/sql_server/sql_server_2016/tutorial/create_a_relationship_in_sql_server_2016.cfm
Where the foreign key is dropped into another table which in my mind is a strong entity in itself.
Artists (**ArtistId**, ArtistName)
Albums (**AlbumId**, AlbumName, ReleaseDate, ArtistId, GenreId)
Genres (**GenreId**, Genre)
This design is convenient, but what is the Albums table then? Is it strong, weak, or a linking table?
Instead, I would have thought a good design is this:
Artists (**ArtistId**, ArtistName)
Albums (**AlbumId**, AlbumName, ReleaseDate)
Genres (**GenreId**, Genre)
ArtistCreatedAlbum (**ArtistId**, **AlbumId**)
AlbumHasGenre (**AlbumId**, **GenreId**)
Strong Entity (Artists, Albums, Genres)
Linking Table (ArtistCreatedAlbum, AlbumHasGenre)
Even if this example doesn't quite make sense, the point stands: so many designs drop foreign keys into a strong-ish entity because it's convenient, e.g.:
Person (id, name, email, car_id, house_id, school_id)
and in my mind that shouldn't be done.
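The difference between the two shapes can be written out in DDL. A minimal sketch (column types are assumed; names come from the question): the FK-in-Albums version enforces "each album has exactly one artist", while the junction table is the shape you need when the relationship is many-to-many:

```sql
CREATE TABLE Artists (
    ArtistId   INT PRIMARY KEY,
    ArtistName VARCHAR(100) NOT NULL
);

-- One-to-many: each album belongs to exactly one artist.
CREATE TABLE Albums (
    AlbumId     INT PRIMARY KEY,
    AlbumName   VARCHAR(100) NOT NULL,
    ReleaseDate DATE,
    ArtistId    INT NOT NULL REFERENCES Artists (ArtistId)
);

-- Many-to-many: the junction table is only needed when an album can
-- have several artists (and an artist several albums).
CREATE TABLE ArtistCreatedAlbum (
    ArtistId INT NOT NULL REFERENCES Artists (ArtistId),
    AlbumId  INT NOT NULL REFERENCES Albums (AlbumId),
    PRIMARY KEY (ArtistId, AlbumId)
);
```

In the one-to-many case, Albums stays a strong entity; the FK column is just the relationship's implementation, not a sign of weakness.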

tickmeister
(1 rep)
Apr 18, 2024, 09:40 AM
• Last activity: Jul 17, 2025, 08:00 AM
-1
votes
1
answers
167
views
What data "MUST" be stored inside a "relational database"?
I am working on a mobile/web app like Instagram. Considering that, I want to know for what part of the app it's better to use `relational` and for what other parts `non-relational` databases. I have chosen `MySQL` and `Cassandra` databases and this is my research result so far: `-Relational Database...
I am working on a mobile/web app like Instagram. Considering that, I want to know for which parts of the app it's better to use relational databases, and for which parts non-relational ones.
I have chosen the MySQL and Cassandra databases, and this is my research result so far:
Relational databases:
- For services where we need as much consistency as possible, like payment, ordering, and authentication services.
Non-relational databases:
- For services where we need to summarize a lot of different data in a small table instead of a lot of columns with NULL values, like a user service where we have a lot of personal settings.
- For when we need horizontal scalability and want a system more distributed over different datacenters/clouds.
- For faster read/write-heavy systems.
But I am still wondering:
1. Which database is more appropriate for uploading files (images, videos, documents)?
2. Which database is better suited for posts/comments/likes, lists of friends, and other user-related things that are also related to other users? (***I mean a post or a comment belongs to one user, but all the other users can also see it, and it may affect them.***)
best_of_man
(117 rep)
Dec 25, 2022, 06:00 PM
• Last activity: Jul 12, 2025, 11:05 AM
0
votes
2
answers
211
views
1 to 0 or 1 relationship when both entities are optional
In business process which I'm trying to model there are 2 entities which are in 1 to 1 or 0 relationship from both sides, for example: entity_1 might or might not produce one and only one entity_2 record, and entity_2 can be produced without entity_1 participates in the process. So my question is ho...
In the business process I'm trying to model there are two entities that are in a 1 to 1-or-0 relationship from both sides. For example:
entity_1 might or might not produce one and only one entity_2 record, and entity_2 can be produced without entity_1 participating in the process.
So my question is: how should that case be modeled? All I have found so far is the rule that the optional entity is the child in a 1 to 0-or-1 relationship. But what do I do if both are optional?
My current assumption is that I can make either of them the parent, whichever I think is more suitable.
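One common way to model this (sketched with the names from the question; which side carries the key is exactly the judgment call described above) is a nullable UNIQUE foreign key: NULL expresses "no partner", and the UNIQUE constraint caps it at one.

```sql
CREATE TABLE entity_1 (
    entity_1_id INT PRIMARY KEY
);

-- entity_2 may stand alone (entity_1_id IS NULL) or be linked to
-- exactly one entity_1; UNIQUE prevents two entity_2 rows from
-- sharing the same entity_1, giving 0..1 on both ends.
CREATE TABLE entity_2 (
    entity_2_id INT PRIMARY KEY,
    entity_1_id INT NULL UNIQUE REFERENCES entity_1 (entity_1_id)
);
```

An alternative, if neither side should carry the link, is a separate one-row-per-pair link table with UNIQUE constraints on both columns.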
bst19
(1 rep)
Jan 24, 2024, 06:52 PM
• Last activity: Jul 2, 2025, 08:46 AM
0
votes
0
answers
43
views
Is there a database design pattern that calls for an auxiliary table to store sequence numbers, and how else can it be applied?
I have table: blocks: id integer not null primary key, ref_date date not null, seq_no integer not null, lk_document integer not null there can be multiple records for each lk_document and all those records should have seq_no assigned according to `ref_date asc id asc` order in the dense manner, i.e....
I have a table:
blocks:
id integer not null primary key,
ref_date date not null,
seq_no integer not null,
lk_document integer not null
There can be multiple records for each lk_document, and all those records should have seq_no assigned according to
ref_date asc, id asc
order in a dense manner, i.e. 1, 2, 3... without gaps (1, 2, 4, 6 is not allowed).
Is there a design pattern for the DB schema that organizes how seq_no is updated upon changes to ref_date (including upon insert or delete of records, which of course have a ref_date and can leave gaps open)?
I feel that the robust solution would be to move seq_no into a separate table:
blocks_ext
id integer not null primary key (possibly referencing blocks.id)
seq_no integer not null
While I see a lot of advantages and the solution can be pretty robust (maintained by after update/insert/delete triggers on the blocks table), I am not sure whether this is a design pattern, and whether the use case is serious enough to justify creating a separate table. Maybe it is still better to keep seq_no in the blocks table and come up with some ingenuity for maintaining it with guarded before update/insert/delete triggers.
On the one hand, the ordered list of seq_no is an aggregate over the set of blocks records selected by lk_document, and such an aggregate does merit its own table, or a field in the documents table (which could keep the ordered list of blocks.id entries; only a finite list is allowed).
On the other hand, I am still hesitating: is this a known, recognized pattern and the best approach? And is it used in other cases as well?
I am using the Firebird SQL database (very old version 2.1, to be specific), but I guess my question is general enough for the design in any SQL database. Though, more advanced databases can have triggers on 'after transaction complete', and such triggers can write back (is this so indeed?) seq_no into the same blocks table.
**Note added:** And there is still another aspect of this design using an auxiliary table. My original table can have a record-version field blocks.rv which increases (from some sequence) with every update of a record (and also with every insert). Now there can be situations where blocks_ext.seq_no is updated due to changes in other records. So I should add rv to blocks_ext as well, and I should always select the updates on blocks by:
select b.id
from blocks b
left join blocks_ext e on (b.id=e.id)
where maxvalue(b.rv, e.rv)>:rv_of_last_update
But I guess nothing can be done here, and this type of select will require a complete scan of blocks/blocks_ext; there is no index on such a condition.
**Note added 2:** I guess this solution with the auxiliary table is essentially a materialization of an aggregation (the ordering), and materialization is OK if done for performance purposes. Regarding rv, it may be most efficient to keep blocks_ext.rv only and update it both when the sequence is updated (blocks_ext.seq_no changes) and from the blocks rv insert/update triggers.
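The dense renumbering described above can be sketched as a single set-based statement, run from a trigger or a batch job after changes. This is only an illustration using the column names from the question; it assumes an engine that allows a correlated subquery against the table being updated (Firebird does, some engines such as MySQL do not):

```sql
-- Recompute seq_no densely per document: a row's seq_no is 1 plus the
-- number of sibling rows in the same document that sort before it on
-- (ref_date, id). Optionally restrict to one document with
-- "AND b.lk_document = :doc".
UPDATE blocks b
SET seq_no = 1 + (SELECT COUNT(*)
                  FROM blocks b2
                  WHERE b2.lk_document = b.lk_document
                    AND (b2.ref_date < b.ref_date
                         OR (b2.ref_date = b.ref_date AND b2.id < b.id)))
```

Whether this lives in blocks or in a blocks_ext side table, the statement is the same; the side table just keeps the churn away from the base rows.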
TomR
(101 rep)
Mar 10, 2025, 08:49 AM
• Last activity: Mar 10, 2025, 09:54 AM
0
votes
0
answers
39
views
What would be the name of this pattern?
I have a match entity that represent a tennis match, said match can be between two players or two teams. You cant have player vs team, and a player can be in many teams. I've decided to implement this as follows: CREATE TABLE Player ( player_id INT PRIMARY KEY ); CREATE TABLE Team ( team_id INT PRIM...
I have a match entity that represents a tennis match; said match can be between two players or two teams. You can't have player vs team, and a player can be on many teams. I've decided to implement this as follows:
CREATE TABLE Player (
player_id INT PRIMARY KEY
);
CREATE TABLE Team (
team_id INT PRIMARY KEY
);
CREATE TABLE player_team (
fk_team_id INT NOT NULL,
fk_player_id INT NOT NULL,
PRIMARY KEY (fk_team_id, fk_player_id),
FOREIGN KEY (fk_team_id) REFERENCES Team(team_id),
FOREIGN KEY (fk_player_id) REFERENCES Player(player_id)
);
CREATE TABLE Match (
match_id INT PRIMARY KEY
);
CREATE TABLE Participant (
participant_id INT PRIMARY KEY,
fk_team INT NULL,
fk_player INT NULL,
fk_match INT NOT NULL,
FOREIGN KEY (fk_match) REFERENCES Match(match_id),
FOREIGN KEY (fk_team) REFERENCES Team(team_id),
FOREIGN KEY (fk_player) REFERENCES Player(player_id),
CHECK (
(fk_team IS NOT NULL AND fk_player IS NULL) OR
(fk_team IS NULL AND fk_player IS NOT NULL)
),
UNIQUE (fk_match, fk_player),
UNIQUE (fk_match, fk_team)
);
What is this pattern called? Per my Google research it seems like a weird version of polymorphic association, and the Participant table is acting as a kind of junction table of the 3 entities. I'm also wondering how I would represent this in an ER diagram using Chen's notation.
Fullaccess
(1 rep)
Sep 7, 2024, 12:26 PM
2
votes
3
answers
937
views
how best to have a record versioning or tracking in relational database?
The scenario involves maintaining versions of a 'Product' object (e.g., Product table) and creating new records when the product changes to the latest version (e.g. v2 from v1). It is crucial not to update existing product records, as other objects might reference them. The goal is to update referri...
The scenario involves maintaining versions of a 'Product' object (e.g., a Product table) and creating a new record when the product changes to a new version (e.g. v2 from v1). It is crucial not to update existing product records, as other objects might reference them. The goal is to update referring objects (those needing v2) while preserving the old versions (v1) for objects still using them. At the same time, we need to keep track of how the product has evolved, i.e. v1 -> v2 -> v3.
I've considered two potential approaches:
1. Versioning: assigning version labels (e.g., V1, V2) to the object to track and reference old versions if necessary.
2. Linked list (or doubly linked list): each new object includes an 'old_object_ID' pointing to the previous object's ID, enabling tracking of all past objects.
Of course, there are pros and cons to both approaches, like data redundancy, complex queries, and ease of navigation.
At the same time, we need to consider query performance, concurrency, referential integrity, auditing, and history.
**Are there alternative or more effective strategies/patterns for achieving this versioning and referencing mechanism in relational databases?**
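The two approaches are not mutually exclusive; a common shape combines a stable logical id, a version number, and a previous-version pointer in one table. This is only a sketch with assumed names (product_version, is_current, etc.), not a prescribed schema; referencing tables point at product_version_id, so rows that reference v1 keep doing so when v2 appears:

```sql
-- One row per version; new versions are inserts, never updates.
CREATE TABLE product_version (
    product_version_id  INT PRIMARY KEY,
    product_id          INT NOT NULL,   -- stable logical identity
    version_no          INT NOT NULL,   -- 1, 2, 3, ... for history v1->v2->v3
    previous_version_id INT NULL REFERENCES product_version (product_version_id),
    is_current          BOOLEAN NOT NULL DEFAULT FALSE,
    UNIQUE (product_id, version_no)
);
```

Referring objects that should "follow" the product store product_id and join on is_current; objects pinned to a version store product_version_id.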
Enigma
(121 rep)
Mar 4, 2024, 09:05 AM
• Last activity: Jun 23, 2024, 07:25 AM
1
votes
0
answers
23
views
procedure that takes the distinct tables with incremented (int) key and sub the int back for the original value
I'm working on developing a set of stored procedures in PostgreSQL to abstract common operations I'm seeking advice on implementing a pattern for substituting values for `INTS` in child tables. Please let me know if you have any questions. Below is the main routine I've designed so far. Pseudocode i...
I'm working on developing a set of stored procedures in PostgreSQL to abstract common operations. I'm seeking advice on implementing a pattern for substituting values for INTs in child tables.
Please let me know if you have any questions.
Below is the main routine I've designed so far. Pseudocode is fine.
CREATE OR REPLACE PROCEDURE your_main_routine(
p_full_column_name VARCHAR,
p_drop_original_column BOOLEAN DEFAULT FALSE,
p_set_pk BOOLEAN DEFAULT FALSE
) AS
$$
DECLARE
-- Declare variables
p_db_name VARCHAR;
p_schema_name VARCHAR;
p_table_name VARCHAR;
p_column_name VARCHAR;
p_temp_table_buffer TEXT;
run_loop BOOLEAN := TRUE; -- Initialize run_loop variable
BEGIN
-- logging to trace script execution
RAISE NOTICE 'Starting script execution with parameters: p_full_column_name: %, p_drop_original_column: %, p_set_pk: %',
p_full_column_name, p_drop_original_column, p_set_pk;
-- Parse the database, schema, table, and column names from the input string
RAISE NOTICE 'Parsing column names from input string with parameter: p_full_column_name: %', p_full_column_name;
p_db_name := split_part(p_full_column_name, '.', 1);
RAISE NOTICE 'Parsed database name: %', p_db_name;
p_schema_name := split_part(p_full_column_name, '.', 2);
RAISE NOTICE 'Parsed schema name: %', p_schema_name;
p_table_name := split_part(p_full_column_name, '.', 3);
RAISE NOTICE 'Parsed table name: %', p_table_name;
p_column_name := split_part(p_full_column_name, '.', 4);
RAISE NOTICE 'Parsed column name: %', p_column_name;
IF p_set_pk THEN
-- Call the function to create the table with PK
p_temp_table_buffer := 'temp_buffer';
-- Adjust this accordingly
-- Loop until run_loop is set to FALSE
WHILE run_loop
LOOP
-- Call the create_distinct_table procedure with your parameters
CALL create_distinct_table(p_table_name, p_column_name, p_temp_table_buffer, run_loop);
-- Pause execution for 1 second
PERFORM pg_sleep(1);
-- Set run_loop to FALSE once the procedure is done
run_loop := FALSE;
END LOOP;
run_loop := true;
-- Loop until run_loop is set to FALSE
WHILE run_loop
LOOP
-- Call the create_distinct_table procedure with your parameters
CALL add_int_column_to_table(p_temp_table_buffer, 'id', run_loop);
-- Pause execution for 1 second
PERFORM pg_sleep(1);
run_loop := FALSE;
-- Set run_loop to FALSE once the procedure is done
END LOOP;
run_loop := true;
-- Loop until run_loop is set to FALSE
WHILE run_loop
LOOP
-- Call the create_distinct_table procedure with your parameters
CALL add_sequence_to_column(p_temp_table_buffer, 'id', run_loop);
-- Pause execution for 1 seconds
PERFORM pg_sleep(1);
run_loop := FALSE;
-- Set run_loop to FALSE once the procedure is done
END LOOP;
END IF;
END;
$$ LANGUAGE plpgsql;
-- Call your main routine
CALL your_main_routine('postgres.some_schema.some_table.some_customer_id', FALSE, TRUE);
I am thinking along these lines (untested):
CREATE OR REPLACE PROCEDURE add_foreign_key(
p_constraint_name VARCHAR,
p_table_name VARCHAR,
p_column_name VARCHAR,
p_referenced_table VARCHAR,
p_referenced_column VARCHAR
) AS
$$
BEGIN
-- Generate the SQL statement to add the foreign key constraint
EXECUTE format('ALTER TABLE %I ADD CONSTRAINT %I FOREIGN KEY (%I) REFERENCES %I(%I)',
p_table_name, p_constraint_name, p_column_name,
p_referenced_table, p_referenced_column);
END;
$$ LANGUAGE plpgsql;
and this
CREATE OR REPLACE PROCEDURE add_primary_key(p_table_name VARCHAR, p_column_name VARCHAR) AS
$$
BEGIN
-- Generate the SQL statement to add the primary key constraint
EXECUTE format('ALTER TABLE %I ADD PRIMARY KEY (%I)', p_table_name, p_column_name);
END;
$$ LANGUAGE plpgsql;
I am just trying to write the procedure that takes the distinct tables with the incremented (int) key and substitutes the int back for the original value, then hook the above procedures in.
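The write-back step itself can be sketched as an ALTER plus a single UPDATE ... FROM. The names below are assumptions taken from the example call, and the lookup is assumed to be what create_distinct_table produced: the distinct original values plus the generated id column.

```sql
-- Hypothetical sketch: add the surrogate column, then copy each row's
-- generated id over from the distinct-values lookup table.
ALTER TABLE some_schema.some_table
    ADD COLUMN some_customer_id_int INT;

UPDATE some_schema.some_table AS s
SET    some_customer_id_int = t.id
FROM   temp_buffer AS t
WHERE  t.some_customer_id = s.some_customer_id;
```

After the backfill, the add_foreign_key procedure above can point some_customer_id_int at temp_buffer(id), and the original column can be dropped if p_drop_original_column is set.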
RandomNumberFun
(113 rep)
Mar 13, 2024, 05:02 AM
• Last activity: Mar 13, 2024, 05:44 AM
0
votes
1
answers
111
views
3NF Decomposition and NP-Completeness
I have a question about the computational complexity of the algorithm that allows us to decompose a schema into 3NF. My book says that the decisional problem of telling if a schema complies with the 3NF is NP-Complete. An NP problem is a decisional problem that can be solved in a polynomial time by...
I have a question about the computational complexity of the algorithm that allows us to decompose a schema into 3NF.
My book says that the decision problem of telling whether a schema complies with 3NF is NP-complete.
An NP problem is a decision problem that can be solved in polynomial time by a non-deterministic algorithm and verified in polynomial time by a deterministic algorithm.
An NP-complete problem is a problem in NP that is at least as hard to solve as the SAT problem.
So, if I've understood correctly: if somebody gives me a schema and tells me whether or not it is in 3NF, I can verify that claim in polynomial time with a deterministic algorithm. But if somebody gives me the actual search problem instead of the decision one, by asking "Find a 3NF decomposition of my schema", I wouldn't be able to find a polynomial-time solution with a deterministic algorithm.
I would be glad if somebody could tell me if there are any mistakes in my understanding.
Thank you in advance!
lorenzo_moni
(1 rep)
Jun 30, 2023, 09:03 PM
• Last activity: Feb 11, 2024, 09:57 PM
0
votes
1
answers
44
views
How to model edited linked records of a multi-schema entity while preserving the original linked records?
If you have a base class `parent`. `parent` can have multiple `child` records, a one-to-many relationship. ``` parent schema: +-----------+ | parent_id | +-----------+ child schema: +-----------+----------+ | parent_id | child_id | +-----------+----------+ ``` Then `parent` is used in another entity...
If you have a base class parent, parent can have multiple child records - a one-to-many relationship.
parent schema:
+-----------+
| parent_id |
+-----------+
child schema:
+-----------+----------+
| parent_id | child_id |
+-----------+----------+
Then parent is used in another entity, table x, that can have many parents, again in a one-to-many relationship:
x_parent schema:
+------+-----------+
| x_id | parent_id |
+------+-----------+
Users can edit entities from table x to add and remove parents. While doing this, they can also edit the child records of the parent; however, these changes need to be saved separately (as edits of x), not modifying the original parent or its children.
How should this be modeled to keep the data normalized, and also to know which records to use - the original child records, or some edited ones? This is the trickiest part to me, as the absence of edited records could mean the user removed all the original child records, or has simply not edited the child records of parent.
E.g. table x_child could look like this:
+------+-----------+--------------------+
| x_id | parent_id | child_edited_value |
+------+-----------+--------------------+
Is it a good idea to stick an edited column on the x_parent table, indicating to query for x_child rows instead of simply left joining x_parent with its child records?
Henry
(213 rep)
Jan 9, 2024, 03:49 PM
• Last activity: Jan 15, 2024, 02:21 AM
1
votes
3
answers
53
views
Should a database reflect a user's actions or a user's state?
We have a Postgres DB in which we store a "settings" JSON field on Users. This JSON field indicates (among other things) whether the user is subscribed to a digest we send out. There's some model validation that ensures that values in this key / value pair operate like an enum. We're going to try an...
We have a Postgres DB in which we store a "settings" JSON field on Users. This JSON field indicates (among other things) whether the user is subscribed to a digest we send out. There's some model validation that ensures that values in this key / value pair operate like an enum.
We're going to try an experiment in which we auto-subscribe users who visit the site (we know their email addresses because we're a small network).
We are debating how the user's subscription state should be modeled in the DB. One team member thinks the DB should reflect the user's *actions*, so the "subscribed" value in the JSON should remain NULL until the user explicitly changes it, and we should send emails to users who have "subscribed" == NULL and who have visited the site. Another team member thinks the DB should reflect the user's subscription *state*, so the "subscribed" value in the JSON should be set to true once the user visits the site for the first time.
Which philosophy do others in this forum think is more common and/or intuitive?
duhaime
(121 rep)
Dec 13, 2023, 06:52 PM
• Last activity: Dec 14, 2023, 07:12 AM
2
votes
2
answers
812
views
When should I reference the User record or User Profile record
I know it is a common paradigm to separate tables when building a User profile. For example, having a `user` table, and another table called `user_profile` with a foreign key to the `user` table. My understanding is that the `user` table is better for sensitive user account data or authentication da...
I know it is a common paradigm to separate tables when building a user profile. For example, having a user table, and another table called user_profile with a foreign key to the user table.
My understanding is that the user table is better for sensitive user account data or authentication data, such as email, password, user type, etc. The profile table could hold additional data about that user, like first name, last name, date of birth, and more.
But what about other data that can be related to the user and can also be modeled with a table? Some examples could be payments and transactions. My first guess is to link those to the profile table; that way I don't have to make any joins just to have transactions and the name of the user together (which the user table does not have). Linking to the profile, I have the user info and the transactions. But then, when is it useful to link to the user table? What are the common paradigms?
Thanks in advance!
chris
(121 rep)
May 15, 2020, 09:21 PM
• Last activity: Dec 4, 2023, 09:51 AM
1
votes
0
answers
38
views
Data Modeling Problem - Schema to store approval process
I need to design a schema to keep track of data changes (I do not need Change Tracking or Change Data Capture or Temporal Tables) to columns in a few tables. When the data is created, modified, or deleted, and when they meet a certain condition, I will have to initiate a workflow to require approval...
I need to design a schema to keep track of data changes (I do not need Change Tracking or Change Data Capture or Temporal Tables) to columns in a few tables. When the data is created, modified, or deleted, and when they meet a certain condition, I will have to initiate a workflow to require approval of those changes from groups of users. Are there any known schema design patterns for this purpose?
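One recurring shape for this is a change-request header plus one row per required approver. The sketch below is only illustrative (every name and type here is an assumption, not a known pattern name): the trigger or application code that detects a qualifying change inserts a request, and the workflow completes when all approval rows carry a decision.

```sql
-- One header row per pending change, capturing what changed.
CREATE TABLE change_request (
    request_id  INT PRIMARY KEY,
    table_name  VARCHAR(128)  NOT NULL,
    row_key     VARCHAR(128)  NOT NULL,
    change_type VARCHAR(10)   NOT NULL,  -- 'INSERT' / 'UPDATE' / 'DELETE'
    old_value   VARCHAR(4000) NULL,
    new_value   VARCHAR(4000) NULL,
    status      VARCHAR(10)   NOT NULL DEFAULT 'PENDING'
);

-- One row per approver who must sign off; decision stays NULL until acted on.
CREATE TABLE change_approval (
    request_id  INT NOT NULL REFERENCES change_request (request_id),
    approver_id INT NOT NULL,
    decision    VARCHAR(10) NULL,        -- 'APPROVED' / 'REJECTED'
    decided_at  DATETIME NULL,
    PRIMARY KEY (request_id, approver_id)
);
```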
DBK
(378 rep)
Oct 30, 2023, 07:33 PM
1
votes
0
answers
27
views
How to optimize a system with balance and in-out TXs to avoid re-calculating user's balance?
I work on a service wherein a user will have balance, will be able to deposit and withdraw money, perform actions that generate and burn it. As the amount of the transactions grow, the amount of them involved in calculating user's balance will too. I want to optimaze it. I don't want to cache the va...
I work on a service wherein a user will have a balance, will be able to deposit and withdraw money, and can perform actions that generate and burn it. As the number of transactions grows, the number of them involved in calculating a user's balance will too.
I want to optimize it. I don't want to cache the value; rather, I want to be able to prune the system, and do so properly.
Let's say, over the last 6 months a user made 100 TXs and activities, plus and minus, which in sum have resulted in $50 on the user's balance. Since **it's already in the past**, why would I calculate it every single time **ever again?**
**Questions:**
How will I properly prune and optimize the system such that the next time I need to calculate a balance, I'll start off with that $50? Meaning, I'd prune it every 2-3 or 6 months, not every day or week.
How will I change the DB schema of the finance_transactions table? Should I add an is_archived field and, once I've calculated a new balance as a new starting point, archive all the existing TXs and create a new TX manually with the amount of the user's balance?
nicholas.fowler
(11 rep)
Oct 14, 2023, 05:45 PM
• Last activity: Oct 14, 2023, 05:46 PM
4
votes
3
answers
460
views
For what type of data it's better to use relational, and for what type of data, non-relational databases?
I am trying to write my first big backend project. This is a mobile/web application like Instagram but for different purposes. As I searched through the internet I found that Instagram uses PostgreSQL and Cassandra as it's main databases. But I don't know for what purpose/type/part of data it uses w...
I am trying to write my first big backend project. This is a mobile/web application like Instagram but for different purposes. As I searched through the internet, I found that Instagram uses PostgreSQL and Cassandra as its main databases. But I don't know for what purpose/type/part of data it uses which database.
Does anyone know more about the databases Instagram uses? Or, in general, how do I decide for which services or which types of data/application it's better to use SQL or NoSQL databases?
user20551429
(69 rep)
Dec 1, 2022, 05:16 PM
• Last activity: Dec 5, 2022, 06:59 PM
1
votes
1
answers
802
views
Adding soft delete to a database after having used hard delete
**Introduction** My app collects data from a centralized source where many different users can submit data about their organisation and their staff. Previously we used to just hard delete a users data when they were no longer relevant from the source of truth because it used to be reliable. But a ch...
**Introduction**
My app collects data from a centralized source where many different users can submit data about their organisation and their staff.
Previously we used to just hard delete a user's data when they were no longer relevant, because the source of truth used to be reliable.
But a change to some software the clients use messes with everything. They now DELETE all their data multiple times per month when they submit data. This is by mistake and due to a terrible design.
Which means they lose the data for the users in our system and have to re-enter parts of it.
The makers of the software they use are stubborn and won't change the behaviour. We have tried educating the users about how to use it, but they don't learn. So now the last option is to soft delete the data for a certain time period.
Having looked at multiple Stack Overflow posts and blogs around the web, I don't really fancy any of the options, i.e. adding a column to the tables that need to be soft deleted.
I started looking because that was my first instinct as well, but I don't really like it or its implications.
I was wondering if you could give me some feedback on a different idea. I have no experience with maintaining soft deletion, and I don't know if my idea is terrible.
**Diagram and relations**
There is a user; their unique identifier is the same across multiple orgs.
Per user affiliation with an org, they have some user information like name, title, etc.
In our system they have one status row, because it is the same in our app no matter which org they choose to connect as.
So if I follow the conventional way of adding columns for soft deleting, I would have to add one to each of the unique tables that contain user data, because their affiliation to a certain org might be deleted, but as a user they still live on in our system from somewhere else.
But it seems like a hassle, and a lot of change in the nitty-gritty of my code, to account for all these extra columns.
In my mind it would be simpler if I added a separate table containing the following:
- UniqueUserIdentifier
- UniqueOrgIdentifier
- SoftDeleteDate
And then whenever my app asks for data, the API checks the new table: "is this person soft deleted from this org?"
If true, it just blocks the request; the data stays hidden until it is restored if needed, or it remains deleted until it is hard deleted within x hours of the soft deletion happening.
Instead of having to change many queries and logic all over the place.
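The side table described above might look like this (types and casing are assumptions; adjust to the actual key types in the real schema):

```sql
-- One row per soft-deleted (user, org) affiliation; absence of a row
-- means the affiliation is active.
CREATE TABLE SoftDeletedAffiliations (
    UniqueUserIdentifier VARCHAR(64) NOT NULL,
    UniqueOrgIdentifier  VARCHAR(64) NOT NULL,
    SoftDeleteDate       DATETIME    NOT NULL,
    PRIMARY KEY (UniqueUserIdentifier, UniqueOrgIdentifier)
);

-- API-side gate before serving org-scoped data:
-- SELECT 1 FROM SoftDeletedAffiliations
-- WHERE UniqueUserIdentifier = @user AND UniqueOrgIdentifier = @org;
```

Restoring is then a single DELETE from this table, and the periodic hard delete is driven off SoftDeleteDate.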
**Additional information**
The API uses EFCore as an ORM to connect to the database, in case that would help with any other smart fixes regarding its feature set.
I have thought about creating custom savechanges logic, but couldn't come up with a good idea other than again adding a column to all the tables.
Please let me know if you need any more information.
**Update**
J.D. Told me about row-level security which made me look around. It seems very useful, and it gave me some more insight into what I could search for.
So I came across global query filters for EFCore which seems promising. It allows the context to filter on all queries and when you actually need to ignore this global filter, you can simply do it on a query by query basis.
And it allows for dependency injection if you need to use something for the global filter that is based on the user that is connected.
I created an answer based on this new information
It also turns out that what I really wanted was to deactivate the row until eventual activation or hard delete instead of soft delete. I didn't know the correct way to express myself.

Mikkel
(21 rep)
Feb 3, 2022, 01:49 PM
• Last activity: Nov 24, 2022, 04:53 AM
5
votes
1
answers
299
views
What's the difference between these three versions of TSQL snippet?
# Version 1 ``` DECLARE @key INTEGER = 33, @val INTEGER = 44; BEGIN TRANSACTION; INSERT dbo.t([key], val) SELECT @key, @val WHERE NOT EXISTS ( SELECT 1 FROM dbo.t WITH (UPDLOCK, SERIALIZABLE) WHERE [key] = @key ); IF @@ROWCOUNT = 0 BEGIN UPDATE dbo.t SET val = @val WHERE [key] = @key; END COMMIT TRA...
# Version 1
DECLARE @key INTEGER = 33, @val INTEGER = 44;
BEGIN TRANSACTION;
INSERT dbo.t([key], val)
SELECT @key, @val
WHERE NOT EXISTS
(
SELECT 1 FROM dbo.t WITH (UPDLOCK, SERIALIZABLE)
WHERE [key] = @key
);
IF @@ROWCOUNT = 0
BEGIN
UPDATE dbo.t SET val = @val WHERE [key] = @key;
END
COMMIT TRANSACTION;
# Version 2
DECLARE @key INTEGER = 33, @val INTEGER = 44;
BEGIN TRANSACTION;
INSERT dbo.t WITH (UPDLOCK, SERIALIZABLE) ([key], val)
SELECT @key, @val
WHERE NOT EXISTS
(
SELECT 1 FROM dbo.t
WHERE [key] = @key
);
IF @@ROWCOUNT = 0
BEGIN
UPDATE dbo.t SET val = @val WHERE [key] = @key;
END
COMMIT TRANSACTION;
# Version 3
DECLARE @key INTEGER = 33, @val INTEGER = 44;
BEGIN TRANSACTION;
INSERT dbo.t WITH (UPDLOCK, SERIALIZABLE) ([key], val)
SELECT @key, @val
WHERE NOT EXISTS
(
SELECT 1 FROM dbo.t WITH (UPDLOCK, SERIALIZABLE)
WHERE [key] = @key
);
IF @@ROWCOUNT = 0
BEGIN
UPDATE dbo.t SET val = @val WHERE [key] = @key;
END
COMMIT TRANSACTION;
I'm confused about the position of the hint. If the table in the INSERT portion and in the subquery is the same, does it still make a difference where the hint is written?
Fajela Tajkiya
(1239 rep)
Sep 3, 2022, 08:36 AM
• Last activity: Sep 3, 2022, 11:11 AM
3
votes
1
answers
139
views
Storing column for data that doesn't exist for majority of rows
I'm using mariadb. I have a single table that keeps track of `video_count` and `image_count` for a few different sites. I.e. type | enum('SITE1','SITE2','SITE3') video_count | int(11) image_count | int(11) `video_count` will exist for ALL sites but `image_count` only exists for `SITE3`. Also, SITE3...
I'm using MariaDB.
I have a single table that keeps track of
video_count
and image_count
for a few different sites. I.e.
type | enum('SITE1','SITE2','SITE3')
video_count | int(11)
image_count | int(11)
video_count
will exist for ALL sites but image_count
only exists for SITE3
. Also, SITE3 will be a minority of rows.
So does it make sense to arrange the data this way in one table?
I'd like to query ALL rows, but is it a problem that it's redundant and expensive, knowing that 90% of my rows will have 0 for `image_count`?
It'd be perfect if I could do something like `SELECT * FROM table BUT DON'T GET image_count WHERE type != 'SITE3'`.
Is that even a possible query in SQL? Or does it make more sense to create separate tables for SITE1...SITEN? Ty!
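For reference, MariaDB has no `SELECT * except one column` syntax; the usual approach is simply to name the columns you want for each case. A sketch, assuming a hypothetical table name `site_counts`:

```sql
-- site_counts is an illustrative name; columns must be listed explicitly,
-- since MariaDB cannot exclude a column from SELECT *.
SELECT type, video_count
FROM site_counts
WHERE type <> 'SITE3';

SELECT type, video_count, image_count
FROM site_counts
WHERE type = 'SITE3';
```

Making `image_count` NULLable (rather than 0) for non-SITE3 rows is another common way to record that the value does not apply.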
user2402616
(211 rep)
May 27, 2022, 07:16 PM
• Last activity: May 27, 2022, 10:48 PM
0
votes
1
answers
290
views
Is it ok to have multiple files table for a different entity?
I have two entities, A and B.
A and B can have multiple files associated with them; each of those files can be a `document` or an `image`.
Does it make sense to create `a_files` and `b_files` tables with the following structure?

    id | a_id | url | type(enum)
    id | b_id | url | type(enum)

I thought about creating a generic `files` table too and then referencing it through `a/b_files`, but that seems to me like overkill in terms of joins.
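A sketch of the two per-entity tables described above (MySQL/MariaDB-flavoured DDL; all names and column sizes are illustrative):

```sql
-- Illustrative DDL for the two per-entity file tables from the question.
CREATE TABLE a_files (
    id    INT AUTO_INCREMENT PRIMARY KEY,
    a_id  INT NOT NULL,
    url   VARCHAR(2048) NOT NULL,
    type  ENUM('document', 'image') NOT NULL,
    FOREIGN KEY (a_id) REFERENCES a (id)
);

CREATE TABLE b_files (
    id    INT AUTO_INCREMENT PRIMARY KEY,
    b_id  INT NOT NULL,
    url   VARCHAR(2048) NOT NULL,
    type  ENUM('document', 'image') NOT NULL,
    FOREIGN KEY (b_id) REFERENCES b (id)
);
```

The two tables are structurally identical except for the foreign-key column, which is what motivates the question of whether a single generic `files` table would be cleaner.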
Federico Peralta
(103 rep)
Jan 24, 2022, 09:14 PM
• Last activity: Jan 25, 2022, 12:35 PM
12
votes
3
answers
13721
views
How to avoid a cyclic dependency (circular reference) between 3 tables?
I have 3 tables:
- People
- Post
- Likes
When I design the ER model it has a cyclic dependency.
The logic is:
- 1 person can have many posts.
- 1 post can have many likes.
- 1 person can like many posts (a post's creator cannot like their own post).

How can I remove this kind of cyclic design? Or is my db design wrong?
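The three rules above can be sketched with Likes as a plain junction (many-to-many) table between People and Post; a minimal sketch with illustrative names:

```sql
CREATE TABLE People (
    person_id INT PRIMARY KEY,
    name      VARCHAR(100) NOT NULL
);

CREATE TABLE Post (
    post_id   INT PRIMARY KEY,
    author_id INT NOT NULL REFERENCES People (person_id),
    content   TEXT
);

-- Likes references both parents; the composite key prevents duplicate likes.
CREATE TABLE Likes (
    person_id INT NOT NULL REFERENCES People (person_id),
    post_id   INT NOT NULL REFERENCES Post (post_id),
    PRIMARY KEY (person_id, post_id)
);
```

Both foreign keys in Likes point outward to the two parent tables, so there is no cycle of references; the "creator cannot like their own post" rule is typically enforced in application logic or a trigger rather than in the schema.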
Ragu
(221 rep)
Dec 17, 2015, 05:37 PM
• Last activity: Nov 17, 2021, 02:38 AM
0
votes
1
answers
157
views
What database & design would you use for legos
I'm trying to conceptualize an interesting problem I am designing for. Using Legos (the plastic construction toys) as an analogy seems to work well. If this is a fairly recognizable problem, I'd appreciate any reference to related information.
Given the scenario details below, what database would you choose (RDB/SQL, NoSQL, graph, Lucene, something else?), and how might you go about modeling the design? Given:
1. ***"lego materials"*** or simply all of the individual type of lego pieces that may exist.
- Each individual piece has metadata / various characteristics such as size, color, bottom/top knobs & gaps, constraints, etc.
2. ***"builders catalog"*** of all of the possible number of building blocks/modules you could create using multiple pieces.
- metadata here might include descriptions, the manufacturers of the module, all of the pieces used to create the module, intended purpose, what knobs/gaps are able to be built on etc.
3. ***"final creations"***
- essentially all the various lego sets sold as products. The data will include generic terms describing the build, like "castle", "city", "airplane", etc. Then, the instructions which detail every module required to be built to build the final creation.
The solution needs to address scale and the ability to search across all of our objects and their relations to each other. I feel graph dbs may not be able to scale to this intended purpose. Further, the data will grow with every addition to the objects above.
Questions we'd query:
> (lego materials) show me all the pieces I have... or... Find a piece structured like this one (using meta data) but also
> all the places it has been used to build a module in our builders catalog
> (below), as well as final creations... or... find me all the pieces I can place on top of this piece
> (builders catalog) Similar composability type questions above as it relates to modules connecting to one another, and final creations. Also... find other modules using the similar lego pieces as this one
> (final creations) What other creations are a part of this set (like a Batman series)... or what other creations use similar modules, or similar pieces as this creation.
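If a relational model were chosen, the three levels described above (pieces, modules built from pieces, creations built from modules) might be sketched like this; all names are illustrative, and the metadata columns are stand-ins for the richer characteristics listed in the question:

```sql
CREATE TABLE pieces (
    piece_id INT PRIMARY KEY,
    size     VARCHAR(20),
    color    VARCHAR(20)
);

CREATE TABLE modules (
    module_id   INT PRIMARY KEY,
    description TEXT
);

-- Which pieces, and how many of each, make up a module.
CREATE TABLE module_pieces (
    module_id INT NOT NULL REFERENCES modules (module_id),
    piece_id  INT NOT NULL REFERENCES pieces (piece_id),
    quantity  INT NOT NULL,
    PRIMARY KEY (module_id, piece_id)
);

CREATE TABLE creations (
    creation_id INT PRIMARY KEY,
    name        VARCHAR(100)
);

-- Which modules, and how many of each, make up a final creation.
CREATE TABLE creation_modules (
    creation_id INT NOT NULL REFERENCES creations (creation_id),
    module_id   INT NOT NULL REFERENCES modules (module_id),
    quantity    INT NOT NULL,
    PRIMARY KEY (creation_id, module_id)
);
```

The composability queries in the question ("where has this piece been used?") then become joins through the two junction tables; whether that scales better or worse than a graph database is exactly the trade-off being asked about.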
Michael Ramos
(103 rep)
Aug 21, 2021, 02:16 AM
• Last activity: Aug 21, 2021, 01:45 PM
Showing page 1 of 20 total questions