
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
150 views
MySQL Sharding: Application Layer vs MySQL Layer
I am going to create shards of a MySQL table and I am stuck on a dilemma: whether to go for application-layer sharding or let MySQL handle it. I mean, whether I should figure out which shard to query at the application layer, or pass the query on to the MySQL driver and let it decide. I read about some pros and cons of both here but I am unable to come up with a conclusion. My personal opinion is to go for the MySQL driver handling it. Open for discussion on this. Please share your views/experiences.
Sankalp Mishra (101 rep)
Nov 5, 2019, 03:20 AM • Last activity: Jul 16, 2025, 11:03 AM
0 votes
1 answer
227 views
MySQL: Is the default value checked by the `CHECK` constraint?
Say I have the following schema: would the default value `https://sample.com` be checked by the `CHECK (Website LIKE "https://%")` constraint?
CREATE TABLE Company (
    AccountID  varchar(25),
    Name       varchar(20)  NOT NULL,
    Website    varchar(255) DEFAULT 'https://sample.com',
    Categories varchar(255) NOT NULL DEFAULT '[]',
    PRIMARY KEY (AccountID),

    CHECK (CHAR_LENGTH(AccountID) BETWEEN 4 AND 25),
    CHECK (CHAR_LENGTH(Name) BETWEEN 2 AND 20),
    CHECK (Website LIKE "https://%")
);
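A minimal way to test this yourself, assuming MySQL 8.0.16 or later where CHECK constraints are actually enforced (the table below is hypothetical and its default deliberately violates its own constraint):

-- Hypothetical table whose DEFAULT violates its CHECK constraint.
CREATE TABLE check_default_test (
    id      int PRIMARY KEY,
    website varchar(255) DEFAULT 'ftp://sample.com',
    CHECK (website LIKE 'https://%')
);

-- Explicit value that satisfies the constraint: succeeds.
INSERT INTO check_default_test (id, website) VALUES (1, 'https://ok.example');

-- Row that falls back to the default: the default value is what gets evaluated,
-- so this insert should fail with a "check constraint ... is violated" error.
INSERT INTO check_default_test (id) VALUES (2);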
Rainning (131 rep)
Apr 21, 2021, 07:03 AM • Last activity: Jun 9, 2025, 03:03 AM
0 votes
2 answers
251 views
Best approach for a one-to-many list - how to store/access some related count info?
I have a table with persons, and other tables with interactions, notes and tasks, all related to person(s). I need quick access to:

- last interaction date
- number of interactions in the past 1, 6, and 12 months
- number of notes (total)
- number of tasks

All this info is used for sorting and highlighting in a list/table. When the user selects a person, all related info can be requested from the DB; the question is how to get this data for all persons from the DB.

**Version 1**: sub-query and add the info for each person in the table. Could be intensive as the tables grow.

**Version 2**: store this info in the person table and update it when new info is created/modified. This violates the 'single source of truth' principle and could be prone to errors.

Any ideas?
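A Postgres-flavoured sketch of Version 1; the table and column names (person, interaction, note, task, happened_at) are assumptions, not from the question:

-- Counts and last-interaction date for every person in one pass.
SELECT p.id,
       MAX(i.happened_at) AS last_interaction,
       COUNT(CASE WHEN i.happened_at >= CURRENT_DATE - INTERVAL '1 month'   THEN 1 END) AS interactions_1m,
       COUNT(CASE WHEN i.happened_at >= CURRENT_DATE - INTERVAL '6 months'  THEN 1 END) AS interactions_6m,
       COUNT(CASE WHEN i.happened_at >= CURRENT_DATE - INTERVAL '12 months' THEN 1 END) AS interactions_12m,
       (SELECT COUNT(*) FROM note n WHERE n.person_id = p.id) AS note_count,
       (SELECT COUNT(*) FROM task t WHERE t.person_id = p.id) AS task_count
FROM person p
LEFT JOIN interaction i ON i.person_id = p.id
GROUP BY p.id;

With indexes on interaction(person_id, happened_at), note(person_id) and task(person_id) this stays reasonably cheap; if the tables grow very large, something like Version 2 or a materialized view may still be needed.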
SharpBCD (103 rep)
Mar 20, 2021, 06:53 PM • Last activity: May 24, 2025, 03:04 AM
-1 votes
1 answer
57 views
What are the best practices for a banking relational database?
**I don't have any particular problem in mind, I am just looking for best practices.** I am creating a bank database. Nothing too important or involving real money; nevertheless, I think it would be cool to have 'banking-grade security', and it is a learning opportunity to do it as if it were real money. I was wondering what best practices are used by the banking industry in order to have a fool-proof, reliable, performant, and concurrent system for writing financial transactions to a database. Let's consider the simplest case: users have one single account, and they can make transactions with other users. **If anyone has an article, YouTube video, blog, etc,** detailing best practices for databases in the banking industry: isolation levels, how to handle concurrent reads/writes, the type of IDs (snowflakes, sequential, etc), how to design the tables, how to handle replication, how to handle backups, how to make it performant, how to scale it, etc. Here is how I was doing it, with two tables in a Postgres SQL database: a user table and a ledger table. These items are how I was imagining it being secure:

1. Locking the database with a password. For example, in Postgres this would mean creating a user with a password that the database belongs to.
2. All monetary values being integers, expressed in the least denomination. For example, for USD, USD 0.50 would be Integer(50).
3. Concurrent connections. Allow multiple connections to the database for performance.
4. ACID transactions, with serializable isolation level, on all reads/writes of the ledger table, to prevent phantom reads of user balance, etc. (I don't know if it hurts performance.)
5. Withdrawals and deposits can be thought of as money going to or coming from a special user (say, user id 0) in the ledger table. So, if user 123 makes a deposit, money flows from 0 to 123. If user 777 makes a withdrawal, money flows from 777 to 0. User 0 can be thought of as the bank itself, and tracks all of the bank's debts.

**That is what I thought. However, I don't know if I missed any security practice, or if there exists a better way of doing things.**
User
----------
user_id,
username,
email,
Ledger
----------
id,                # Unique ID for the transaction.
from_user_id,      # User where the money is coming from.
to_user_id,        # User where the money is going to.
from_balance,      # Balance of from_user after transaction.
to_balance,        # Balance of to_user after transaction.
amount,            # The amount to be sent, an integer.
date,              # Date and time of when it happened.
description,       # Description.
Here is the Postgres schema I am using:
CREATE TABLE ledger (
    id bigint primary key,
    from_user_id bigint not null,
    to_user_id bigint not null,
    from_balance bigint not null,
    to_balance bigint not null,
    value bigint not null,
    transfer_date int not null,
    description varchar(256),
    FOREIGN KEY (from_user_id) REFERENCES "user"(user_id),
    FOREIGN KEY (to_user_id) REFERENCES "user"(user_id)
);
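As a rough sketch of point 4, a transfer can be wrapped in a serializable transaction. This is only an illustration: the balance is derived from the ledger itself, the literal ids and amounts are made up, and the retry-on-serialization-failure logic that SERIALIZABLE requires is not shown.

BEGIN ISOLATION LEVEL SERIALIZABLE;

-- Current balance of the sender, derived from the ledger
-- (incoming minus outgoing; user 0 entries cover deposits and withdrawals).
SELECT COALESCE(SUM(CASE WHEN to_user_id   = 123 THEN value END), 0)
     - COALESCE(SUM(CASE WHEN from_user_id = 123 THEN value END), 0) AS balance
  FROM ledger
 WHERE 123 IN (from_user_id, to_user_id);

-- The application checks that balance >= 50 here, then records the transfer.
INSERT INTO ledger (id, from_user_id, to_user_id, from_balance, to_balance,
                    value, transfer_date, description)
VALUES (1001, 123, 456, 950, 1050, 50,
        extract(epoch FROM now())::int, 'example transfer');

COMMIT;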
phy0072 (1 rep)
May 9, 2025, 05:05 PM • Last activity: May 10, 2025, 08:38 AM
1 vote
1 answer
1190 views
How do I set up row level security based upon relationship in another table in PostgreSQL?
I'm working on a social media app with PostgreSQL where I need the following features:

- Users must be able to search for other users based upon their display names (display names are public)
- Users should not be able to see unconnected users' email addresses
- Users with confirmed connections must be able to view the email address of any connected user.

Right now, the "protected" data (not sure if that's the right word) is just limited to email addresses, but I can see wanting to do something similar in the future with other columns that might be added. Here is how the tables are currently set up. I'm really not great at database administration, so I could have this all totally wrong.

Users

| table_schema | table_name | column_name | udt_name |
| ------------ | ---------- | ---------------------- | ----------- |
| public | users | id | uuid |
| public | users | email | text |
| public | users | display_name | text |
| public | users | last_post_published_at | timestamptz |

User connections

| table_schema | table_name | column_name | udt_name |
| ------------ | ---------------- | ---------------- | ----------- |
| public | user_connections | id | int8 |
| public | user_connections | created_at | timestamptz |
| public | user_connections | user_a_id | uuid |
| public | user_connections | user_b_id | uuid |
| public | user_connections | user_a_confirmed | bool |
| public | user_connections | user_b_confirmed | bool |

What's the best way to achieve the features and security set that I'm going for? Essentially what I *thought* I needed was row level security based upon some sort of relationship established in a foreign key, but now I'm suspecting I need to look at it from a completely different angle.
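A hedged sketch of one common pattern: a row level security policy on users that checks the user_connections table with EXISTS. The current_setting('app.current_user_id') call is an assumption about how the application identifies the logged-in user (Supabase-style auth.uid() or one database role per user would play the same part):

ALTER TABLE users ENABLE ROW LEVEL SECURITY;

CREATE POLICY connected_users ON users
    FOR SELECT
    USING (
        id = current_setting('app.current_user_id')::uuid   -- you can always see yourself
        OR EXISTS (
            SELECT 1
              FROM user_connections c
             WHERE c.user_a_confirmed AND c.user_b_confirmed
               AND ((c.user_a_id = users.id AND c.user_b_id = current_setting('app.current_user_id')::uuid)
                 OR (c.user_b_id = users.id AND c.user_a_id = current_setting('app.current_user_id')::uuid))
        )
    );

Because RLS filters whole rows rather than individual columns, the public "search by display name" requirement would still need a separate path, for example a view exposing only id and display_name that is granted to everyone.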
aduensing (11 rep)
Feb 26, 2022, 05:18 PM • Last activity: May 4, 2025, 01:03 PM
1 vote
2 answers
2050 views
Database Design for Pokémon Review
I am trying to learn more about relational databases, so I built a Pokémon database gathering the stats available on the Smogon website. I have done some research thus far and would like to get some insight on my current database design and normalization. What are your thoughts and suggestions on my current database design?

Database Design

Here is my database model, which I put together using the information below.

(MCD diagram)

Database Design V2

I updated my database design with the following modifications:

- Removed table: *« SPREADS »*
- Removed table: *« CHECKS_COUNTERS »*
- Removed table: *« LEADS »*
- Modified the entity *« POKEMON_MOVESET_SPREADS »* to integrate the data from the table *« SPREADS »*.
- Modified the entity *« POKEMON_MOVESET_CHECKS_COUNTERS »* to integrate the data from the table *« CHECKS_COUNTERS »*.
- Modified the entity *« TIERS_ASSOCIATION »* to integrate the data from the *« LEADS »* table.
- Changed the *« POKEMON_MOVESET_\* »* relations from (0,N) to (1,N).
- Added a name alias to the *« TIERS »* and *« GENERATION »* tables.

(MCD V2 diagram)

I think there is a problem between *« TIERS_ASSOCIATION »* and *« POKEMON_MOVESET »*: I think the table *« POKEMON_MOVESET »* should not exist, but then how do I make the association between *« TIERS_ASSOCIATION »* and the other entities (*« POKEMON_MOVESET_\* »*)?

Information:

First, in order to obtain the data, it is necessary to choose several values:

- *Date*: the file date *(1 file per month for each category)*
- *Level*: player level *(0, 1500, 1630, ...)*
- *Tier*: the tier in which the player plays *(UU, OU, UBER, ...)*
- *Generation*: the generation of Pokémon *(Gen 1, ..., Gen 8)*

There are therefore **4 essential steps before having the data**. Once all the previous data has been selected, it is possible to obtain the information in the table below; only the "Usage %" column is of interest *(tas_percent in my DB design)*.
+ ---- + ------------------ + --------- + ------- + ------- + ------- + ------- + 
 | Rank | Pokemon            | Usage %   | Raw     | %       | Real    | %       | 
 + ---- + ------------------ + --------- + ------- + ------- + ------- + ------- + 
 | 1    | Dragapult          | 40.61997% | 1335515 | 37.505% | 1003505 | 36.128% | 
 | 2    | Corviknight        | 37.54771% | 995958  | 27.969% | 785209  | 28.269% | 
 | 3    | Clefable           | 31.23795% | 661575  | 18.579% | 530840  | 19.111% | 
 | 4    | Seismitoad         | 30.49124% | 716229  | 20.113% | 611845  | 22.028% | 
 | 5    | Excadrill          | 25.75186% | 930790  | 26.139% | 732315  | 26.365% |
But we can also get the moveset, like this:
+----------------------------------------+ 
 | Dragapult                              | 
 +----------------------------------------+----------------------------------------+
 | Raw count: 1424591                     | Moves                                  |
 | Avg. weight: 0.0449491556183           | Draco Meteor 53.768%                   |
 | Viability Ceiling: 88                  | Shadow Ball 49.102%                    |
 +----------------------------------------+ U-turn 42.770%                         |
 | Abilities                              | Dragon Darts 35.307%                   |
 | Infiltrator 86.702%                    | Fire Blast 32.237%                     |
 | Clear Body 10.261%                     | Phantom Force 26.490%                  |
 | Cursed Body  3.037%                    | Dragon Dance 25.111%                   |
 +----------------------------------------+ Thunderbolt 24.562%                    |
 | Items                                  | Substitute 22.935%                     |
 | Choice Specs 36.130%                   | Flamethrower 16.254%                   |
 | Leftovers 25.054%                      | Hex 16.005%                            |
 | Life Orb 16.426%                       | Will-O-Wisp 15.322%                    |
 | Choice Scarf  8.741%                   | Sucker Punch 13.483%                   |
 | Choice Band  6.208%                    | Steel Wing  6.940%                     |
 | Expert Belt  2.294%                    | Other 19.713%                          |
 | Light Clay  0.719%                     |----------------------------------------+ 
 | Other  4.428%                          | Teammates                              |
 +----------------------------------------+ Obstagoon +3.156%                      |
 | Spreads                                | Cloyster +2.606%                       |
 | Timid:0/0/0/252/4/252 34.595%          | Bisharp +1.901%                        |
 | Adamant:0/252/0/0/4/252 12.020%        | Lucario +1.416%                        |
 | Modest:0/0/0/252/4/252 11.202%         | Grimmsnarl +1.413%                     | 
 | Jolly:0/252/0/0/4/252  8.562%          | Shuckle +1.023%                        |
 | Timid:96/0/16/184/0/212  2.865%        | Mew +0.958%                            |
 | Timid:0/0/4/252/0/252  2.324%          | Excadrill +0.515%                      |
 | Other 28.433%                          |                                        |
 +----------------------------------------+----------------------------------------+ 
 | Checks and Counters                                                             | 
 | Grimmsnarl 72.094 (77.31±1.30) (10.3% KOed / 67.0% switched out)                | 
 | Bisharp 70.970 (76.61±1.41) (27.2% KOed / 49.5% switched out)                   | 
 | Sylveon 70.531 (74.48±0.99) (15.0% KOed / 59.5% switched out)                   |           
 +---------------------------------------------------------------------------------+

Moveset

The moveset depends on the 4 essential steps mentioned above. It is made up of several pieces of information (a sketch of one possible way to store these lists follows the spreads example below):

- *Abilities*: there are several abilities available, each with a different percentage.
- *Items*: there are several items available, each with a different percentage.
- *Moves*: there are several attack moves available, each with a different percentage.
- *Teammates*: a Pokémon can be used with one or more other Pokémon.
- *Checks and Counters (CaC)*: a Pokémon can have 0 or more CaC Pokémon.
- *Spreads*: a little more complicated, but a Pokémon has a preferred nature for combat (Timid, Adamant, ...). In addition, it is necessary to distribute skill points (HP, Atk, Def, Sp. Atk, Sp. Def, Speed). The percentage represents people's preferred choice, like this: [nature] [hp/atk/def/sp. atk/sp. def/speed] [percentage of use]
+----------------------------------------+----------------------------------------+
 | Spreads                                |  Other Example :                       |
 | Timid:0/0/0/252/4/252 34.595%          |  Impish:252/0/76/0/180/0 10.527%       |
 | Adamant:0/252/0/0/4/252 12.020%        |  Careful:252/0/4/0/252/0  7.727%       |
 | Modest:0/0/0/252/4/252 11.202%         |  Impish:252/0/252/0/4/0  4.331%        |
 | Jolly:0/252/0/0/4/252  8.562%          |  Careful:248/0/8/0/252/0  4.212%       |
 | Timid:96/0/16/184/0/212  2.865%        |  Impish:252/4/252/0/0/0  3.760%        |
 | Timid:0/0/4/252/0/252  2.324%          |  Careful:252/4/0/0/252/0  3.324%       |
 | Other 28.433%                          |  Other 66.118%                         |
 +----------------------------------------+----------------------------------------+
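As a hedged illustration of how these percentage lists could be stored (the table and column names below are my own, not from the diagrams): each moveset-level statistic becomes an association table keyed by the moveset plus the thing being counted, carrying the usage percentage.

-- Hypothetical association table for moves; items, abilities, teammates and
-- checks/counters would follow the same shape.
CREATE TABLE pokemon_moveset_move (
    moveset_id    bigint       NOT NULL REFERENCES pokemon_moveset (id),
    move_id       bigint       NOT NULL REFERENCES move (id),
    usage_percent numeric(6,3) NOT NULL CHECK (usage_percent BETWEEN 0 AND 100),
    PRIMARY KEY (moveset_id, move_id)
);

-- Spreads carry their own attributes instead of referencing a SPREADS table.
CREATE TABLE pokemon_moveset_spread (
    moveset_id    bigint       NOT NULL REFERENCES pokemon_moveset (id),
    nature        varchar(20)  NOT NULL,   -- Timid, Adamant, ...
    evs           varchar(30)  NOT NULL,   -- e.g. '0/0/0/252/4/252'
    usage_percent numeric(6,3) NOT NULL,
    PRIMARY KEY (moveset_id, nature, evs)
);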
Pilipe (11 rep)
Feb 19, 2020, 01:39 PM • Last activity: Apr 10, 2025, 11:04 PM
2 votes
2 answers
729 views
Is there a best practice for resolving dependencies of SQL objects for build during development?
I have a greenfield project involving a SQL database that is under development. There is no production data involved yet. There is a rather long list of specifications which I expect can be implemented with a collection of SQL tables, views, functions, and procedures. The overview of the inner loop of the development process is basically the following:

1. drop and recreate the database
2. create the tables, views, etc
3. import test data
4. run tests
5. make changes to code
6. repeat beginning at (1)

The development has already involved a number of refactorings. The results of these refactorings are good. However, they usually precipitate a period of [dependency hell](https://en.wikipedia.org/wiki/Dependency_hell#:~:text=Dependency%20hell%20is%20a%20colloquial,versions%20of%20other%20software%20packages.) in step (2), during which I have to manually re-resolve the order in which the files defining the objects are executed. That file order is currently hard-coded as a list of file names in a script. Each file has exactly one object defined in it. The process of manually resolving dependencies works, but is labor-intensive. For C and .NET projects in the past I have used [make](https://en.wikipedia.org/wiki/Make_(software)#:~:text=In%20software%20development%2C%20Make%20is,to%20derive%20the%20target%20program.) and [MSBuild](https://en.wikipedia.org/wiki/MSBuild) to manage dependencies, respectively. I can almost imagine how to manage SQL dependencies with those tools, but I haven't seen it done and I'd rather not re-invent the wheel if there is a method that is already proven. A search of Stack Overflow for [makefile](https://stackoverflow.com/questions/tagged/makefile+sql) and [MSBuild](https://stackoverflow.com/questions/tagged/msbuild+sql) with SQL didn't produce any results that were obviously on-point. Is there a best practice for resolving dependencies of SQL objects for build during development?
alx9r (449 rep)
Feb 25, 2023, 05:59 PM • Last activity: Apr 5, 2025, 03:47 PM
2 votes
1 answer
1561 views
Adding data to a DB as migrations
**Some Background**

We currently have a *PostgreSQL* DB that is "read-only", i.e. no user is allowed to write to it and all insertions are done in a controlled manner. For a while, these were done by manually adding the data, or by creating a copy of the DB locally and using pg_dump to put it into the production environment. There were obvious limitations with that, and since there was also the need to version the DB (i.e. an addition to the contents or a change of structure means we need to update the version number), we decided to use [flyway](https://flywaydb.org/). This was good since we get versioning "for free" by using the migrations. We have been using Flyway not only to modify the structure of tables and the DB, but also to insert new data.

**The Problem**

The migration files are getting quite big, and applying the migrations is becoming quite a time-intensive process, especially when applying from scratch. Since DB management/engineering is not my forte, I've decided to reach out:

**The Question**

Are there best practices or a better way to handle this situation, i.e. structural changes AS WELL as data additions? My best guess was to use the migration files only for structural changes and keep the data in files (CSV or other) that get added programmatically after the migrations run. But this is only OK for the first time; after that the file would be obsolete and subsequent files would be used for new data matching possible structural changes...
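One way to picture the split suggested in the question (the file names, the country table, and the COPY source path are assumptions, not something Flyway prescribes): versioned migrations carry only structural changes, while bulk data lives in CSV files loaded by a repeatable migration or a separate post-migration step.

-- V7__add_country_table.sql  (hypothetical versioned migration: structure only)
CREATE TABLE country (
    code text PRIMARY KEY,
    name text NOT NULL
);

-- R__load_country_data.sql  (hypothetical repeatable migration: data only;
-- it reruns whenever its checksum changes, so it is written to be idempotent)
TRUNCATE country;
COPY country (code, name)
FROM '/data/country.csv'        -- server-side path; \copy would load client-side
WITH (FORMAT csv, HEADER true);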
fditz (121 rep)
Jan 13, 2022, 12:20 PM • Last activity: Feb 8, 2025, 05:05 PM
3 votes
2 answers
573 views
Why is running database integrity checks in parallel beneficial if you are following best practice on Enterprise Edition?
On Enterprise Edition and Developer Edition, database integrity checks (e.g. DBCC CHECKDB) can run in parallel. According to best practice, backups should be tested regularly. So if I'm following best practice, I would be regularly restoring my production Enterprise Edition box's backups onto a non-production Developer Edition box and running CHECKDB there. This means that **I would never actually be running database integrity checks on my production Enterprise Edition box**. So, why is running database integrity checks in parallel beneficial if you are following best practice on Enterprise Edition? The exception to this is the system databases, but they're so tiny that running CHECKDB in parallel is unlikely to matter.
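For reference, the offloaded check described above boils down to something like this on the Developer Edition box (database name, logical file names, and paths are placeholders; the MAXDOP option requires a reasonably recent SQL Server build):

-- Restore the latest production backup on the non-production box...
RESTORE DATABASE ProdCopy
FROM DISK = N'\\backupshare\Prod\Prod_full.bak'
WITH MOVE N'Prod'     TO N'D:\Data\ProdCopy.mdf',
     MOVE N'Prod_log' TO N'L:\Log\ProdCopy_log.ldf',
     REPLACE;

-- ...and run the integrity check there, capped at 4 parallel threads.
DBCC CHECKDB (ProdCopy) WITH NO_INFOMSGS, ALL_ERRORMSGS, MAXDOP = 4;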
J. Mini (1225 rep)
Jan 15, 2025, 07:15 AM • Last activity: Feb 6, 2025, 11:00 AM
2 votes
1 answer
501 views
Why is it considered best practice to partition columnstore tables?
Prior to SQL Server 2016, partitioning columnstore indexes was considered pretty much mandatory because the locks taken on them during inserts/updates/deletes were extreme. However, as of SQL Server 2016, the locks taken on columnstore indexes are [much less intense](https://web.archive.org/web/20160603180253/http://www.nikoport.com:80/2015/09/22/columnstore-indexes-part-67-clustered-columstore-isolation-levels-transactional-locking). Why, then, is it still considered best practice to partition them? I am aware of [the trick](https://web.archive.org/web/20170224141946/http://www.nikoport.com/2014/12/02/clustered-columnstore-indexes-part-45-multi-dimensional-clustering/) where you can improve alignment by switching out a partition, building a clustered rowstore index, replacing that index with columnstore, and then switching the partition back in. However, that's just a mild performance optimisation that rarely survives a rebuild and was largely made redundant by [SQL Server 2022's ordering features](https://learn.microsoft.com/en-us/sql/relational-databases/indexes/ordered-columnstore-indexes?view=sql-server-ver16).
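For context, the pattern under discussion looks roughly like this (partition function/scheme names, boundaries, and table columns are placeholders):

-- Placeholder partitioning objects on a monthly date key.
CREATE PARTITION FUNCTION pf_SaleDate (date)
    AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

CREATE PARTITION SCHEME ps_SaleDate
    AS PARTITION pf_SaleDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.FactSales
(
    SaleDate  date  NOT NULL,
    ProductId int   NOT NULL,
    Quantity  int   NOT NULL,
    Amount    money NOT NULL
) ON ps_SaleDate (SaleDate);

-- The clustered columnstore index inherits the table's partitioning.
CREATE CLUSTERED COLUMNSTORE INDEX ccs_FactSales ON dbo.FactSales;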
J. Mini (1225 rep)
Jan 20, 2025, 07:29 AM • Last activity: Jan 20, 2025, 01:30 PM
0 votes
1 answer
483 views
Is it safe to create a MySQL database by just creating its directory?
**context**: I want a separate encrypted filesystem per database (to have a separate key) on MySQL 5.7, but if I mount it before I execute CREATE DATABASE foo the statement fails with the error database exists, and mounting the filesystem afterwards is a bit risky to me. So I noticed I can "create" a database in MySQL by just creating a folder inside the MySQL datadir, and creating tables afterwards causes no trouble. I know this is dirty, but it simplifies the integration with the creation of the filesystem underneath (since the database appears as the folder where the tables live). Is creating a database this way safe?
Baptiste Mille-Mathias (130 rep)
Apr 20, 2021, 08:22 AM • Last activity: Jan 6, 2025, 06:03 AM
2 votes
2 answers
1518 views
How to delete foreign key with lots of references
Table B has several thousand records that reference table A with a foreign key. The application sends a request to delete a row from table A. It's important that the synchronous operation be immediate and not be at risk of timeouts. If a cascade delete is used, and thus several thousand records will be deleted as well, could it cause the deletion to take a long time (i.e. not be immediate)? Assuming that it could lead to a long deletion operation, what could be done as an alternative? I've thought of perhaps marking the record as deleted and performing the actual deletion in some background process (where the referencing rows would be deleted before the parent), but this approach feels rather error prone (since all of the existing queries would need to reference this new markedAsDeleted column). Any ideas?
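A T-SQL-flavoured sketch of the background-deletion alternative (the table and column names are assumptions): delete the children in small batches, then the parent, so no single statement holds locks for long.

-- Hypothetical names: table_b(b_id, a_id), table_a(a_id, is_deleted).
DECLARE @id bigint = 42;

-- Step 1 (synchronous, immediate): hide the parent row.
UPDATE table_a SET is_deleted = 1 WHERE a_id = @id;

-- Step 2 (background job): remove children in batches until none remain.
WHILE 1 = 1
BEGIN
    DELETE TOP (1000) FROM table_b WHERE a_id = @id;
    IF @@ROWCOUNT = 0 BREAK;
END;

-- Step 3: remove the parent once no references are left.
DELETE FROM table_a WHERE a_id = @id AND is_deleted = 1;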
CuriousGeorge (21 rep)
Jun 30, 2019, 03:06 PM • Last activity: Jan 5, 2025, 11:04 PM
0 votes
0 answers
26 views
Best practice for distinguishing periods from timestamps in database tables
I am working with weather data: historic data which was measured and then documented, and on the other hand predicted data. In both cases it is possible for the data provider to hand over data either for periods or for timestamps. Example:

1. Temperature is measured once every 10 minutes and is provided in °C
2. Global radiation is continuously measured for an hour and the sum is provided in kWh per m²

The first example is a point-in-time measurement with a timestamp and the second is a continuous measurement with a period start and a period end. I would store this in a table like this:

- Fk_Phenomenon_Id = 12 --> Temperature
- Fk_Phenomenon_Id = 13 --> Global Radiation
- Fk_Unit_Id = 3 --> °C
- Fk_Unit_Id = 5 --> kWh per m²

| ID | Fk_Phenomenon_Id | Value | Fk_Unit_Id | From | To |
|----|----|----|----|----|-----|
| 1 | 12 | 20 | 3 | 01.01.2024 10:00:00 | 01.01.2024 10:00:00 |
| 2 | 13 | 0.15 | 5 | 01.01.2024 10:00:00 | 01.01.2024 11:00:00 |

As you can see, the From and To columns are the same if it is a point in time; if it is a period, they differ. There is, however, also the possibility to use just one timestamp column and add a foreign key to a table which describes the period type. This table would be referenced by Fk_PeriodType_Id and would contain entries like "leading hour", "trailing hour", "point in time", "trailing 10min" etc. Fk_PeriodType_Id is just one of many possibilities. I would like your advice on what a good design is for a table storing the described weather data. I am explicitly not asking for the best technical solution considering normalization and so on. I am focused on usability for automated processes and humans: humans need to be able to read the table and filter it if needed, and a program accessing the database needs to be able to work with the data with reasonable performance, while the table still stays easily comprehensible for developers. The most helpful to me will be reports of experience with similar business needs. I have a solution in mind and am interested in what can go wrong when I apply it.
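For reference, a minimal Postgres-flavoured sketch of the From/To variant described above (type and constraint choices, and the referenced phenomenon and unit tables, are my assumptions):

CREATE TABLE measurement (
    id               bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    fk_phenomenon_id int         NOT NULL REFERENCES phenomenon (id),
    value            numeric     NOT NULL,
    fk_unit_id       int         NOT NULL REFERENCES unit (id),
    measured_from    timestamptz NOT NULL,
    measured_to      timestamptz NOT NULL,
    -- a point-in-time reading is simply a zero-length period
    CHECK (measured_to >= measured_from)
);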
Merlin Nestler (159 rep)
Sep 30, 2024, 09:51 AM • Last activity: Sep 30, 2024, 11:07 AM
0 votes
1 answer
806 views
SQL: Entity belongs (many-to-one) to one of two different entities
Let's say I have organization and person, and they each have a one-to-many relationship with post. A post can only be owned by either an organization or a user (person). What I'm doing right now is making the foreign keys nullable and adding a check constraint to ensure that exactly one of organization_id and person_id has a value and the other one is null. Is this considered valid SQL design (best practice)? What if I had more than two entities that could be the owner of a certain post?
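For concreteness, the arrangement described above looks roughly like this (Postgres-flavoured sketch; num_nonnulls exists in PostgreSQL 9.6+, and elsewhere the same rule can be spelled out with CASE expressions):

CREATE TABLE post (
    id              bigint PRIMARY KEY,
    organization_id bigint REFERENCES organization (id),
    person_id       bigint REFERENCES person (id),
    body            text NOT NULL,
    -- exactly one owner: one of the two FKs must be set, never both or neither
    CHECK (num_nonnulls(organization_id, person_id) = 1)
);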
Yousif (27 rep)
Oct 8, 2021, 09:20 PM • Last activity: Jul 31, 2024, 12:43 PM
10 votes
6 answers
11910 views
Parallelism Best Practices
What are the best practices with setting parallelism in general? I know that SQL Server defaults to 0 to use all available processors, but in what instance would you want to change this default behavior? I remember reading somewhere (I'll have to look for this article) that for OLTP workloads you should turn off parallelism (set maxdop to 1). I don't think I completely understand **why** you would do this. When would you leave maxdop up to SQL Server (0)? When would you turn off parallelism (1)? When would you explicitly set maxdop to a particular number of processors? What causes parallelism?
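For reference, the server-wide setting being discussed is changed like this (the value 8 is purely illustrative, not a recommendation):

-- Expose advanced options so 'max degree of parallelism' is visible.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- 0 = let SQL Server use all schedulers; 1 = no parallelism; N = cap at N.
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;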
anon
Sep 29, 2011, 01:13 AM • Last activity: Jul 24, 2024, 04:40 PM
0 votes
3 answers
17661 views
Best Practice for Oracle SGA and PGA Memory Components Size
I have a server that has 128 GB of RAM. I have read here and there that the size of SGA + PGA can be 80% of the RAM. I wonder if there is any best practice for this configuration that could allow Oracle Database to utilize the hardware even better? There is also the assumption that this server will be used only to serve the Oracle database.
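For illustration, with 128 GB of RAM the 80% rule of thumb might be set along these lines (the split between SGA and PGA below is an assumption, not a recommendation; SPFILE-scoped changes take effect after a restart):

-- Roughly 80% of 128 GB, split as ~80 GB SGA and ~22 GB PGA (illustrative only).
ALTER SYSTEM SET sga_max_size = 80G SCOPE = SPFILE;
ALTER SYSTEM SET sga_target = 80G SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 22G SCOPE = SPFILE;
-- Alternatively, memory_target can let Oracle manage both areas automatically.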
Ali Hamidi (1 rep)
Jan 5, 2020, 12:02 PM • Last activity: Jun 11, 2024, 07:25 AM
178 votes
20 answers
63529 views
Should developers be able to query production databases?
Should developers be given permission to query (SELECT / read only) production databases? The previous place I worked, the development team had the db_datareader role; where I work now the development team can't even connect to the production instance. One of the test instances is a copy of production restored from a production backup once a week, so there aren't any problems with developers actually seeing the data. What good reasons are there for not allowing developers to query production (except for simply not wanting them to have access to read sensitive data)?
Tom Hunter (2179 rep)
Jan 6, 2012, 03:57 PM • Last activity: May 28, 2024, 11:57 AM
28 votes
7 answers
8132 views
Is table aliasing a bad practice?
I remember learning to do this in a DBMS course for Master of Information Services students. To save yourself some typing, you can type:

SELECT t1.id, t2.stuff
FROM someTable t1
INNER JOIN otherTable t2 ON t1.id = t2.id;

But... why is this acceptable in stored procedures and such? It seems like all it does is harm the readability of the statement while saving an extremely minor amount of time. Is there any functional or logical reason to do this? It seems to add ambiguity rather than remove it; the only acceptable reason I can see for using this format is if you were adding a semantically meaningful alias -- for example, FROM someTable idsTable -- when the table name isn't descriptive enough. Is table aliasing a bad practice, or is this just a misuse of a helpful system?
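To make the contrast concrete, here is the same style of join with semantically meaningful aliases instead of t1/t2 (the table names are invented for the example):

-- Aliases that carry meaning rather than just save keystrokes.
SELECT cust.id, ord.stuff
FROM customer     AS cust
INNER JOIN orders AS ord ON ord.customer_id = cust.id;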
Zelda (2103 rep)
Sep 20, 2011, 02:17 PM • Last activity: May 15, 2024, 08:04 AM
-1 votes
1 answer
94 views
MySQL design and best design practices
I am designing my first database for a mobile application, which is also planned to be developed into a web application. The photo shows the part responsible for user authentication and information about the user, their current session, etc. I would like to hear from someone with experience what they think about this design, what is worth changing and why, and what practices to follow to make the database effective and safe; I want to understand whether I am thinking in the right direction. Thank you in advance for any comments. I don't know if it matters, but the potential number of users is about 30,000, in the best case about 200,000.
GetReady (1 rep)
Feb 20, 2024, 12:56 PM • Last activity: Feb 20, 2024, 01:07 PM
76 votes
4 answers
61021 views
Function Performance
Coming from a MySQL background, where stored procedure performance (older article) and usability are questionable, I am evaluating PostgreSQL for a new product for my company. One of the things I would like to do is move some of the application logic into stored procedures, so I'm here asking for DOs and DON'Ts (best practices) on using functions in PostgreSQL (9.0), specifically regarding performance pitfalls.
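As a small, hedged example of the kind of performance pitfall involved: a simple function written in LANGUAGE sql with an appropriate volatility (STABLE or IMMUTABLE, single SELECT) can often be inlined into the calling query by the planner, whereas an equivalent PL/pgSQL function runs as a black box, row by row. The orders table and column names below are illustrative only:

-- A candidate for inlining: single SELECT, LANGUAGE sql, STABLE.
CREATE FUNCTION active_orders(p_customer_id bigint)
RETURNS SETOF orders
LANGUAGE sql STABLE AS
$$
    SELECT * FROM orders
    WHERE customer_id = p_customer_id AND status = 'active';
$$;

-- Used like a table; the planner can combine it with the outer WHERE clause.
SELECT * FROM active_orders(42)
WHERE created_at > now() - interval '7 days';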
Derek Downey (23568 rep)
Nov 18, 2011, 06:47 PM • Last activity: Feb 16, 2024, 04:01 AM