Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
2
votes
1
answers
1454
views
How to model a database for prices that vary depending on their categories and parameters?
My first post on DBA! Apologies in advance if I made any mistake.
Before jumping into the schema and tables, I would like to share what I am trying to achieve first. I am working on sort of a courier application, where I have some `categories`, and each category has a pre-defined `price`. But determining the price is a bit ugly (an absence of symmetry and patterns; at least, I can't seem to find any). I will give you an example:
Consider the following categories: Document, Heavy Document, Laptop, Carton, Heavy Carton.
1) **Document:** It's for the lighter documents, which are below 0.5kg. The price is 20$, fixed.
*[price stored in the prices table: 20.00]*
> e.g. For an item of 300g, the price will be 20$.
2) **Heavy Document:** This is for the documents that are over 0.5kg. Unlike the Document category, it doesn't have a fixed price! Rather, it has a unit price: 10$ per kg, which will be applied to each kg exceeding 0.5kg.
*[price stored in the prices table: 10.00]*
> e.g. For an item of 2kg, the price will be 35$ (1.5kg * 10$ = 15$, plus the 20$ base = 35$)
3) **Laptop:** Straightforward, 100$. Nothing special about it, no constraint whatsoever.
*[price stored in the prices table: 100.00]*
> e.g. For an item of 2kg, the price will be 100$ regardless.
4) **Carton:** Here comes an interesting one. Until now, there was only one dependency: weight. But this one has an additional dependency: dimension. This is somewhat similar to the Document category. For cartons that are below 3 Cubic Feet (CF), the price is 80$ per CF. The difference between the Document and Carton categories is that Document has a fixed price, whereas Carton has a unit price. But wait, there's more. There is an additional constraint: the dimension-weight ratio, in this case 7kg per CF. If the item's weight crosses the ratio, each extra kg is charged 5$. It's confusing, I know. An example might help:
*[price stored in the prices table: 80.00]*
> e.g. For a carton of 80kg and 2CF; the price will be 490$. Here is how:
First calculate the regular charge: 80$*2CF = 160$
Now let's figure out if it crosses the **Ratio**: since 1CF = 7kg, 2CF = 14kg. But the item's weight is 80kg, so it *crosses the ratio (14kg)*.
Since it crosses the ratio, for all the extra kgs (80-14 = 66kg), each kg will cost 5$: 66*5 = 330$. Adding the regular charge: 330$+160$ = 490$.
5) **Heavy Carton:** This one is for cartons having a dimension bigger than 3CF. The difference from Carton is the unit price: Heavy Carton is 60$ per CF.
*[price stored in the prices table: 60.00]*
> e.g. For a carton of 80kg and 5CF; the price will be 525$. Here is how:
First calculate the regular charge: 60$*5CF = 300$
Now let's figure out if it crosses the **Ratio**: since 1CF = 7kg, 5CF = 35kg. But the item's weight is 80kg, so it *crosses the ratio (35kg)*.
Since it crosses the ratio, for all the extra kgs (80-35 = 45kg), each kg will cost 5$: 45*5 = 225$. Adding the regular charge: 300$+225$ = 525$.
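Taken together, the five rules are easier to see as code. Below is a minimal sketch of the whole calculation as one SQL expression; the `shipments` table and the `category`, `weight_kg`, and `volume_cf` columns are illustrative assumptions (they are not part of the poster's schema), and the constants simply mirror the examples above.

```sql
-- Hypothetical, hard-coded version of the five pricing rules above.
SELECT
  CASE category
    WHEN 'Document'       THEN 20.00
    WHEN 'Heavy Document' THEN 20.00 + (weight_kg - 0.5) * 10.00
    WHEN 'Laptop'         THEN 100.00
    WHEN 'Carton'         THEN 80.00 * volume_cf
         + GREATEST(weight_kg - volume_cf * 7, 0) * 5.00
    WHEN 'Heavy Carton'   THEN 60.00 * volume_cf
         + GREATEST(weight_kg - volume_cf * 7, 0) * 5.00
  END AS price
FROM shipments;
-- Checks against the examples: Heavy Document 2kg -> 35;
-- Carton 80kg/2CF -> 490; Heavy Carton 80kg/5CF -> 525.
```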
If you've read this far, I think I have convinced you that the business structure is really complicated. Now let's take a look at my `categories` schema:
+-------------------------+---------------------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------------------+---------------------------------+------+-----+---------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| name | varchar(191) | NO | | NULL | |
| created_at | timestamp | YES | | NULL | |
| updated_at | timestamp | YES | | NULL | |
| dim_dependency | tinyint(1) | NO | | NULL | |
| weight_dependency | tinyint(1) | NO | | NULL | |
| distance_dependency | tinyint(1) | NO | | NULL | |
| dim_weight_ratio | varchar(191) | YES | | NULL | |
| constraint_value | decimal(8,2) | YES | | NULL | |
| constraint_on | enum('weight','dim') | YES | | NULL | |
| size | enum('short','regular','large') | YES | | regular | |
| over_ratio_price_per_kg | decimal(8,2) | YES | | NULL | |
| deleted_at | timestamp | YES | | NULL | |
+-------------------------+---------------------------------+------+-----+---------+----------------+
Also the schema of the `prices` table (it's a polymorphic table; I'm hoping to create a `subcategories` table someday):
+----------------+---------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------+---------------------+------+-----+---------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| amount | decimal(8,2) | NO | | NULL | |
| created_at | timestamp | YES | | NULL | |
| updated_at | timestamp | YES | | NULL | |
| priceable_type | varchar(191) | NO | MUL | NULL | |
| priceable_id | bigint(20) unsigned | NO | | NULL | |
| deleted_at | timestamp | YES | | NULL | |
+----------------+---------------------+------+-----+---------+----------------+
How can I improve this structure to keep things as dynamic and coherent as possible?
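One direction worth considering, sketched here purely as an illustration (all names below are this editor's assumptions, not the poster's schema): store each category's pricing parameters as data in a single rule table, so the calculation becomes data-driven and a new category needs rows rather than schema changes.

```sql
-- Hypothetical rule table: one row of pricing parameters per category.
CREATE TABLE category_pricing_rules (
  category_id       INT UNSIGNED PRIMARY KEY,
  base_amount       DECIMAL(8,2) NOT NULL DEFAULT 0,  -- fixed charge (Document, Laptop)
  per_kg_amount     DECIMAL(8,2) NOT NULL DEFAULT 0,  -- unit price per kg over the threshold
  kg_threshold      DECIMAL(8,2) NULL,                -- e.g. 0.5 for Heavy Document
  per_cf_amount     DECIMAL(8,2) NOT NULL DEFAULT 0,  -- unit price per cubic foot (cartons)
  kg_per_cf_ratio   DECIMAL(8,2) NULL,                -- e.g. 7 for both carton categories
  over_ratio_per_kg DECIMAL(8,2) NULL                 -- e.g. 5 for both carton categories
);
```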
Eisenheim
(121 rep)
Aug 11, 2018, 09:37 AM
• Last activity: Aug 5, 2025, 08:06 AM
1
votes
1
answers
140
views
How should I design my database for an online voting web app?
Currently I'm trying to build a web app for online voting where registered users can create their own elections.
For the database: every time a user creates an election, should the system create the required tables for that election, or could I just use a few shared tables to hold everything?
1. case 1 (create the required tables for each election):
- user_table (id,name,email,etc)
- election_table (user_id,election_id,etc)
Per election created by the user above, the tables below will be created in the database, with the electionId as part of the table name:
- voter_electionId_table (voter_id,name,email,etc)
- candidate_electionId_table (candidate_id,no,name)
- ballot_electionId_table (vote_id,candidate_id)
example expected result:
2. case 2 (same tables for every election):
- user_table (id,name,email,etc)
- election_table (user_id,election_id,etc)
- voter_table (voter_id,election_id,etc)
- candidate_table (electionID,candidate_id,no,name,etc)
- ballot_table (electionId,vote_id,candidate_id)
expected result:
Which of the above is better? Or is neither of them good? (If so, please share your thoughts.)
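For reference, case 2 written out as DDL; this is only a sketch (types and key names are assumptions) showing the conventional shape where a single fixed set of tables is shared by all elections and every row is scoped by its election_id.

```sql
-- Case 2: one fixed set of tables for every election.
CREATE TABLE election (
  election_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  user_id     INT UNSIGNED NOT NULL            -- the creator of the election
);

CREATE TABLE candidate (
  candidate_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  election_id  INT UNSIGNED NOT NULL,
  name         VARCHAR(100) NOT NULL,
  FOREIGN KEY (election_id) REFERENCES election (election_id)
);

CREATE TABLE ballot (
  vote_id      INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  election_id  INT UNSIGNED NOT NULL,
  candidate_id INT UNSIGNED NOT NULL,
  FOREIGN KEY (election_id)  REFERENCES election (election_id),
  FOREIGN KEY (candidate_id) REFERENCES candidate (candidate_id)
);
```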


xedox_za
(11 rep)
Nov 29, 2023, 01:57 AM
• Last activity: Aug 3, 2025, 06:07 AM
1
votes
1
answers
516
views
What is the impact of DROP SCHEMA on concurrent read only workloads in PostgreSQL?
I have an ELT process that deals with a pretty small amount of data. I would like to have one schema called `prod` with the results of the previous ELT run and one schema called `dev` where I will perform the transformations of the current ELT run. Once the current ELT run is complete, I would like to make the switch by running something along the lines of:
start transaction;
drop schema prod;
alter schema dev rename to prod;
end transaction;
All queries issued outside of the ELT process are read only. How would MVCC handle this? What would happen to read-only queries that were running when the `drop schema` command is issued? What would happen to read-only queries issued after the `drop schema` command? What about after the end of the transaction?
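For context, here is how the swap would actually need to be written in PostgreSQL, since `DROP SCHEMA` fails on a non-empty schema without `CASCADE`; the comments summarise standard PostgreSQL locking behaviour, not a full answer.

```sql
BEGIN;
DROP SCHEMA prod CASCADE;        -- a non-empty schema needs CASCADE
ALTER SCHEMA dev RENAME TO prod;
COMMIT;
-- Both statements take ACCESS EXCLUSIVE locks on the affected objects, so
-- this transaction waits for in-flight readers of those tables to finish,
-- and new readers queue behind it until COMMIT.
```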
foobar0100
(641 rep)
May 2, 2016, 03:19 AM
• Last activity: Jul 28, 2025, 04:05 PM
0
votes
1
answers
145
views
Storing revisions and relations between revised items in 1:N and N:N scenarios for audits
I have several of the following situations in a MySQL schema that I am designing for a custom CRM/ERP solution.
1. 1:N related tables `SO` and `SO_ITEMS` (SO is just one example; I have several) that both need to have revisions. The data sets stored tend to get pretty big for some of these tables and, according to the business rules I gathered, there will be many changes that need auditing, so I predict that copying all the child items for each revision of the parent will not scale very well.
2. N:N related tables `SO` and `PO` (where I use a pivot table, of course) that also need to be audited, so that changes from at least one side imply (or may imply) storing an updated revision of the data entry from the other table. (We need not worry about the child `_items` data entries for each of these tables because, although they may be related, I don't require auditing for them for now.)
My strategy for now is to use a separate table suffixed with `_revision` for each of the above tables, and to store the current revision number in the normal table and a revision number for each revision, at least for the parents.
So my questions are:
* is my schema flawed in some major way that I haven't foreseen for this kind of scenario? (I suppose there are many other applications out there that have solved this problem.)
* should I make a direct relationship between `x_revision` and `y_revision`, or use the revision number and query through the normal (current revision) table? I want the variant that reduces complexity the most, and I'm pretty sure querying through is simpler for most operations (I suspect there will not be a big disparity between SELECT/INSERT/UPDATE in normal operation, so I don't need to optimise for one type of DB operation). Am I missing something?
* is it common practice to copy-on-write the revisions, or should I just lazily insert entries into the revisions table only when a change appears? Any possible pros/cons to these two options?
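For readers following along, the strategy the poster describes would look roughly like this (a sketch; all names and the placeholder business column are assumptions):

```sql
-- Live table carries a pointer to its current revision.
CREATE TABLE so (
  so_id            INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  current_revision INT UNSIGNED NOT NULL DEFAULT 1,
  status           VARCHAR(50) NOT NULL               -- placeholder business column
);

-- Revision table keeps one snapshot per (row, revision).
CREATE TABLE so_revision (
  so_id    INT UNSIGNED NOT NULL,
  revision INT UNSIGNED NOT NULL,
  status   VARCHAR(50) NOT NULL,                      -- snapshot of the business column
  PRIMARY KEY (so_id, revision),
  FOREIGN KEY (so_id) REFERENCES so (so_id)
);
```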
Byakkun
(1 rep)
May 11, 2016, 09:52 AM
• Last activity: Jul 20, 2025, 12:08 AM
0
votes
1
answers
158
views
SQL Schema vs Schema
As far as I can tell, the word “schema” is used in two different ways:
- The schema is the structure of the database, including the tables and how they are related
- The schema is a sort of namespace allowing you to create virtual collections of database objects
There is an article at https://dataedo.com/kb/data-glossary/what-is-database-schema which highlights the different meanings, but doesn’t appear to explain why they use the same name.
Are the different uses of the word in fact related in some subtle but very interesting way, so that one meaning naturally leads to the other?
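The second meaning is the one you can type; a minimal example (object names invented here) shows the namespace sense directly:

```sql
-- "Schema" as a namespace: the same table name can exist twice,
-- once per schema, with no conflict.
CREATE SCHEMA sales;
CREATE SCHEMA archive;
CREATE TABLE sales.orders   (id INT PRIMARY KEY);
CREATE TABLE archive.orders (id INT PRIMARY KEY);
```

The first meaning (the overall structure of the database) has no single statement behind it; it is the sum of all such definitions.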
Manngo
(3135 rep)
Nov 4, 2021, 10:36 PM
• Last activity: Jul 10, 2025, 03:03 AM
0
votes
1
answers
174
views
Creating synonyms across different DB schemas
This might seem a stupid question, but I'm pretty new to the DB world.
Is it good practice to have two Oracle schemas which have synonyms pointing to each other?
- like SCHEMA_A owns SYNONYM_1 pointing to TABLE_1 in SCHEMA_B
- and SCHEMA_B owns SYNONYM_2 pointing to TABLE_2 in SCHEMA_A
Apparently it is technically possible, but is it good practice? Doesn't it sound like a kind of circular reference, if you compare this to the "software modules" concept?
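For concreteness, the setup described above is just two statements in Oracle (using the names from the question; creating a synonym in another schema additionally requires the CREATE ANY SYNONYM privilege):

```sql
CREATE SYNONYM schema_a.synonym_1 FOR schema_b.table_1;
CREATE SYNONYM schema_b.synonym_2 FOR schema_a.table_2;
```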
Thanks in advance for your responses.
tetDev
(1 rep)
May 6, 2022, 01:10 PM
• Last activity: Jul 9, 2025, 08:05 AM
0
votes
2
answers
1584
views
How to virtually update a parent table when a child table gets updated
I have a parent table, let's say `CAR`, and multiple props/records for it in a child table, let's say `CAR_PROPS`.
Now `CAR` has a `modified_date` column like:
`modified_date timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,`
Since a `CAR` along with all its properties is treated as a single object, we expect any change to its props (`CAR_PROPS`) alone to also update the parent table's (`CAR`'s) `modified_date`.
However, this does not happen: `CAR` itself is not really updated, so `modified_date` does not change when any of its child records are updated.
To simplify: I want the `modified_date` column to be updated even though I don't have any other column to update in that row.
How can I solve this problem?
NOTE: I don't want to pass the modified_date from the server; I just want to go with the DB timestamp.
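One common approach is a trigger on the child table that touches the parent row. A minimal MySQL sketch follows; it assumes key columns named `CAR.id` and `CAR_PROPS.car_id`, which the question does not show:

```sql
-- Touch the parent whenever a child row is updated.
CREATE TRIGGER car_props_after_update
AFTER UPDATE ON CAR_PROPS
FOR EACH ROW
  UPDATE CAR
     SET modified_date = CURRENT_TIMESTAMP
   WHERE id = NEW.car_id;
-- Matching AFTER INSERT and AFTER DELETE triggers would cover the other
-- change paths (the DELETE trigger must use OLD.car_id instead of NEW.car_id).
```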
Kanagavelu Sugumar
(87 rep)
Apr 24, 2018, 09:42 AM
• Last activity: Jul 5, 2025, 04:09 AM
0
votes
1
answers
213
views
Storing order of relation as JSON array
Let's assume I have Playlist, Song, and playlist_song tables.
Where is it better to store the song ordering data?
In Playlist as JSON (`songs_order = [song_id_1, song_id_3, song_id_2]`) or on `playlist_song.order`?
I am more tempted by storing it on Playlist, but would like some opinions about this.
I prefer it because with the second option there's another issue of how to limit updates: https://softwareengineering.stackexchange.com/questions/304593/how-to-store-ordered-information-in-a-relational-database
A constraint is that a Playlist will have no more than 1000 songs.
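For comparison, the second option in DDL form, as a minimal sketch (column names assumed):

```sql
-- Option 2: the ordering lives on the join table.
CREATE TABLE playlist_song (
  playlist_id INT NOT NULL,
  song_id     INT NOT NULL,
  position    INT NOT NULL,            -- 1-based slot within the playlist
  PRIMARY KEY (playlist_id, song_id),
  UNIQUE (playlist_id, position)       -- no two songs share a slot
);
```

Reordering then means rewriting `position` values for the affected rows, which is the update-limiting problem the linked question discusses; with at most 1000 songs per playlist, rewriting a whole playlist in one statement stays cheap.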
Damaon
(101 rep)
Feb 1, 2022, 10:33 AM
• Last activity: Jun 15, 2025, 01:00 AM
0
votes
1
answers
219
views
Survey DB Design. No user answers, only questions
The task is to put a bunch of surveys into SQL so that the researchers have examples for creating new questionnaires in the future (and can insert those questionnaires into the DB). So there is no information about the users, just the questions and answer options.
The questions are very different, but there are some types of answer options that work with many questions from many surveys (like 'Do you have a cat?' - 'Y/N', 'Have you ever eaten oysters?' - 'yes/no'; or 'Scale from 1 (very bad) to 5 (very good) how experienced you are in pumpkin soup cooking' - '1-5', 'Scale from 1 (not important) to 5 (very important) how important it is for you to have at least 1 pint of beer on St. Patrick's Day' - '1-5').
I'm concerned about the link between 'SURVEY_QUESTIONS' and 'ANSWERS_OPTION'. Is it enough to put only Question_typeID as the PK in SURVEY_QUESTIONS?
And I would appreciate any feedback on my db schema.

g84003
(1 rep)
Dec 10, 2021, 10:45 AM
• Last activity: Jun 11, 2025, 07:02 AM
0
votes
1
answers
227
views
MySQL: Is the default value checked by the `CHECK` constraint?
Say I have the following schema. Would the default value `https://sample.com` be checked by the `CHECK (Website LIKE "https://%")`?
CREATE TABLE Company (
  AccountID varchar(25),
  Name varchar(20) NOT NULL,
  Website varchar(255) DEFAULT 'https://sample.com',
  Categories varchar(255) NOT NULL DEFAULT '[]',
  PRIMARY KEY (AccountID),
  CHECK (CHAR_LENGTH(AccountID) BETWEEN 4 AND 25),
  CHECK (CHAR_LENGTH(Name) BETWEEN 2 AND 20),
  CHECK (Website LIKE "https://%")
);
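This is easy to answer empirically, assuming MySQL 8.0.16 or later (where CHECK constraints are actually enforced): insert a row that omits Website and let the default be filled in.

```sql
-- The CHECK is evaluated against the full row, defaults included, so this
-- succeeds because 'https://sample.com' matches 'https://%':
INSERT INTO Company (AccountID, Name) VALUES ('ACME', 'Acme');

-- If the DEFAULT were changed to a value that violates the constraint
-- (e.g. 'http://sample.com'), the same INSERT would fail with
-- ER_CHECK_CONSTRAINT_VIOLATED.
```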
Rainning
(131 rep)
Apr 21, 2021, 07:03 AM
• Last activity: Jun 9, 2025, 03:03 AM
0
votes
2
answers
247
views
Storing sports teams with multiple alternative names
I am designing a MySQL database to store some football match results and statistics in.
The data for populating the DB will be scraped from multiple sources and the names of the teams will be slightly different on each website. (Some have old names used before the team was rebranded and changed its name completely; some will have abbreviations such as United -> Utd; some will have FC in the name, some won't; other sources will have the names in a language different than English.)
I'm wondering what would be the best way to store that data.
My thinking is to use the TeamID in the main table and have an extra table as a dictionary for translating multiple variants of the team name into the ID. Of course it would be quite a hassle to populate such a "dictionary".
Example
TeamID | TeamName
-----------------------------
1 | Manchester United
1 | Man Utd
1 | MUFC
2 | PSG
2 | Paris Saint-Germain
This is actually a big problem with South American clubs, where one club can have five completely different names in various sources and it's not as easy as shortening United to Utd or abbreviating the name.
So I'll have a script scrape the name of a team, find it in the "dictionary" table, then write all the statistics and info into the main table using just the TeamID. That should also save some memory.
Please let me know if that's a good solution and if you have some different approaches for me.
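The "dictionary" described above in DDL form, as a sketch (names assumed): a unique alias maps each scraped variant to exactly one canonical team row.

```sql
CREATE TABLE team (
  team_id        INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  canonical_name VARCHAR(100) NOT NULL
);

CREATE TABLE team_alias (
  alias   VARCHAR(100) PRIMARY KEY,   -- 'Man Utd', 'MUFC', 'Manchester United', ...
  team_id INT UNSIGNED NOT NULL,
  FOREIGN KEY (team_id) REFERENCES team (team_id)
);

-- The scraper then resolves a name with one indexed lookup:
SELECT team_id FROM team_alias WHERE alias = 'Man Utd';
```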
user1822227
(1 rep)
Apr 29, 2021, 02:52 AM
• Last activity: Jun 1, 2025, 02:01 AM
0
votes
1
answers
268
views
Doctor CRM database design
I am designing a doctor CRM system. Now I need to have a list of patients in the doctor panel. What I do is find all the bookings which are related to doctors and extract user data from there. Also, doctors need to add pa...

niloufar padkan
(23 rep)
Aug 22, 2023, 05:45 PM
• Last activity: May 25, 2025, 09:07 PM
1
votes
1
answers
258
views
How can I simplify the process of importing only changed data?
## Background
Two years ago, I asked a question about how to model a food's relationship with its nutrients. Today, that design has evolved. The biggest change is that whenever a food is updated, we insert a new row into the `food_versions` table. We do not delete or update food versions.
This design was influenced by the following answer: https://dba.stackexchange.com/a/278893/240214. I've since learnt this is based on a Type 2 "slowly changing dimension".
### Additional information
A food version's nutrients (e.g. fat), allergens (e.g. milk), and diets (e.g. vegetarian) are stored in separate tables and can also change over time. For example, a new food version that has a change to its ingredients will often need new nutrient records as well.
## The relevant tables
CREATE TABLE "foods" (
"id" uuid DEFAULT gen_random_uuid() PRIMARY KEY, -- Surrogate key
"code" text NOT NULL, -- Natural key
"organisation_id" uuid NOT NULL REFERENCES "manufacturer"("id") ON DELETE CASCADE,
UNIQUE("code", "organisation_id") -- One natural key per manufacturer
);
CREATE TABLE "food_versions" (
"food_id" uuid REFERENCES "foods"("id") ON DELETE CASCADE,
"created_time" TIMESTAMP(6) WITH TIME ZONE DEFAULT NOW(),
"name" text NOT NULL,
"quantity" numeric NOT NULL,
"quantity_unit_id" text NOT NULL REFERENCES "units"("id"),
"ingredients" text NOT NULL,
PRIMARY KEY("food_id", "created_time")
);
CREATE TABLE "food_nutrients" (
"food_id" uuid REFERENCES "foods"("id") ON DELETE CASCADE,
"nutrient_id" text REFERENCES "nutrients"("id"),
"created_time" TIMESTAMP(6) WITH TIME ZONE DEFAULT NOW(),
"quantity" numeric NOT NULL,
"quantity_unit_id" text NOT NULL REFERENCES "units"("id"),
PRIMARY KEY("food_id", "nutrient_id", "created_time")
);
CREATE TABLE "food_allergens" (
"food_id" uuid REFERENCES "foods"("id") ON DELETE CASCADE,
"allergen_id" text REFERENCES "allergens"("id"),
"created_time" TIMESTAMP(6) WITH TIME ZONE DEFAULT NOW(),
"status" text NOT NULL, -- 'FREE_FROM', 'MAY_CONTAIN', 'CONTAINS'
PRIMARY KEY("food_id", "allergen_id", "created_time")
);
CREATE TABLE "food_diets" (
"food_id" uuid REFERENCES "foods"("id") ON DELETE CASCADE,
"diet_id" text REFERENCES "diets"("id"),
"created_time" TIMESTAMP(6) WITH TIME ZONE DEFAULT NOW(),
PRIMARY KEY("food_id", "diet_id", "created_time")
);
The data in these tables changes in bulk several times a year when we receive new catalogues (semi-structured spreadsheets) from food manufacturers.
## Our import process
1. Move the data we need from the manufacturer's semi-structured spreadsheet into our own structured spreadsheet. This looks similar to if all tables above were joined together:
| code | name | ... | fat | ... | peanuts | ... | vegetarian | ... |
| ---- | ------------ | --- | --- | --- | ------- | --- | ---------- | --- |
| 1 | Sausage roll | ... | 21 | ... | FALSE | ... | FALSE | ... |
2. For each row in the structured spreadsheet, select the food from the database by `code` and compare its fields against those in the spreadsheet. This also requires querying relations (nutrients, allergens, and diets).
3. If fields have changed, determine which tables to insert new rows into.
For brevity, I've not included cases where the update is an addition (e.g. a new food) or removal (e.g. delisting of a food or removal of an allergen).
It's worth noting that these problems are inherent in the supply chain, because manufacturers do not always create new codes when they update existing products.
## Question
**Is there a better design, than the one proposed below, to simplify our process of importing only changed data?**
The proposal: join all tables together into one "foods" table, like our structured spreadsheet, and add a column to store a hash for change comparison (a sketch of the hash follows the pros and cons below).
### Pros
- Instead of comparing all fields against those to be imported, we only have to compare the hashes.
- Any change (name, nutrient, etc.) results in a single transaction/insertion, which is easy to reason about.
- We almost always need nutrients, allergens, and diets when we request a food's information.
### Cons
- We are denormalising our data, which will result in a wide table with a large quantity of NULLs.
- Due to denormalisation, we will also have _more_ duplicated data (e.g. a change in name will mean a new version that has the same nutrient values as the one above it). Duplication already happens to some degree.
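For illustration, the hash in the proposal could be computed in PostgreSQL like this (a sketch: `md5` over a delimiter-joined rendering of the versioned fields is one common choice, and the column list is only an example):

```sql
-- Change-detection hash per food over the versioned fields
-- (all NOT NULL in the DDL above, so no NULL handling is needed here).
SELECT f.code,
       md5(concat_ws('|', fv.name, fv.quantity::text,
                          fv.quantity_unit_id, fv.ingredients)) AS row_hash
FROM foods f
JOIN food_versions fv ON fv.food_id = f.id;
```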
Check12
(65 rep)
Nov 18, 2023, 03:47 PM
• Last activity: May 25, 2025, 05:00 PM
0
votes
1
answers
262
views
Is it good to save Keywords stored by users in a new DB schema?
My question is more about the schema design and the future performance of the DB.
I'm designing a DB schema which will store strings primarily. Currently, users work on these strings, as normal mortals, in macro-enabled Excel workbooks, and there they have one column specifically designed to store keywords so they can quickly find words within a second column.
Some of these words are obviously deduced. For example, if column 1 is the keywords column and column 2 is the text column, column 2 could say "information required for feature description" and its keywords in column 1 would be "information", "feature", and "description".
So far so good, common sense. But there are texts that have, for example, "N", "S", "W", "E" as the four points of a compass, but in the keyword column, instead of having "North", "South" and so on, the keyword for all of them is just "compass".
We could ask for a keyword change, but it would be a little troublesome. So my question is: is it worth saving these keywords in the schema (whether there's a table for them or not), or is it better to get the keywords as the result of a query over the stored texts?
We know working with strings is difficult; would this affect performance in the future?
BTW, if it helps, I'm thinking of using PostgreSQL.
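Since PostgreSQL is under consideration: the "derive keywords by query" option has first-class support there via full-text search. A minimal sketch follows (table and column names assumed); note that hand-assigned keywords like "compass" cannot be derived this way and would still need their own column or table:

```sql
-- Derive searchable terms from the text itself instead of storing keywords.
CREATE TABLE entries (
  id   SERIAL PRIMARY KEY,
  body TEXT NOT NULL
);

CREATE INDEX entries_body_fts ON entries
  USING GIN (to_tsvector('english', body));

-- 'information' matches "information required for feature description":
SELECT id
FROM entries
WHERE to_tsvector('english', body) @@ plainto_tsquery('english', 'information');
```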
Edgar Sampere
(53 rep)
Aug 28, 2015, 02:44 PM
• Last activity: May 17, 2025, 10:07 PM
0
votes
1
answers
2605
views
Recover schema and the data in it from accidentally deleted schema
I did something careless: I deleted a schema. I have backups of the \mysql\...\data\refadmin folder, so I (hope) I have all the tables and data. That's backed up daily.
When I look at Workbench, I still have the connection, which works, and I can see the schema refadmin, but there are no tables or data.
I don't have any recent dumps.
The MySQL server is on Windows 2012 R2. The PC I am using is Windows 10 with Workbench 8. I am using MySQL 8.0.25, I believe.
Is there a way to recover the tables and data?
Thank you.
Citizen1138x
(19 rep)
Apr 26, 2022, 05:57 AM
• Last activity: May 11, 2025, 02:07 AM
1
votes
2
answers
2254
views
Copy a database to a new schema in an existing database
Just curious if there is a way to create a new schema that is based on an existing database.
Assume the existing database has the dbo schema only.
I will be taking an entire database and moving it into an existing database as a new schema within that database.
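Assuming SQL Server (the mention of dbo suggests it), the usual pattern is to create the schema in the target database and transfer each imported object into it; a sketch with an invented object name:

```sql
-- In the target database, after the source objects have been brought over:
CREATE SCHEMA legacy;
GO
ALTER SCHEMA legacy TRANSFER dbo.SomeImportedTable;
GO
```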
Geoff Dawdy
(1143 rep)
Jul 9, 2019, 09:29 PM
• Last activity: May 4, 2025, 05:02 PM
3
votes
2
answers
315
views
Schema for scarce goods
I have the following objects:
(Rooms), which are (available in a certain period of time [in days]).
During the time of availability they can be (booked) by (users). There are basically these four tables. I know that there is a foreign key in (booked) for (users); I also have a foreign key for (rooms) in (booking); and I do have a foreign key of the (room) in (availability). But how can I connect my (availability) table to avoid the good being booked in a period of time when it isn't available?
The referred question didn't lead to an answer in my case.
To summarize:
(Rooms) - (availability) - (booking) - (User)
I have a hard time figuring out how to set the foreign keys for this transaction.

empedokles
(171 rep)
Jul 21, 2015, 10:25 PM
• Last activity: Apr 30, 2025, 08:10 PM
3
votes
2
answers
309
views
How to design a database for search history with multiple facets?
I want to show users a history of their searches, which can often be complicated:
- **any** of these terms: `[a,b,c]`
- **all** of these terms: `[d,e,f]`
- **not** these colors: `[h,i,j]`
Since these searches are done through a UI (not a search input where they would type "`a OR b etc...`"), I assume I would need to store each value in a table.
Would it be overkill to create a table for each facet (`search_terms`, `search_colors`, etc.)? Or is there a simpler way?
| search_id | term(_id) | operator(enum)|
|:---------:|:------------:|:-------------:|
| 1 | blue | any |
| 1 | green | any |
| 1 | red | any |
| 1 | musthave | all |
| 1 | exclude_me | not |
| 1 | exclude_also | not |
Some other considerations are:
* The web URL can change over time, so I don't want to store that as an easy way out.
* The index/3rd-party service I'm searching could change, so I similarly can't save the query I make.
(The terms themselves can be their own table with a reference instead.)
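The single-table layout shown above, written out as DDL (a sketch; the enum spelling and lengths are assumptions):

```sql
CREATE TABLE search_facet (
  search_id INT UNSIGNED NOT NULL,
  term      VARCHAR(100) NOT NULL,            -- or a term_id referencing a terms table
  operator  ENUM('any','all','not') NOT NULL,
  PRIMARY KEY (search_id, term, operator)
);
```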
d-_-b
(131 rep)
Jun 3, 2019, 10:20 PM
• Last activity: Apr 29, 2025, 09:09 PM
1
votes
1
answers
565
views
Is there a way to specify which schema the database task "Export Data-tier Application" exports for the other entities besides the tables?
I noticed in the Export Data-tier Application database task, you can specify which schema to export data from in regards to tables, but I also want to exclude certain schemas for the other object types too such as views and stored procedures. Is this possible?
J.D.
(40893 rep)
Feb 13, 2020, 06:51 PM
• Last activity: Apr 26, 2025, 05:05 PM
1
votes
0
answers
29
views
Appropriate schema for a dimensional model of questionnaires
I am designing a snowflake (star, constellation) schema to model a questionnaire enabling aggregation of questions.
1) A template can have multiple questionnaires.
2) A questionnaire can have only one template.
3) A questionnaire can have multiple questions (multiple choice, rating, text).
4) A question can have multiple answers.
A (questionnaire) template can be altered. Answers do not change.
What is an appropriate schema?
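A conventional starting point, sketched as one fact table with dimensions for template, questionnaire, and question (all names are this editor's assumptions):

```sql
CREATE TABLE dim_template (
  template_id INT PRIMARY KEY,
  name        TEXT NOT NULL
);

CREATE TABLE dim_questionnaire (
  questionnaire_id INT PRIMARY KEY,
  template_id      INT NOT NULL REFERENCES dim_template (template_id)
);

CREATE TABLE dim_question (
  question_id   INT PRIMARY KEY,
  question_type TEXT NOT NULL    -- 'multiple choice' | 'rating' | 'text'
);

-- One row per answer; answers never change, which suits an append-only fact.
CREATE TABLE fact_answer (
  answer_id        INT PRIMARY KEY,
  questionnaire_id INT NOT NULL REFERENCES dim_questionnaire (questionnaire_id),
  question_id      INT NOT NULL REFERENCES dim_question (question_id),
  answer_value     TEXT NOT NULL
);
```

Since templates can be altered, dim_template is the natural place for a Type 2 slowly-changing-dimension treatment (an extra valid-from/valid-to pair), but that is left out of the sketch.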
Bennimi
(165 rep)
Apr 11, 2025, 12:28 PM
• Last activity: Apr 16, 2025, 01:39 PM
Showing page 1 of 20