
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
265 views
Is it good to save Keywords stored by users in a new DB schema?
My question is about schema design and the future performance of the DB. I'm designing a DB schema which will primarily store strings. Users currently work on these strings, as normal mortals, in macro-enabled Excel workbooks, where one column is specifically designed to store keywords so they can quickly find words within a second column. Some of these keywords are obviously derived. For example, if column 1 is the keywords column and column 2 is the text column, column 2 could say "information required for feature description", and its keywords in column 1 would be "information", "feature" and "description".

So far so good, common sense. But there are texts that have, for example, "N", "S", "W", "E" as the four points of a compass, yet in the keyword column, instead of having "North", "South" and so on, the keyword for all of them is just "compass". We could ask for a keyword change, but it would be a little troublesome.

So my question is: is it worth saving these keywords in the schema, whether there's a table for them or not, or is it better to derive the keywords as the result of a query over the stored texts? We know working with strings is difficult; would this affect performance in the future? BTW, if it helps, I'm thinking of using PostgreSQL.
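A minimal PostgreSQL sketch of both options side by side; table and column names here are hypothetical, not from the question. Derived keywords come from full-text search (generated columns need PostgreSQL 12+), while a separate table holds hand-picked overrides like "compass":

```sql
-- Hypothetical schema: the text plus an automatically derived keyword index.
CREATE TABLE texts (
    id       bigserial PRIMARY KEY,
    body     text NOT NULL,
    body_tsv tsvector GENERATED ALWAYS AS (to_tsvector('english', body)) STORED
);

-- Manual overrides such as "compass" for "N", "S", "W", "E".
CREATE TABLE text_keywords (
    text_id bigint REFERENCES texts(id),
    keyword text NOT NULL,
    PRIMARY KEY (text_id, keyword)
);

CREATE INDEX texts_tsv_idx ON texts USING gin (body_tsv);

-- Search matches either derived words or manual keywords:
SELECT t.id
FROM texts t
LEFT JOIN text_keywords k ON k.text_id = t.id
WHERE t.body_tsv @@ plainto_tsquery('english', 'compass')
   OR k.keyword = 'compass';
```

With this shape the derived keywords cost nothing to maintain, and only the non-obvious ones ("compass") need to be stored as rows.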
Edgar Sampere (53 rep)
Aug 28, 2015, 02:44 PM • Last activity: Sep 18, 2025, 03:03 PM
0 votes
1 answer
133 views
Associating transactions with payouts in marketplace
I'm developing a marketplace-style application that allows users to upload purchasable digital items -> the public purchases these items -> and for my application to pay the users (owners of items) their owed funds via PayPal Payouts API on a daily basis. I'm struggling with how best to calculate/store the owed balance, and how to map the individual purchase transaction records to the concept of a "payout" (when we send owed funds to the user).

Schema so far:

**User**
- id
- name
- createdAt
- etc.

**Purchasable Item**
- id
- user_id (owner)
- price
- createdAt
- etc.

**Transaction**
- id
- type ("purchase" or "payout")
- status (depending on PayPal response: COMPLETED, FAILED, REFUNDED, etc.)
- value (integer, in the lowest denomination of the currency; positive for a purchase, negative for a payout)
- purchasable_id (for "purchase" transactions, the ID of the purchasable item that was purchased)
- transaction_fee
- createdAt
- **payout_id** (?) The ID of the payout (below) this purchase is included in. Not sure about this. It won't be known at the time of the transaction, so it would need to be updated later, and I'm not sure how to know which transaction will belong in which payout.

**Payout**
Not sure about this. It feels like a duplicate of a payout transaction entry, but I want a way to store which purchase transactions were paid out in which payouts.
- id
- status (depending on PayPal response to the Payout API webhook: COMPLETED, FAILED, REFUNDED, etc.)
- createdAt

**Logic:** This is where I need the most help.

**CRON job, every 24 hrs:**
- Calculate each user's balance by summing the payout_balance_change fields of the Transactions table, i.e. the balance isn't stored, it's always calculated. **Is that a good idea?**
- Insert a row into "Transactions" of type "payout" with a negative "payout_balance_change", i.e. subtracting the amount we will send in the payout, zeroing their balance in the Transactions table.
- Insert a row into the "Payouts" table that stores the details of the payout attempt.

**Problems:**
- **How will I know which purchase transactions belong to each payout cycle** (so I can then store the payout_id in those transaction records)? I could use the date of the transaction, and each payout could cover the 24 hr period prior to the CRON job? I'm flexible on this and not sure what the most robust logic would be.

Any advice on how best to structure this, or links to similar projects, would be greatly appreciated. Thank you!
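One hedged sketch of a claim-then-pay approach: stamp the unpaid transactions with the new payout's id inside a single database transaction, so payout membership is explicit rather than date-based. This assumes a user_id column on transactions (or a join through purchasable_id) and a user_id column on payouts; the ids are illustrative:

```sql
BEGIN;

-- 1. Create the payout row; suppose it gets id 7.
INSERT INTO payouts (id, user_id, status) VALUES (7, 42, 'PENDING');

-- 2. Claim every completed, not-yet-paid purchase for user 42.
UPDATE transactions
SET payout_id = 7
WHERE user_id = 42
  AND type = 'purchase'
  AND status = 'COMPLETED'
  AND payout_id IS NULL;

-- 3. The amount to send is derived from the claimed rows, not from dates.
SELECT COALESCE(SUM(value - transaction_fee), 0) AS amount_to_send
FROM transactions
WHERE payout_id = 7;

COMMIT;
```

Because the UPDATE only touches rows with `payout_id IS NULL`, late-arriving purchases simply wait for the next cycle, and no cutoff-timestamp logic is needed.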
Jim Stevens (1 rep)
Jun 6, 2022, 06:59 AM • Last activity: Sep 18, 2025, 11:08 AM
13 votes
2 answers
15961 views
Postgres, benefit of named schema vs public schema?
I am new to Postgres and I am trying to understand how schemas fit into everything and what is the benefit of a non-'public' schema. For example, does it provide security, access control benefits or something else? The setup I am working with is one where postgres owns the database and tables, but grants permissions to another user to select, insert, update, and delete. Would creating the tables in the 'public' schema vs 'myotherschema' make much of a difference?
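A sketch of the access-control benefit a named schema gives you; the role and schema names are illustrative:

```sql
-- A dedicated schema lets you scope privileges cleanly.
CREATE SCHEMA myotherschema AUTHORIZATION postgres;

-- Lock down public (before PostgreSQL 15, any role may create objects there).
REVOKE CREATE ON SCHEMA public FROM PUBLIC;

GRANT USAGE ON SCHEMA myotherschema TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE
    ON ALL TABLES IN SCHEMA myotherschema TO app_user;

-- Cover tables created later, too.
ALTER DEFAULT PRIVILEGES IN SCHEMA myotherschema
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_user;
```

Beyond permissions, schemas give you namespacing (two apps can each have a `users` table in one database) and make dumping, restoring, or dropping one app's objects a single-schema operation.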
Andre M (315 rep)
Oct 19, 2017, 05:24 PM • Last activity: Sep 18, 2025, 07:41 AM
0 votes
1 answer
2642 views
Recover a schema and its data after accidentally deleting the schema
I did something careless: I deleted a schema. I have backups of the \mysql\...\data\refadmin folder, so I hope I have all the tables and data; that's backed up daily. When I look at Workbench, I still have the connection, which works, and I can see the schema refadmin, but there are no tables or data. I don't have any recent dumps. The MySQL server is on Windows 2012 R2. The PC I am using is Windows 10 with Workbench 8. I am using MySQL 8.0.25, I believe. Is there a way to recover the tables and data? Thank you.
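If the backup contains the original .ibd files and the server runs with file-per-table tablespaces (the default in MySQL 8), a per-table restore along these lines may work. This is a hedged sketch, not a guaranteed recovery path, and it requires each table's original CREATE TABLE statement (from old dumps, migrations, or documentation):

```sql
-- Recreate the table with its original definition.
CREATE TABLE refadmin.mytable ( /* original column definitions */ ) ENGINE=InnoDB;

-- Detach the freshly created, empty tablespace...
ALTER TABLE refadmin.mytable DISCARD TABLESPACE;

-- ...copy mytable.ibd from the backup into the server's data\refadmin\
-- directory (matching the service account's file permissions), then:
ALTER TABLE refadmin.mytable IMPORT TABLESPACE;
```

Practice on a scratch server first; a mismatched table definition will make the IMPORT fail.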
Citizen1138x (19 rep)
Apr 26, 2022, 05:57 AM • Last activity: Sep 16, 2025, 04:10 AM
1 vote
2 answers
2264 views
Copy a database to a new schema in an existing database
Just curious if there is a way to create a new schema that is based on an existing database. Assuming the existing database has dbo schema only. I will be taking an entire database and moving it to an existing database as a new schema within that database.
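A sketch of one way to do it in SQL Server: copy the objects into the target database (for example via the Import/Export wizard), then generate ALTER SCHEMA ... TRANSFER statements for everything in dbo. The schema name is illustrative:

```sql
CREATE SCHEMA legacy_app;
GO

-- Generate one TRANSFER statement per dbo table; review, then run them.
SELECT 'ALTER SCHEMA legacy_app TRANSFER dbo.' + QUOTENAME(name) + ';'
FROM sys.tables
WHERE schema_id = SCHEMA_ID('dbo');
```

The same pattern works against sys.views and sys.procedures if those need to move as well.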
Geoff Dawdy (1143 rep)
Jul 9, 2019, 09:29 PM • Last activity: Sep 9, 2025, 03:02 PM
1 vote
0 answers
1361 views
Am I on the right track for a customer user login database design?
I am designing a back-end database that will handle registration and login. I need to be able to deal with different account and authentication types. The main issue here, which differs from other database designs for user login, is that a user could have a password to log in, a smart card, both, or a combination of others, and I am unsure about null values etc.

- Imagine a user registers their company online (on my website): the company details go into my company table, and their details go into my user table.
- They then get the "account type" of company administrator (account type table).
- That user (company administrator) can now log into their company account online and register multiple users (in their company).

The issues I have are below, and hopefully my design solves these, but I am not sure if I am on the right track.

- Each user could have either a password or a smart card to log in. I have the relationships for the password and smart card tables, but this means there will be many null values: a user will have either a password or a smart card.
- Just to clarify my needs: when a user registers the company, they become a company administrator, but they could also set up other administrators and/or users, so that's why I have them all in different tables.
- I have added further tables, as each user must have one or more forms of ID checked, along with being provided one or more access codes that grant a range of access levels (physical and computer-resource based).
- I am mainly concerned that I have the right type of database normalization AND relationship design.
- Lastly, the Smart Card table will have multiple smart card entries that I create, and I need to be able to assign one or more to a company; then, the company administrator can assign one or more of these to a user belonging to that company.

Questions:

- Are my relations OK for this?
- What about null values until I have the smart cards assigned; is this OK?
- Any help and guidance would be great.

And just to confirm, this will be a back-end SQL database on my server, accessed by an online web app AND end-user desktop applications written in C#.

[ER diagram image]
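A sketch of the credential-per-table idea the question is circling, which keeps NULLs off the user row entirely; all names are illustrative:

```sql
CREATE TABLE app_user (
    id         bigint PRIMARY KEY,
    company_id bigint NOT NULL,
    name       varchar(100) NOT NULL
);

-- One row only for users who have a password.
CREATE TABLE password_credential (
    user_id       bigint PRIMARY KEY REFERENCES app_user(id),
    password_hash varbinary(64) NOT NULL
);

-- Cards exist before they are issued, so both assignments are nullable here.
CREATE TABLE smart_card (
    id         bigint PRIMARY KEY,
    company_id bigint NULL,                        -- NULL until assigned to a company
    user_id    bigint NULL REFERENCES app_user(id) -- NULL until assigned to a user
);
```

With this shape, "user has no smart card" is the absence of a row rather than a NULL column, and an unassigned card is an ordinary row, which is expected rather than a design smell.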
Stephen D (31 rep)
Jan 25, 2019, 11:20 PM • Last activity: Sep 4, 2025, 12:45 PM
3 votes
2 answers
315 views
How to design a database for search history with multiple facets?
I want to show users a history of their searches, which can often be complicated:

- **any** of these terms: `[a,b,c]`
- **all** of these terms: `[d,e,f]`
- **not** these colors: `[h,i,j]`

Since these searches are done through a UI (not a search input where they would type "`a OR b etc...`"), I assume I would need to store each value in a table. Would it be overkill to create a table for each facet ("search_terms", "search_colors", etc.), or is there a simpler way?
| search_id |   term(_id)  | operator(enum)|
|:---------:|:------------:|:-------------:|
| 1         | blue         |     any       |
| 1         | green        |     any       |
| 1         | red          |     any       |
| 1         | musthave     |     all       |
| 1         | exclude_me   |     not       |
| 1         | exclude_also |     not       |
Some other considerations are:
* The web URL can change over time, so I don't want to store that as an easy way out.
* The index/3rd-party service I'm searching could change, so I similarly can't save the query I make. (The terms themselves can be their own table with a reference instead.)
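A sketch of a single criteria table covering all facets, rather than one table per facet; names and types are illustrative:

```sql
CREATE TABLE search (
    id         bigint PRIMARY KEY,
    user_id    bigint NOT NULL,
    created_at timestamp NOT NULL
);

CREATE TABLE search_criterion (
    search_id bigint NOT NULL REFERENCES search(id),
    facet     varchar(20) NOT NULL,  -- 'term', 'color', ...
    operator  varchar(10) NOT NULL,  -- 'any', 'all', 'not'
    value_id  bigint NOT NULL,       -- FK into a terms/colors lookup table
    PRIMARY KEY (search_id, facet, operator, value_id)
);
```

Adding a new facet is then a new `facet` value, not a new table, which matches the "the service could change" consideration above.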
d-_-b (131 rep)
Jun 3, 2019, 10:20 PM • Last activity: Aug 30, 2025, 08:07 AM
1 vote
1 answer
576 views
Is there a way to specify which schema the database task "Export Data-tier Application" exports for the other entities besides the tables?
I noticed in the Export Data-tier Application database task, you can specify which schema to export data from in regards to tables, but I also want to exclude certain schemas for the other object types too such as views and stored procedures. Is this possible?
J.D. (41063 rep)
Feb 13, 2020, 06:51 PM • Last activity: Aug 29, 2025, 04:07 PM
0 votes
2 answers
137 views
Database layout to keep track of the "freshness" of a column value in a specific row
I have a `Person` with columns like `name`, `address` and `bio`. How do I store data in order to know the "freshness" of these columns? For instance, *"for Person Y, the name changed 2 days ago, and the location 2 weeks ago"*. Is this simply a versioning problem?
```
Person v1 - name: A1, address: B1, bio: C1, updated_at: 4.days.ago
Person v2 - name: A2, address: B1, bio: C1, updated_at: 3.days.ago
Person v3 - name: A2, address: B1, bio: C2, updated_at: 1.days.ago
```
And afterwards I look through the different versions to find the last one with a different column value? Or is there a better way to handle this? Put the freshness data in another table? Or even another DB? I will normally use MySQL, unless this can be solved by means of another database management system. Regards.

### Additional details

Thank you for your comments and answers, but I was forgetting something: I actually need to know not only when a column value was updated, but by whom as well. Something like *"updated at XX by User Y"*.
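A sketch of the "freshness data in another table" option for MySQL, which also covers the "by whom" requirement from the update; all names are illustrative, and the rows could be written by the application or by triggers:

```sql
-- One audit row per changed column.
CREATE TABLE person_change (
    id          bigint NOT NULL AUTO_INCREMENT PRIMARY KEY,
    person_id   bigint NOT NULL,
    column_name varchar(64) NOT NULL,   -- 'name', 'address', 'bio'
    old_value   text,
    new_value   text,
    changed_by  bigint NOT NULL,        -- the user who made the change
    changed_at  datetime NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- "Freshness" of each column for person 42, no version-walking needed:
SELECT column_name, MAX(changed_at) AS last_changed
FROM person_change
WHERE person_id = 42
GROUP BY column_name;
```

Compared with whole-row versioning, this stores only what changed, and the per-column answer is a single GROUP BY instead of a scan backwards through versions.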
den (1 rep)
Aug 8, 2019, 01:49 PM • Last activity: Aug 29, 2025, 02:08 AM
3 votes
1 answer
1706 views
SQL Server 2016 - Moving table across schemas - Error: Cannot transfer the object
Before posting this, I searched both on Stack Overflow and the web in general. My task is simple: I would like to move table A from schema X to schema Y. When trying it out like this:

ALTER SCHEMA Y TRANSFER X.A

I get the following error:

> Cannot transfer the object 'A', because it does not exist or you do not have permission.

* I checked multiple times that both the schemas and the table exist.
* Environment: SQL Server 2016.
* Table A was copied over from a different database on a different server, using the SQL Server Export/Import wizard.
* I have similar permissions on both schemas.
* All permissions are inherited via an Active Directory group.
* Earlier in the day, I executed the same command for a different table and it worked perfectly fine.
* Table A contains nearly 280 million rows, so copying via a temp table or table scripting is not efficient.
* The MSDN article mentions that "*To transfer a securable from another schema, the current user must have CONTROL permission on the securable (not schema) and ALTER permission on the target schema*" (https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-schema-transact-sql?view=sql-server-ver15). I am not sure how to check for CONTROL permission. I tried sp_table_privileges and fn_my_permissions and could not find anything useful.

What am I missing?
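To check the two permissions the documentation names, something along these lines works (HAS_PERMS_BY_NAME and fn_my_permissions are built in):

```sql
-- 1 = you hold the permission, 0 = you don't, NULL = name didn't resolve.
SELECT HAS_PERMS_BY_NAME('X.A', 'OBJECT', 'CONTROL') AS control_on_table,
       HAS_PERMS_BY_NAME('Y',   'SCHEMA', 'ALTER')   AS alter_on_target_schema;

-- Lists every effective permission you hold on the table:
SELECT * FROM fn_my_permissions('X.A', 'OBJECT');
```

A NULL from the first query is itself diagnostic: it usually means the name doesn't resolve as typed, for example because of case sensitivity or a mismatched schema prefix.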
Venkat (131 rep)
Jul 27, 2021, 04:30 PM • Last activity: Aug 25, 2025, 09:04 PM
0 votes
0 answers
25 views
Location Hierarchy representing production rooms or first-class entity?
I'm struggling to figure out the best way to model Production Rooms in our manufacturing ERP app. Currently, we have a Locations model, which is a hierarchical, materialized-path (Ancestry gem) implementation, and some of those locations represent the location of the production room for purposes of moving pallets/inventory around.

```sql
CREATE TABLE public.locations (
    id bigint NOT NULL,
    name character varying,
    code character varying,
    description character varying,
    created_at timestamp(6) without time zone NOT NULL,
    updated_at timestamp(6) without time zone NOT NULL,
    ancestry character varying,
    depth_inches integer,
    height_inches integer,
    width_inches integer,
    site_id bigint,
    kind character varying,
    ancestry_depth integer DEFAULT 0 NOT NULL,
    is_active boolean DEFAULT true NOT NULL
);
```

Should locations represent the production rooms by having a type column with a value of production_room, or should I have a Production Room model/table that references the locations?
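A sketch of the second option, a first-class Production Room entity that points at its physical location; the table and columns are illustrative, and it assumes locations.id is the primary key:

```sql
CREATE TABLE production_rooms (
    id          bigint PRIMARY KEY,
    location_id bigint NOT NULL UNIQUE REFERENCES public.locations(id),
    name        character varying NOT NULL,
    is_active   boolean DEFAULT true NOT NULL
    -- room-specific attributes (capacity, certifications, ...) live here,
    -- instead of widening the generic locations table for every location
);
```

The usual deciding question: if production rooms will grow their own attributes and relationships (schedules, equipment, batches), a separate table keeps those off the generic hierarchy; if a room is only ever a label on a node, the existing `kind` column already covers it.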
karns (103 rep)
Aug 25, 2025, 01:12 PM
-1 votes
1 answer
716 views
How to implement database schema for tracking the status of a purchase?
It's an internet shop: physical products with delivery. I'm trying to figure out how to implement transactions/payments, fulfilment, shipping, refunds, etc.; namely, a status for each sale.

**Approach #1:** create a column "status" in "sales". It could have the following values: new, payment_pending, paid, fulfilment_started, fulfilled, shipping_started, shipping_in_progress, delivered_successfully. It may also contain: refund_claimed, refund_in_progress, refunded. This isn't the complete list of statuses.

**Approach #2:** create tables for each process:

- payment_logs (id, sales_id, status, details)
- shipping_logs (id, sales_id, name, status, details)
- refund_logs (id, sales_id, status, details)

Approach #2 is more flexible. Yet there's a downside: how would I know the current global status of a sale? Is it being delivered? Is it still being fulfilled? Or is a payment still pending for it, meaning it hasn't even proceeded to fulfilment, let alone delivery? To solve that, I'd have to introduce logic to this approach: search in "payment_logs". Found and successful? Proceed to "fulfilment_logs". Found, but not yet "successfully_fulfilled"? Then the global status of the "sale" is whatever the current status in "fulfilment_logs" is.

Will approach #2 work nicely? Are there any other downsides? What would you propose?
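A third option worth comparing (a sketch, not from the question): a single append-only status log, which keeps approach #2's full history while making the global status one query; names are illustrative:

```sql
CREATE TABLE sale_status_log (
    id         bigint PRIMARY KEY,
    sale_id    bigint NOT NULL,
    status     varchar(40) NOT NULL,  -- 'paid', 'fulfilled', 'refunded', ...
    details    text,
    created_at timestamp NOT NULL
);

-- Current global status of every sale = its most recent log row.
SELECT l.sale_id, l.status
FROM sale_status_log l
JOIN (
    SELECT sale_id, MAX(created_at) AS latest
    FROM sale_status_log
    GROUP BY sale_id
) last ON last.sale_id = l.sale_id AND last.latest = l.created_at;
```

Many systems combine this with approach #1 by also caching the latest status as a column on `sales`, updated whenever a log row is inserted, so list screens avoid the aggregate entirely.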
Dajaku (39 rep)
May 7, 2020, 07:30 AM • Last activity: Aug 16, 2025, 11:06 PM
1 vote
1 answer
392 views
How to handle a table schema with a new relationship subtype in an interviews scenario?
I have a table of "interviews" and "interview reviews" for both the interviewer and the interviewee. When the project was started there was only the need for one type of review, which is shown below.

[ER diagram image]

But now we have added a new type of interview, which would require different types of reviews based on the interview type. I have added a many-to-many (M:N) relationship to store the interview type, but I am stumped as to the best way to go about adding multiple review types...

[ER diagram image]

One solution I thought of was to add another table for reviews with a many-to-one (M:1) relationship, which would contain all the data for the new review type. This would mean that the application would have to conditionally look for the review based on the type of interview, and it would lead to blank columns for all the unused review types... How would you go about structuring this?
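A sketch of the usual supertype/subtype answer: shared columns in one reviews table, type-specific columns in satellite tables, so no row carries blank columns for other types. All names are illustrative:

```sql
CREATE TABLE review (
    id           bigint PRIMARY KEY,
    interview_id bigint NOT NULL,
    reviewer_id  bigint NOT NULL,
    review_type  varchar(30) NOT NULL,  -- matches the interview type
    created_at   timestamp NOT NULL
);

-- One satellite table per review type, 1:1 with the base row.
CREATE TABLE review_technical (
    review_id    bigint PRIMARY KEY REFERENCES review(id),
    coding_score int NOT NULL
);

CREATE TABLE review_behavioral (
    review_id          bigint PRIMARY KEY REFERENCES review(id),
    communication_note text
);
```

The application still branches on `review_type`, but only to pick which satellite table to join; adding a review type is a new table, not new nullable columns.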
Joff (109 rep)
Sep 13, 2018, 02:24 AM • Last activity: Aug 16, 2025, 10:06 AM
4 votes
1 answer
846 views
Work Packages entity: Relational vs NoSQL Database Design? Should it reference itself?
This database holds assembly-instruction data for shipbuilding, later used in an Augmented Reality Android application. The application asks the server (Node.JS) for the necessary files. Work Packages contain several Parts that must be mounted, and each of these parts has the files needed by the application.

[Database structure diagram]

Some Work Packages depend on another (e.g. Work Package B1). They have a mount order, and it is possible to assign a worker to an individual work package, or to see if the worker has the qualifications to mount the work package, by using a Type attribute.

Relational Database Implementation:

[Relational database diagram]

NoSQL Database Implementation:

Workers:

```json
{ "username": "worker1", "password": "pass1", "type": ["electrical", "general"] },
{ "username": "worker2", "password": "pass2", "type": ["pipes", "general", "welding"] }
```

Work Packages:

```json
{
  "id": 2,
  "Name": "2",
  "MountOrder": 2,
  "PackageDependencies": "(see referenced diagram)"
},
{
  "id": 1,
  "Name": "1",
  "MountOrder": 1,
  "Type": "Electrical",
  "Worker": ["worker1"],
  "Parts": [
    {
      "Name": "part1",
      "Files": [
        { "Type": "obj", "File": "BLOB.obj" },
        { "Type": "stl", "File": "BLOB.stl" }
      ]
    },
    {
      "Name": "part2",
      "Files": [
        { "Type": "obj", "File": "BLOB.obj" },
        { "Type": "stl", "File": "BLOB.stl" }
      ]
    }
  ]
}
```

Which implementation is better, and why? Why should or shouldn't a Work Package reference the Work Packages above it? There will be substantial reads but considerably fewer writes.
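For the relational side, the dependency question usually comes down to a self-referencing link table; a sketch with illustrative names:

```sql
CREATE TABLE work_package (
    id          bigint PRIMARY KEY,
    name        varchar(50) NOT NULL,
    mount_order int NOT NULL,
    type        varchar(30)
);

-- "Package X depends on package Y", any number of each.
CREATE TABLE work_package_dependency (
    package_id    bigint NOT NULL REFERENCES work_package(id),
    depends_on_id bigint NOT NULL REFERENCES work_package(id),
    PRIMARY KEY (package_id, depends_on_id),
    CHECK (package_id <> depends_on_id)  -- no self-dependency
);
```

A link table like this keeps the dependency graph queryable in both directions (what does X need, and what is blocked by X), which is awkward to do with dependency arrays embedded in documents.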
Rui Rebelo Brito (51 rep)
Aug 31, 2015, 10:43 PM • Last activity: Aug 13, 2025, 07:12 PM
2 votes
1 answer
508 views
Handling unique IDs on client side
Reading around on the topic hasn't got me closer to understanding best practices. I'm also relatively new to and self-taught in programming, so I might have some questions about how to actually execute the solution.

The app I'm building requires a lot of DOM elements to be generated based on database rows. These should be updatable by the user, and at the minute I'm using the auto-incrementing primary key as an HTML attribute id. Obviously I don't want users to modify this, or the associated JavaScript object, and then affect other users' data when saved or updated. I have considered the following options:

1. Not using auto-increment but a UUID or GUID, but I've read these are not all that advisable, especially for DB performance.
2. Concealing the client-side id by hashing it on the server side when served. This also doesn't sound particularly performant.
3. Having a 'lookup table' that is referenced and produces another, non-incremental id. Doesn't seem like a bad idea, but I'm a bit unsure how best to go about generating this and how it should look.
4. Including an extra column in the tables that would be updated, containing for example the user's name, which would be combined with the primary key. This seems simple enough, but the idea of adding this extra column to every table that might be updated sounds redundant, not normalized, etc.
5. Keeping the primary keys visible but producing a session table for the user that tracks which IDs were sent out and therefore which can be updated. New rows can be generated so long as rows belonging to other users aren't updated.

Or maybe I'm barking up the wrong trees here :) Thanks!
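A common middle ground between options 1 and 2 (a sketch, not from the question): keep the auto-increment key internal and expose a separate random identifier to the client, while the real safeguard remains the server-side ownership check on every write:

```sql
-- Internal bigint PK for joins and clustering; opaque public id for the DOM.
CREATE TABLE item (
    id        bigint NOT NULL AUTO_INCREMENT PRIMARY KEY,
    public_id char(36) NOT NULL UNIQUE,  -- e.g. a UUID generated by the app
    user_id   bigint NOT NULL,
    payload   text
);

-- Updates always verify ownership, whatever id the client sends:
UPDATE item
SET payload = ?
WHERE public_id = ?   -- value taken from the DOM
  AND user_id   = ?;  -- taken from the authenticated session, never the client
```

This sidesteps the UUID-as-primary-key performance concern from option 1, because the random value is only a secondary unique index, not the clustering key; and no amount of id guessing helps an attacker, because the `user_id` predicate is what actually protects other users' rows.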
Mikey B (21 rep)
Sep 1, 2018, 12:39 PM • Last activity: Aug 10, 2025, 01:06 PM
0 votes
1 answer
3256 views
Understanding a double entry accounting DB Schema
This question is related to a previous question, "Why the Accounting DB Schemas do not allow to maintain a clean Products Table?", asked on this network by someone else. Assuming that

> the database schema of the vast majority of Accounting Systems follows this architecture

is true, and that most DB administrators seem to agree, given the lack of counter-arguments on said previous question, and since I am also trying to understand double-entry bookkeeping database principles (without having used many myself), I have specific questions about this schema.

*Disclaimer: these are twelve questions. I think it would be impractical to post twelve separate questions on Stack Exchange, so I am posting them as one question for the purpose of understanding this accounting schema.*

(Here is the image again for reference.)

[Schema diagram: "Schema of vast majority of Accounting Systems"]

1. The account table contains records of what general accounting understands as ledger accounts (such as rent, internet, sales, purchases, ...), correct?
2. The journal table contains records of what general accounting understands as journal entries (such as: John paid 100 GBP cash into the bank account; Jane agreed invoice 123 at 50 GBP; James purchased a desk for 100 GBP; ...), correct?
3. Why does the journal table have default_credit_account_id and default_debit_account_id, which is what I would believe the ledger account would prescribe, not a journal entry?
4. The account table records contain credit, debit and balance fields, which I assume are derived totals, calculated from the sum of related journal entries, correct?
5. A bank statement (such as one that you download as CSV from online banking) has debit, credit and balance columns. Each bank statement row matches (in principle) a record in the move table, correct?
6. Each paper bank statement item (matched with a matching move record) can be split into multiple transaction purposes (e.g. James pays 100 GBP into the bank for 5 separate past invoices), and hence the move_line table splits a move record (a bank statement row) into parts that match the intended journal records, correct?
7. How is a move_line debit/credit related? I understand that this is a bit more intricate, as a bank credit/debit is not necessarily the same (credit or debit) in the company; it depends on what sort of ledger account the move_line record is associated with. Is this thought correct?
8. If the choice of debit/credit inside a move_line record is related to its associated account record (via account_id), then what is the purpose of default_credit_account_id and default_debit_account_id (to which the move_line record is related)?
9. Given that there is a balance field in the move_line table, and that the balance depends on previous move_line records (related to the same move record), what defines the order of move_line records? It seems to me that date_posted for all move_line records would be the same as for the related move record. Even if that is not the case, a date without a time seems a bit... coarse for ordering?
10. Why are all currency fields defined with 4 decimal places instead of 2 decimal places corresponding to cents? Does this have to do with database float imprecision?
11. If accounting software often needs to summarize the debit and credit fields of large sets of records, wouldn't it be better to define the debit and credit fields as unsigned integers in units of cents, instead of computationally expensive floats?
12. The meaning of a fiscalyear record is understood to be, e.g., an accounting financial year for the purposes of accounting taxation years. Why is the period table there? If it is to sub-divide a fiscal year into months or weeks, why isn't there a period_name field? What is the intention of this period table?
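On question 4, a sketch of how those derived totals would typically be computed from the schema's own tables rather than read from stored columns; column names follow the question's description of the schema:

```sql
-- Derive each account's debit, credit and balance from its move lines.
SELECT a.id,
       a.name,
       SUM(ml.debit)                  AS total_debit,
       SUM(ml.credit)                 AS total_credit,
       SUM(ml.debit) - SUM(ml.credit) AS balance
FROM account a
JOIN move_line ml ON ml.account_id = a.id
GROUP BY a.id, a.name;
```

If the account table also stores these values as columns, they are best treated as a cache of this query, refreshed or verified against it, since the move lines remain the source of truth in double-entry designs.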
Florian Mertens (109 rep)
Dec 27, 2021, 04:09 PM • Last activity: Aug 7, 2025, 03:04 PM
2 votes
1 answer
1475 views
How to model a database for prices that vary depending on their categories and parameters?
My first post on DBA! Apologies in advance if I made any mistake. Before jumping into the schema and tables, I would like to share what I am trying to achieve first. I am working on sort of a courier application, where I have some categories and each category has a pre-defined price. But determining the price is a bit ugly (absence of symmetry and patterns; at least, I can't seem to find any). I will give you an example. Consider the following categories: Document, Heavy Document, Laptop, Carton, Heavy Carton.

1) **Document:** For lighter documents, below 0.5 kg. The price is 20$, fixed. *[price stored in the prices table: 20.00]*

> e.g. For an item of 300 g, the price will be 20$.

2) **Heavy Document:** For documents over 0.5 kg. Unlike the Document category, it doesn't have a fixed price; rather, it has a unit price: 10$ per kg, applied to each kg exceeding 0.5 kg. *[price stored in the prices table: 10.00]*

> e.g. For an item of 2 kg, the price will be 35$ (1.5 kg over the limit = 15$, plus 20$ for the first 0.5 kg = 35$).

3) **Laptop:** Straightforward: 100$. Nothing special about it, no constraint whatsoever. *[price stored in the prices table: 100.00]*

> e.g. For a laptop, the price will be 100$ regardless of weight.

4) **Carton:** Here comes an interesting one. Until now, there was only one dependency: weight. This one has an additional dependency: dimension. It is somewhat similar to the Document category: it applies to cartons below 3 cubic feet (CF), and the price is 80$ per CF. The difference between the Document and Carton categories is that Document has a fixed price, whereas Carton has a unit price. But wait, there's more. There is an additional constraint: a dimension-weight ratio, in this case 7 kg per CF. If the item's weight crosses the ratio, each extra kg is charged 5$. It's confusing, I know; an example might help. *[price stored in the prices table: 80.00]*

> e.g. For a carton of 80 kg and 2 CF, the price will be 490$. Here is how: first calculate the regular charge: 80$ x 2 CF = 160$. Now figure out if it crosses the **ratio**: since 1 CF = 7 kg, 2 CF = 14 kg. The item's weight is 80 kg, so it *crosses the ratio (14 kg)*. Since it crosses the ratio, each extra kg (80 - 14 = 66 kg) costs 5$: 66 x 5 = 330$. Adding the regular charge: 330$ + 160$ = 490$.

5) **Heavy Carton:** This one is for cartons with a dimension bigger than 3 CF. The difference from Carton is the unit price: Heavy Carton is 60$ per CF. *[price stored in the prices table: 60.00]*

> e.g. For a carton of 80 kg and 5 CF, the price will be 525$. Here is how: first calculate the regular charge: 60$ x 5 CF = 300$. Now figure out if it crosses the **ratio**: since 1 CF = 7 kg, 5 CF = 35 kg. The item's weight is 80 kg, so it *crosses the ratio (35 kg)*. Since it crosses the ratio, each extra kg (80 - 35 = 45 kg) costs 5$: 45 x 5 = 225$. Adding the regular charge: 300$ + 225$ = 525$.

If you've read this far, I think I have convinced you that the business structure is really complicated.
Now let's take a look at my categories schema:

```
+-------------------------+---------------------------------+------+-----+---------+----------------+
| Field                   | Type                            | Null | Key | Default | Extra          |
+-------------------------+---------------------------------+------+-----+---------+----------------+
| id                      | int(10) unsigned                | NO   | PRI | NULL    | auto_increment |
| name                    | varchar(191)                    | NO   |     | NULL    |                |
| created_at              | timestamp                       | YES  |     | NULL    |                |
| updated_at              | timestamp                       | YES  |     | NULL    |                |
| dim_dependency          | tinyint(1)                      | NO   |     | NULL    |                |
| weight_dependency       | tinyint(1)                      | NO   |     | NULL    |                |
| distance_dependency     | tinyint(1)                      | NO   |     | NULL    |                |
| dim_weight_ratio        | varchar(191)                    | YES  |     | NULL    |                |
| constraint_value        | decimal(8,2)                    | YES  |     | NULL    |                |
| constraint_on           | enum('weight','dim')            | YES  |     | NULL    |                |
| size                    | enum('short','regular','large') | YES  |     | regular |                |
| over_ratio_price_per_kg | decimal(8,2)                    | YES  |     | NULL    |                |
| deleted_at              | timestamp                       | YES  |     | NULL    |                |
+-------------------------+---------------------------------+------+-----+---------+----------------+
```

Also the schema of the prices table (it's a polymorphic table; I'm hoping to create a subcategories table someday):

```
+----------------+---------------------+------+-----+---------+----------------+
| Field          | Type                | Null | Key | Default | Extra          |
+----------------+---------------------+------+-----+---------+----------------+
| id             | int(10) unsigned    | NO   | PRI | NULL    | auto_increment |
| amount         | decimal(8,2)        | NO   |     | NULL    |                |
| created_at     | timestamp           | YES  |     | NULL    |                |
| updated_at     | timestamp           | YES  |     | NULL    |                |
| priceable_type | varchar(191)        | NO   | MUL | NULL    |                |
| priceable_id   | bigint(20) unsigned | NO   |     | NULL    |                |
| deleted_at     | timestamp           | YES  |     | NULL    |                |
+----------------+---------------------+------+-----+---------+----------------+
```

How can I improve this structure to keep things as dynamic and coherent as possible?
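A sketch of pushing the pricing rules fully into data; column names are illustrative, and the final SELECT replays the Carton example (80 kg, 2 CF) to show the shape of the rule:

```sql
CREATE TABLE pricing_rules (
    category_id       int unsigned PRIMARY KEY,
    base_amount       decimal(8,2) NOT NULL,                    -- 20.00, 10.00, ...
    base_unit         enum('fixed','per_kg','per_cf') NOT NULL,
    free_weight_kg    decimal(8,2) NULL,                        -- 0.50 for documents
    kg_per_cf_ratio   decimal(8,2) NULL,                        -- 7.00 for cartons
    over_ratio_per_kg decimal(8,2) NULL                         -- 5.00 for cartons
);

-- Carton, 80 kg, 2 CF: regular 80*2 plus over-ratio (80 - 2*7) * 5 = 490.
SELECT 80.00 * 2
     + GREATEST(80 - 2 * 7.00, 0) * 5.00 AS price;
```

Every category then becomes one row of parameters plugged into the same formula, so adding a category (or changing a rate) is data entry rather than a schema or code change.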
Eisenheim (121 rep)
Aug 11, 2018, 09:37 AM • Last activity: Aug 5, 2025, 08:06 AM
1 vote
1 answer
148 views
How should I design my database for an online voting web app?
Currently I'm trying to build a web app for online voting where a registered user can create their own election. For the database, every time a user creates an election, should the system create the required tables for that election, or could I just use a few tables to hold everything?

Case 1 (create the required tables for each election):

- user_table (id, name, email, etc.)
- election_table (user_id, election_id, etc.)

Per election created by a user, the tables below would be created in the database, with the electionId as part of the table name:

- voter_electionId_table (voter_id, name, email, etc.)
- candidate_electionId_table (candidate_id, no, name)
- ballot_electionId_table (vote_id, candidate_id)

Example expected result:

[Expected result diagram]

Case 2 (the same tables for every election):

- user_table (id, name, email, etc.)
- election_table (user_id, election_id, etc.)
- voter_table (voter_id, election_id, etc.)
- candidate_table (electionID, candidate_id, no, name, etc.)
- ballot_table (electionId, vote_id, candidate_id)

Expected result:

[Expected result diagram]

Which of the above is better? Or is neither of them good? (If so, please share your thoughts.)
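A sketch of case 2 with the keys made explicit; names follow the question. A fixed schema like this is usually preferred because per-election tables (case 1) rule out foreign keys and make any query across elections awkward:

```sql
CREATE TABLE election (
    election_id bigint PRIMARY KEY,
    user_id     bigint NOT NULL  -- the creator
);

CREATE TABLE candidate (
    candidate_id bigint PRIMARY KEY,
    election_id  bigint NOT NULL REFERENCES election(election_id),
    name         varchar(100) NOT NULL
);

CREATE TABLE ballot (
    vote_id      bigint PRIMARY KEY,
    election_id  bigint NOT NULL REFERENCES election(election_id),
    candidate_id bigint NOT NULL REFERENCES candidate(candidate_id)
);
```

Every election-scoped query simply filters on `election_id`; an index on that column gives you the isolation that separate tables were meant to provide, without the DDL-at-runtime headaches.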
xedox_za (11 rep)
Nov 29, 2023, 01:57 AM • Last activity: Aug 3, 2025, 06:07 AM
1 vote
1 answer
520 views
What is the impact of DROP SCHEMA on concurrent read only workloads in PostgreSQL?
I have an ELT process that deals with a pretty small amount of data. I would like to have one schema called prod with the results of the previous ELT run and one schema called dev where I will perform the transformations of the current ELT run. Once the current ELT run is complete, I would like to make the switch by running something along the lines of:

```sql
start transaction;
drop schema prod;
alter schema dev rename to prod;
end transaction;
```

All queries issued outside of the ELT process are read-only. How would MVCC handle this? What would happen to read-only queries that were running when the drop schema command is issued? What would happen to read-only queries issued after the drop schema command? What about after the end of the transaction?
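A sketch of the swap with the details PostgreSQL will actually require: dropping a non-empty schema needs CASCADE, and this is less an MVCC question than a locking one, since DDL takes ACCESS EXCLUSIVE locks on the dropped objects:

```sql
BEGIN;
DROP SCHEMA prod CASCADE;        -- CASCADE is required if prod contains tables
ALTER SCHEMA dev RENAME TO prod;
COMMIT;
-- In-flight SELECTs hold ACCESS SHARE locks on the tables they read, so the
-- DROP waits for them to finish; queries that start during this transaction
-- block until COMMIT, then resolve "prod" to the renamed schema's tables.
```

One consequence worth planning for: the swap transaction can itself be stalled indefinitely by a single long-running reader, so a lock_timeout plus retry around the swap is a common pattern.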
foobar0100 (641 rep)
May 2, 2016, 03:19 AM • Last activity: Jul 28, 2025, 04:05 PM
0 votes
1 answer
145 views
Storing revisions and relations between revised items in 1:N and N:N scenarios for audits
I have several of the following situations in a MySQL schema that I am designing for a custom CRM/ERP solution.

1. 1:N related tables SO and SO_ITEMS (SO is just one example; I have several) that both need to have revisions. The data sets stored tend to get pretty big for some of these tables, and according to the business rules I gathered, there will be many changes that need auditing, so copying all the child items for each revision of the parent will not scale very well, I predict.
2. N:N related tables SO and PO (where I use a pivot table, of course) that need to be audited also, so that changes from at least one side imply (or may imply) storing an updated revision of the data entry from the other table. (We need not worry about the child _items entries for each of these tables, because although they may be related, I don't require auditing for now.)

My strategy for now is to use a separate table suffixed with _revision for each of the above tables, and to store the current revision number in the normal table and a revision number for each revision, at least for the parents. So my questions are:

* Is my schema flawed in some major way that I don't foresee for this kind of scenario? (I suppose there are many other applications out there that have solved this problem.)
* Should I make a direct relationship between x_revision and y_revision, or use the revision number and query through the normal (current revision?) table? I want the variant that reduces complexity the most, and I'm pretty sure querying through is simpler for most operations. (I suspect there will not be a big disparity between SELECT/INSERT/UPDATE in normal operation, so I don't need to optimise for one type of DB operation.) Am I missing something?
* Is it common practice to copy-on-write the revisions, or should I just lazily insert entries in the revisions table only when a change appears? Any possible pros/cons to these two options?
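A sketch of the revision-number strategy with child rows tied to a parent revision, so unchanged children need not be copied for every parent revision; all names are illustrative:

```sql
-- Parent carries a pointer to its current revision.
CREATE TABLE so (
    id               bigint PRIMARY KEY,
    current_revision int NOT NULL
);

CREATE TABLE so_revision (
    so_id       bigint NOT NULL,
    revision_no int NOT NULL,
    total       decimal(10,2) NOT NULL,  -- audited columns snapshotted here
    created_at  datetime NOT NULL,
    PRIMARY KEY (so_id, revision_no),
    FOREIGN KEY (so_id) REFERENCES so(id)
);

-- Items belong to a specific parent revision via the composite key.
CREATE TABLE so_item_revision (
    so_id       bigint NOT NULL,
    revision_no int NOT NULL,
    item_no     int NOT NULL,
    qty         int NOT NULL,
    PRIMARY KEY (so_id, revision_no, item_no),
    FOREIGN KEY (so_id, revision_no)
        REFERENCES so_revision(so_id, revision_no)
);
```

Linking by (id, revision_no) rather than a direct x_revision-to-y_revision FK keeps the design uniform across all the audited pairs, and it naturally supports the lazy variant: a revision row is inserted only when something actually changed.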
Byakkun (1 rep)
May 11, 2016, 09:52 AM • Last activity: Jul 20, 2025, 12:08 AM