
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
217 views
Migrate legacy application to new db/schema
I apologize if I made any mistakes or am missing info, first post here! I am in the process of rewriting a website from scratch on a newer tech stack with a completely redesigned architecture. The website in question is a real estate website for renting houses for short periods of time. It was designed/created in 1998-1999 and has been working ever since. A few cosmetic redesigns have been made to the website, but no changes have been made to the database or the architecture except for hosting-mandated updates to the infrastructure (they run on one of those shared spaces). The current stack is as follows:

* DB: MySQL (10.1.41-MariaDB-cll-lve)
* Server: PHP 7.0
* Client (Web): Simple Bootstrap website, nothing fancy

The system was originally designed by a student who had no idea of or experience with these kinds of systems. The structure of the DB is quite flat, and there are just a few tables (< 5). There are almost no joins in the code, and as an example, their booking periods are all put together in the same field, with the dates split by delimiters and concatenated in the format
DD:MM:YYYY-DD:MM:YYYY/DD:MM:YYYY-DD:MM:YYYY/...
So whenever they need to check for availability, they parse this string. Also, data types are scarcely used and most of the fields are strings. I have rewritten the entire system with a new architecture and schema, using relations whenever possible as well as adding quite a few tables. This is the stack:

* DB: PostgreSQL
* Server: Spring Boot (latest) with Spring Data
* Client (Web): Vue.js

I now need to migrate the old data to the new system, and I am not sure how to proceed. I have thought of two different options:

1. Write a script that takes all the data from the old DB, creates the relations, adapts it to the new schema, and stores it in the new DB (**EDIT**: the script could also parse the dates, etc.). Of course this needs to be properly tested, but I am not in a time crunch. The script could be written either in Python (to let me manipulate the fields without having to deal with types) or in Java (since I use Spring), using the new models directly.
2. Extract the data from MySQL and dump it into PostgreSQL. Then write some kind of adapter that allows me to read the data and manipulate it little by little whenever needed. I am afraid of this option, as Spring Data is quite strict at times and I would need to duplicate a lot of the models I have created.

Keep in mind the following things:

1. There are about 5000 entries across all the tables, so not a huge migration. Still, they hold a lot of information that needs to be accurate and cannot be lost.
2. Spring Data complains quite easily about foreign key restrictions if you touch the schema in any way (i.e. manually), and therefore I am afraid of doing all the work and then having to deactivate safety features.
3. The migration will be done only once. Once verified, we will keep a copy of the old database just to be sure, but everything else will be moved to the new system.

**EDIT:** As pointed out in the comments, I forgot to ask the actual question :). Are any of the options I listed a good way to approach this problem? Are there any tools out there that may help me with this migration? I think that option 1 is the best for my case, am I missing something? Any help is greatly appreciated!
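For illustration only: since option 1 involves parsing the delimiter-packed booking string, here is a minimal sketch of how that could be done directly in PostgreSQL once the raw data has been staged; the legacy_bookings table and its id/periods columns are hypothetical names, not from the post.

```sql
-- Explode the legacy 'DD:MM:YYYY-DD:MM:YYYY/...' string into one row per booking period
SELECT
    lb.id,
    to_date(split_part(p.period_txt, '-', 1), 'DD:MM:YYYY') AS start_date,
    to_date(split_part(p.period_txt, '-', 2), 'DD:MM:YYYY') AS end_date
FROM legacy_bookings AS lb
CROSS JOIN LATERAL regexp_split_to_table(lb.periods, '/') AS p(period_txt);
```

A query like this could feed an INSERT ... SELECT into a properly typed bookings table in the new schema.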
ruben1691 (101 rep)
Nov 14, 2019, 03:31 PM • Last activity: Jun 22, 2025, 03:01 AM
0 votes
1 answer
1177 views
The SELECT permission was denied on the object '__EFMigrationsHistory', database 'master', schema 'dbo'
I'm trying to create an ASP.NET Core Web API. (https://youtu.be/EmV_IBYIlyo?si=GcPVN7BuautDy6-j&t=1351) At the point where he executes Add-Migration InitialCreate and Update-Database, my console spits out this error: > The SELECT permission was denied on the object > '__EFMigrationsHistory', database 'master', schema 'dbo'. The database is Azure SQL. It's fresh and new, and I use administrator (i.e. owner) credentials. My question is: how do I correctly configure a user that can run the migrations and has permission to SELECT and INSERT data in the __EFMigrationsHistory table? I also tried to follow this document (https://learn.microsoft.com/en-us/azure/azure-sql/database/logins-create-manage?view=azuresql), however if I try any actions with ALTER SERVER ROLE it says that I don't have permission. Thank you.
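For illustration, a minimal T-SQL sketch of one way to set up a dedicated migration user, assuming a contained database user created while connected to the target database (not master); the user name and password are placeholders.

```sql
-- Run in the *target* Azure SQL database as the server admin
CREATE USER migration_user WITH PASSWORD = 'Use-A-Strong-Password-1!';

ALTER ROLE db_ddladmin   ADD MEMBER migration_user;  -- create/alter tables, incl. __EFMigrationsHistory
ALTER ROLE db_datareader ADD MEMBER migration_user;  -- SELECT on __EFMigrationsHistory
ALTER ROLE db_datawriter ADD MEMBER migration_user;  -- INSERT into __EFMigrationsHistory
```

The error naming database 'master' may also indicate that the connection string does not specify the target database, in which case commands run against master by default.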
Lukas Čaplikas (1 rep)
Aug 24, 2023, 10:22 AM • Last activity: May 6, 2025, 05:03 AM
2 votes
2 answers
53 views
Transferring schema from MySql to MariaDB
- My development environment has XAMPP, which means I have MariaDB installed.
- The production server has MySQL 8 installed.

1. When I transfer new schema changes from MariaDB to MySQL, it mostly works; I sometimes have to fiddle with it.
2. But my main problem is when I back up the MySQL schema and data and try to restore it to my PC - I have pretty much given up. It requires too many changes. Many issues are related to character sets.

Is there something I can do to make #2 easier? Maybe some rules about what to avoid on MySQL.
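For illustration, one hedged approach to the character-set issues is to normalize both servers to a character set and collation that exist on both sides before dumping; the database and table names below are placeholders.

```sql
-- Align defaults so dumps restore more cleanly in either direction
ALTER DATABASE mydb CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
ALTER TABLE mydb.listings CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;

-- Check what each server actually uses
SELECT @@character_set_server, @@collation_server;
```

A frequent culprit is MySQL 8's default collation utf8mb4_0900_ai_ci, which MariaDB does not recognize, so dumps taken with it fail to restore without edits.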
Rohit Gupta (2126 rep)
Mar 26, 2025, 07:59 AM • Last activity: Mar 27, 2025, 01:43 PM
2 votes
1 answer
1561 views
Adding data to a DB as migrations
**Some Background**

We currently have a *PostgreSQL* DB that is "read-only", i.e. no user is allowed to write to it and all insertions are done in a controlled manner. For a while, these were done by manually adding the data, or by creating a copy of the DB locally and using pg_dump to put it into the production environment. There were obvious limitations to that, and since we also needed to version the DB (i.e. an addition to the contents or a change of structure means we need to update the version number), we decided to use [flyway](https://flywaydb.org/). This was good since we would get versioning "for free" by using the migrations. We have been using Flyway not only to modify the structure of tables and the DB, but also to insert new data.

**The Problem**

The migration files are getting quite big, and applying the migrations (or anything similar) is becoming quite a time-intensive process, especially when applying from scratch. Since DB management/engineering is not my forte, I've decided to reach out:

**The Question**

Are there best practices or a better way to handle this situation, i.e. structural changes AS WELL as data additions? My best guess was to use the migration files only for structural changes and keep the data in files (CSV or other) that get loaded programmatically after the migrations run. But this is only OK the first time; after that the file would be obsolete, and subsequent files would be needed for new data that fits possible structural changes...
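One common pattern for the data side, sketched here as an assumption rather than a recommendation from the post: keep reference data in CSV files and load them from a small migration with COPY, so the structural migrations stay lean. File, table, and column names are illustrative, and COPY ... FROM a file needs server-side file access (psql's \copy is the client-side equivalent).

```sql
-- e.g. the body of a migration such as V42__load_country_codes.sql (name is made up)
TRUNCATE TABLE country_codes;
COPY country_codes (code, name)
FROM '/data/seed/country_codes.csv'
WITH (FORMAT csv, HEADER true);
```

Pointing a later migration at a new or delta CSV keeps the versioned history while avoiding ever-growing INSERT scripts.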
fditz (121 rep)
Jan 13, 2022, 12:20 PM • Last activity: Feb 8, 2025, 05:05 PM
0 votes
0 answers
27 views
Is this a good definition of schema migration?
I'm trying to preserve the meaning of the English word 'migration' in the definition, and I think I might be able to do that without making up a fake definition. Here goes: > A schema migration is the change of a database's schema from one **state** to another through time, as is distinct from the more general definition of data migration, in which data is moved from one **location** to another. Anyway, this is basically original research, because it's hard to find these kinds of definitions sometimes. So I'd really appreciate some expert feedback.
Joel (1 rep)
Feb 23, 2024, 09:46 PM
5 votes
2 answers
455 views
Can I change the owner of the cdc schema?
I've enabled Change Data Capture (CDC) on a database using exec sys.sp_cdc_enable_db. This creates a new *cdc* schema, which is owned by the *cdc* user:
select
	s.*
    , dp.name
from sys.schemas as s
join sys.database_principals as dp
    on dp.principal_id = s.principal_id
Because I want to leverage ownership chaining and read from this schema using a proc, can I change the owner to *dbo*?
ALTER AUTHORIZATION ON SCHEMA::cdc TO dbo
This command works and changes the owner. But CDC has been enabled for a while, and I'm worried about breaking something further down the road.
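As a hedged aside, two quick sanity checks that could be run after the ALTER AUTHORIZATION to confirm capture still looks healthy; this is a sketch, not guidance from the post.

```sql
-- Capture instances should still be listed after the ownership change
EXEC sys.sp_cdc_help_change_data_capture;

-- Tables should still report as tracked by CDC
SELECT name, is_tracked_by_cdc
FROM sys.tables
WHERE is_tracked_by_cdc = 1;
```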
Zikato (5724 rep)
Sep 8, 2023, 09:55 AM • Last activity: Sep 8, 2023, 05:50 PM
17 votes
2 answers
6057 views
Best practices for keeping protobuf spec in sync with relational database (e.g. PostgreSQL)
**1.** We have a schema definition in Google's Protocol Buffers format that we use to generate code for different gRPC implementations (Python, PHP). **2.** We have a database (PostgreSQL) that is initialized from SQL files which map the Protocol Buffers structure.

**Problem:** Frequently, the protobuf spec changes. This must be reflected in the database, which is a tedious task and prone to mistakes when done manually. We recently moved to Alembic Migrations for migrating continuous updates to the (live) database structures when we update the Protocol Buffers specification. However, we still feel like we need to maintain two structures that are basically equal or at least very similar: ProtoBuf and SQL code (managed via Alembic Migrations or in plain *.sql files). I have searched over and over and there is just very little information on how others solve this problem. Ideally, we would like to set up a CI/CD process that converts the protobuf specification (e.g. a message name changed from teaspoon to knife) to an SQL schema definition (e.g. CREATE TABLE teaspoon (...);) or creates SQL migration code from it (e.g. ALTER TABLE teaspoon RENAME TO knife;). I know it is more complicated in reality than my simple examples, but I can't even find code that supports half-automated, human-supervised conversion. Any advice?

What I have found so far:

- from SQL files, we can create a PostgreSQL DB (e.g. in Docker) and use [sqlacodegen](https://github.com/agronholm/sqlacodegen) to automatically create an SQLAlchemy model
- from the SQLAlchemy model, we can use Alembic Migrations to create (half-)automatic schema migration code
- we're back to square one: we need to maintain the base SQL files and the protobuf spec
- there is [Mercator](https://mercator.readthedocs.io/en/latest/sqlalchemy-orm-support.html), but as far as I understand it can only map from SQL to gRPC, not vice versa
- there's also [migra](https://github.com/djrobstep/migra), which looks excellent for generating auto-diffs for different PostgreSQL schemas (here's a [presentation](https://djrobstep.com/talks/your-migrations-are-bad-and-you-should-feel-bad) and the [docs](https://djrobstep.com/docs/migra/quickstart))

Maybe there could be something like "Protobuf Migrations", where one can define SQL migrations for each protobuf spec change?
Alex (413 rep)
Jan 17, 2020, 01:26 PM • Last activity: Aug 31, 2023, 01:45 PM
1 votes
1 answer
259 views
T-SQL schema changes that can be performed in O(1)
We want to minimize the downtime for our users during the upgrade process and want to ensure that our schema changes can all be performed in O(1) and are not O(n). Is there a list of documented schema changes that can be performed in O(1) with Microsoft SQL Server? E.g. adding a nullable column is O(1) - at least our empirical tests show that.
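For illustration, a minimal T-SQL sketch contrasting a change that is metadata-only with one that scales with table size; dbo.Orders and its columns are hypothetical.

```sql
-- Metadata-only: adding a NULLable column does not rewrite existing rows
ALTER TABLE dbo.Orders ADD Notes nvarchar(200) NULL;

-- Size-of-data: making an existing column NOT NULL must at least read every row to validate it
ALTER TABLE dbo.Orders ALTER COLUMN CustomerRef nvarchar(100) NOT NULL;
```

Whether a given ALTER stays metadata-only can also depend on version and edition (for example, adding a NOT NULL column with a default is metadata-only only on newer/Enterprise builds), so the empirical tests mentioned above remain worthwhile.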
D.R. (343 rep)
May 23, 2023, 09:13 AM • Last activity: May 23, 2023, 12:13 PM
1 votes
1 answer
392 views
Migrate MyISAM tables to InnoDB using replication to avoid a long down time
We have to migrate MyISAM tables to InnoDB but we are afraid of a long downtime. In our database we have 10 MyISAM tables, of which the 2 biggest hold ~26GB of data + ~10GB of indexes. **My question is**: Can I set up replication between the primary server (with the original engine's tables) and a replica where all tables were migrated to InnoDB? Then the replica will become our main DB. I read a few articles about migrating tables from MyISAM to InnoDB (like: http://mysql.rjweb.org/doc.php/myisam2innodb or https://dev.mysql.com/doc/refman/5.7/en/converting-tables-to-innodb.html) and additional steps are required (like removing duplicated indexes and server configuration changes), but will it work?
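For illustration, a hedged sketch of converting tables on the replica while its SQL thread is stopped; the table name is made up.

```sql
-- On the replica only
STOP SLAVE;
ALTER TABLE bookings ENGINE=InnoDB;   -- rebuilds the table; repeat per MyISAM table
START SLAVE;

-- Confirm nothing is left on MyISAM
SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = DATABASE() AND engine = 'MyISAM';
```

Replication does not require the same storage engine on primary and replica, so convert-on-the-replica-then-promote is a common way to limit downtime, but it should still be rehearsed on a copy first.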
Ela (500 rep)
Jan 14, 2023, 02:39 PM • Last activity: Jan 14, 2023, 08:12 PM
5 votes
3 answers
2471 views
Migrating existing tables to use INT primary key instead of UUID mysql v8
I'd love to hear everyone's opinion on this issue. I currently use InnoDB MySQL v8 in RDS. Our DB is moderate, with ~130 tables; most are small, with our largest table having ~30 million rows over 6 years. We've currently scaled our RDS instance vertically way larger than our database should need, and we're still CPU bound and can't use replicas because the slave gets too far behind. I'm convinced it's because we have UUID version 1 as the primary key for our tables and have to do lots of string comparisons for our application. What I'd like to do is prove it by creating a subset of tables that use an auto-incrementing INT as the primary key and foreign keys, but I'm struggling to see how this could work. If I just add an auto-incrementing column to the current tables, the queries that our ORM uses won't include it, so it wouldn't be that much of a gain. What I am wondering is how I could use a transition table to essentially swap out the values from UUIDs to INTs. For instance, an ORM query now says something like: select id_primary_key from table_old where id_foreign_key in (UUID1, UUID2, UUID3) with an output of: 1d3sed4a-5812-a35c-0204-515f6at42b46. If I create a transition table that has an auto_inc INT and the old UUID, what would be the next step so that when the ORM runs that query, it queries against INTs instead of UUIDs? select id_primary_key from table_new where id_foreign_key in (1, 2, 3) output: 58743. Any thoughts? UPDATE: It looks like we're going to be looking into more of a global model in the future, where we have multiple DBs, and as such will keep the UUIDs. We'll try to go the sharding route as well as storing them as VARBINARY(16) and using UUID_TO_BIN() with the swap flag set to true, because we're using UUID v1. Thank you all for your answers and time! I learned a lot from testing it.
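For illustration, a sketch of the VARBINARY(16) approach mentioned in the update, using MySQL 8's built-in conversion functions; table and column names are placeholders.

```sql
-- Store UUIDv1 values as 16 bytes, swapping the time parts for better index locality
ALTER TABLE table_old ADD COLUMN id_bin VARBINARY(16);

UPDATE table_old
SET id_bin = UUID_TO_BIN(id_primary_key, 1);   -- second argument 1 = swap time-low and time-high

-- Read it back in the familiar text form
SELECT BIN_TO_UUID(id_bin, 1) FROM table_old LIMIT 5;
```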
Muab Nhoj (427 rep)
Nov 16, 2022, 02:14 PM • Last activity: Nov 18, 2022, 12:54 PM
0 votes
1 answer
353 views
How to speed up Oracle to PostgreSQL data transfer
I have an Oracle schema that I am transferring to PostgreSQL with ogr2ogr. This particular table has 5 columns of number(22) each. There are some 300M rows. In under 10 minutes I can transfer this table between two Oracle servers over the network. ogr2ogr took a day to transfer 10M records and will take a month to transfer the whole table. I suspect the problem is on the receiving end. Data is transferred to AWS PostgreSQL with 4 GB of RAM. How do I debug the bottleneck and speed up the transfer?
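For illustration, one way to rule out row-by-row inserts on the receiving side is to bulk-load a pre-exported CSV with COPY; the file, table, and column names below are placeholders.

```sql
-- psql meta-command: streams the file from the client and runs COPY on the server
\copy target_table (col1, col2, col3, col4, col5) FROM 'export.csv' WITH (FORMAT csv)

-- While a slow load runs, check whether the writer is doing single-row inserts and what it waits on
SELECT query, state, wait_event_type, wait_event
FROM pg_stat_activity
WHERE datname = current_database();
```

Loading into a table without indexes and creating them afterwards also tends to help at this scale.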
Stepan (213 rep)
Jul 26, 2019, 02:14 PM • Last activity: Apr 11, 2022, 11:01 AM
0 votes
2 answers
1092 views
Error when migrating MySQL 8 to SQL Server 2019
I get an error while migrating a database from MySQL 8.0 to SQL Server 2019 using the SQL Server Migration Assistant. The error message: >ExecuteReader requires an open and available Connection. The connection's current state is closed. The same error happens on 95% of my tables, and occurred after I created the schema and ran the migrate data process. Any ideas?
Carlos Vaz (11 rep)
Jan 10, 2022, 04:08 PM • Last activity: Feb 13, 2022, 01:45 PM
0 votes
0 answers
100 views
Horizontal scaling with database schema
I am designing a database schema for our customers on-premises, which will be moved to Azure PaaS later. Regarding "scale out" strategies, I lean towards creating one database and creating a schema for each and every customer to store the database objects such as tables, procedures, and views. However, I find little to no information on how to keep track of schema changes in each and every schema and put a version number on each. Tools such as Liquibase or FluentMigrator seem to have limited support for this, as they appear to be geared towards a single database. What are the best practices to version and manage multiple database schemas within one database?
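For illustration, a minimal T-SQL sketch of a per-schema version table, assuming a hypothetical customer_a schema already exists; none of the names come from the question.

```sql
CREATE TABLE customer_a.schema_version
(
    version     int           NOT NULL CONSTRAINT PK_schema_version PRIMARY KEY,
    description nvarchar(200) NULL,
    applied_at  datetime2     NOT NULL CONSTRAINT DF_schema_version_applied_at DEFAULT SYSUTCDATETIME()
);

-- Each deployment script records the version it brings that schema to
INSERT INTO customer_a.schema_version (version, description)
VALUES (42, 'add invoices table');
```

A deployment tool or small script can then loop over the customer schemas, compare each schema_version against the target, and apply only the missing steps.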
Avi (265 rep)
Jan 17, 2022, 06:15 PM
0 votes
1 answer
1817 views
Achieving best practice schema and permissions management on PostgreSQL
I have a PostgreSQL database (12, moving to 13 imminently) with a single (Rails) application running on it, and no other users or systems connecting. The application uses the postgres user to connect to the database for all purposes - regular querying, creating and altering tables/indexes etc. - and the application's objects are all in the PUBLIC schema. I want to allow (human) users to connect to this database to run data analysis queries against the application data, for those users to be read-only on some parts of the application schema, and for them to have no other privileges on the instance (other than to connect, of course). I think there are some "sub-optimal" practices already in place here, which might be common among inexperienced PostgreSQL users, and this question addresses what they are and how they should best be fixed. This seems like it might be a situation common to many out-of-the-box Rails (perhaps other) apps. I think these are the suboptimal practices:

1. I don't think the postgres user is appropriate for an application connection, as it is over-privileged for that. A dedicated application user would be better.
2. If I want to create read-only users with access to only some database tables and columns, having the application in the PUBLIC schema is tricky. At the least, privileges need to be revoked, and then it is not really PUBLIC. That seems confusing, so a dedicated application schema would be better.

The plan I've come up with is to have:

- a dedicated user for the application.
- a single schema dedicated to the application.
- no public schema - I don't see a need for it, and it not being actually public sounds confusing.
- a read-only role that can be assigned to individual analyst users.

Steps to migrate to it:

1. Rename the current PUBLIC schema to APP_SCHEMA (for example).
2. Create a new user APP_USER (for example).
3. Grant all on APP_SCHEMA to APP_USER.
4. Revoke any privileges that public has on APP_SCHEMA.
5. Change the search path to be APP_SCHEMA.
6. Create APP_READONLY_ROLE.
7. Grant select on required APP_SCHEMA tables/columns to APP_READONLY_ROLE.
8. Create analyst users, assigning APP_READONLY_ROLE to them.

(This is RDS, so another possibility is to add an APP_READONLY_USER and grant privileges to use it through IAM.)

So two questions I guess: does that plan seem sensible, and do those migration steps get there?
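For illustration, a minimal SQL sketch of those migration steps, using the hypothetical names from the plan plus placeholder database, table, and analyst names; this is a sketch, not a tested runbook.

```sql
-- 1-2: rename the schema and create the application login
ALTER SCHEMA public RENAME TO app_schema;
CREATE ROLE app_user LOGIN PASSWORD 'change-me';

-- 3-4: application privileges; strip the default PUBLIC grants
GRANT USAGE, CREATE ON SCHEMA app_schema TO app_user;
GRANT ALL ON ALL TABLES IN SCHEMA app_schema TO app_user;
GRANT ALL ON ALL SEQUENCES IN SCHEMA app_schema TO app_user;
REVOKE ALL ON SCHEMA app_schema FROM PUBLIC;

-- 5: make the renamed schema the default search path
ALTER DATABASE app_db SET search_path = app_schema;

-- 6-8: read-only role for analysts (column-level grants are also possible)
CREATE ROLE app_readonly_role NOLOGIN;
GRANT USAGE ON SCHEMA app_schema TO app_readonly_role;
GRANT SELECT ON app_schema.some_table TO app_readonly_role;
CREATE ROLE analyst_alice LOGIN PASSWORD 'change-me-too';
GRANT app_readonly_role TO analyst_alice;
```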
David Aldridge (2198 rep)
Nov 14, 2021, 10:33 AM • Last activity: Nov 22, 2021, 08:58 AM
0 votes
1 answer
94 views
Restructuring table - start with PK or not?
I am restructuring a bunch of tables in my database (in this case using a smaller data type for some columns - in most cases cutting those by ~50%, and they tend to be *large* columns). I've gone through the process of creating a new table with the wanted structure, copying all the data over, dropping/renaming the old table and then applying all constraints, indexes, reseeding etc... Should I create the new table with the primary key (clustered index) or should I be creating it without? My understanding is that creating it without the PK means faster copying at the cost of building the clustered index later, while creating it with the PK means no rebuild later. Considering that the data source should already be sorted by the same PK (the PK doesn't usually change type on these tables), what's the best option? Data sizes vary between the different tables - some have a few million rows, most have far less. These are hosted on AWS on db.t2.medium / db.m4.large instances. The data is copied with INSERT ... SELECT. The recovery model of the DB is FULL.
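For illustration, a hedged T-SQL sketch of the two orderings being compared; dbo.Orders_new and its columns are hypothetical.

```sql
-- Option A: create with the clustered PK up front, then copy in key order
CREATE TABLE dbo.Orders_new
(
    OrderId int          NOT NULL CONSTRAINT PK_Orders_new PRIMARY KEY CLUSTERED,
    Payload varchar(100) NULL     -- the column whose type is being shrunk
);
INSERT INTO dbo.Orders_new (OrderId, Payload)
SELECT OrderId, Payload FROM dbo.Orders ORDER BY OrderId;

-- Option B: create as a heap, copy, then build the clustered PK afterwards
-- CREATE TABLE dbo.Orders_new (OrderId int NOT NULL, Payload varchar(100) NULL);
-- INSERT INTO dbo.Orders_new (OrderId, Payload) SELECT OrderId, Payload FROM dbo.Orders;
-- ALTER TABLE dbo.Orders_new ADD CONSTRAINT PK_Orders_new PRIMARY KEY CLUSTERED (OrderId);
```

With the source already ordered by the same key and only a few million rows, the difference is often modest; option B pays for a separate index build (fully logged under the FULL recovery model), while option A maintains the b-tree as rows arrive.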
Oded (334 rep)
Oct 18, 2019, 08:20 AM • Last activity: Oct 24, 2021, 08:30 AM
1 votes
1 answer
1053 views
Error: Not run migration 1604169742656_add-users-table is preceding already run migration 1604024614269_table-comments
I was trying to run a migration file to create a table of users but I kept getting the following error: Error: Not run migration 1604169742656_add-users-table is preceding already run migration 1604024614269_table-comments I worked around it by creating a new database as I was pressed for time, but does anyone know what the interpretation of this error is? Did I need to first perform a drop of the current table? This is the migration I was trying to run:

exports.up = pgm => {
  pgm.sql(`
    CREATE TABLE users (
      id SERIAL PRIMARY KEY,
      created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
      updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
      bio VARCHAR(400),
      username VARCHAR(30) NOT NULL
    );
  `);
};
Daniel (281 rep)
Oct 31, 2020, 06:53 PM • Last activity: Sep 20, 2021, 02:58 PM
0 votes
1 answer
104 views
Table design comment_seen vs comment_unseen — migrate data from one to another
I have a system that allows comments posted by users and tracks which users have seen which comments. The read/unread state is tracked in the comments_seen table. Sure enough, it has grown tremendously. My goal is to move from the comments_seen table to comments_unseen. Read/unread status is only tracked for the comment author and users in the admin group (e.g. admins see all new comments on the site and should react accordingly, while regular users only see answers to their comments or comments on their orders). The current comments_seen table is almost 36M rows, while its current definition is as simple as
CREATE TABLE comments_seen
(
    user_id    int(10) NOT NULL,
    comment_id int(10) NOT NULL,
    CONSTRAINT user_id
        UNIQUE (user_id, comment_id)
)
The comments_unseen table could be very much the same. What would be your suggestions on the steps for moving from one table design to the other? Any triggers to keep the data consistent? Any deployment/migration steps? All queries in the code could be rewritten at once, I guess, so the time spent supporting both tables can be minimized to, say, seconds or minutes. The overall application load isn't very high and everything could be done at night so as not to affect performance or UX for most users. Any thoughts, suggestions, and shared experience are very much appreciated. The DB is MySQL 5.5. **UPDATE 2021-04-01**: As pointed out in the comments, 35M rows isn't huge, so the culprit must be the query. Here's the slowest query (as reported by Percona's pt-query-digest):
SELECT
			SUM(is_deleted IS NOT TRUE) AS total_count
			, SUM(parent_id = 0) AS branch_count
			, SUM(parent_id = 0 AND is_can_answer) AS opened_branch_count
			, SUM(parent_id = 0 AND NOT is_can_answer) AS closed_branch_count
			, SUM(cs.comment_id IS NULL AND is_deleted IS NOT TRUE) AS new_count , SUM(cs.comment_id IS NULL AND is_deleted IS NOT TRUE AND module = 5) AS order_comment_count , SUM(cs.comment_id IS NULL AND is_deleted IS NOT TRUE AND module = 4) AS items_comment_count 
		FROM (
                            (
								SELECT DISTINCT
									5 AS module
									, 'm' AS item_type
									, co.id AS item_id
								FROM cart_orders AS co
								WHERE co.author = 65
								OR co.manager_id = 65
								
							)
							UNION
							(
								SELECT DISTINCT
									5 AS module
									, 'i' AS item_type
									, pr.id AS item_id
								FROM cart_orders AS co
									JOIN cart_package AS cp
										ON cp.order_id = co.id
									JOIN pictures_relations AS pr
										ON pr.module_id = 5
											AND pr.item_id = cp.id
											AND pr.item_type = 'opkg'
											AND pr.is_file IS FALSE
								WHERE co.author = 65
								OR co.manager_id = 65
								
							)
							UNION
							(
								SELECT DISTINCT
									4 AS module
									, 'm' AS item_type
									, cat.id AS item_id
								FROM catalog AS cat
									INNER JOIN (
										cart_positions AS cp
										JOIN cart_orders AS co
											ON cp.order_id = co.id
									)
										ON cp.cat_id = cat.id
								WHERE co.author = 65
								OR co.manager_id = 65
								
							)
							) AS source JOIN comments AS a USING (module, item_id, item_type) 
			LEFT JOIN comments_seen AS cs ON cs.comment_id = a.id AND cs.user_id = 65 
		WHERE ((TRUE))
Execution time for the whole query, and its UNIONed parts:
complete - 8s 451ms

1 - 775ms
2 - 1s 214ms
3 - 3s 623ms

2+3 - 4s 891ms
1+3 - 4s 314ms
1+2 - 1s 945ms
Clearly, there's only a single JOIN on both the comment_id and user_id columns, so the issue seems not to be in the comments_seen table as I initially thought, but rather in the overall poor table design.
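If both tables have to coexist briefly during the switch, one transitional option is a trigger so that marking a comment as seen also clears it from the new table; this assumes comments_unseen has the same (user_id, comment_id) shape and is purely illustrative.

```sql
DELIMITER $$
CREATE TRIGGER comments_seen_after_insert
AFTER INSERT ON comments_seen
FOR EACH ROW
BEGIN
    -- keep the new table consistent while old code still writes to comments_seen
    DELETE FROM comments_unseen
    WHERE user_id = NEW.user_id
      AND comment_id = NEW.comment_id;
END$$
DELIMITER ;
```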
Eduard Sukharev (129 rep)
Mar 31, 2021, 06:28 PM • Last activity: Apr 1, 2021, 06:30 PM
2 votes
3 answers
814 views
Is it good practice to refer to a table with different names in a database schema?
We have a database currently running with a table named a certain way (let's say thing for the sake of example), and references to that table named the same way (an FK would be named thing_id, for instance). We also used to have this thing concept in the code at some point, but then it was decided for various reasons to rename it (let's say to stuff), which was done over time. Now, the code no longer mentions thing except in SQL queries. We unfortunately can't easily rename the table or columns since they're heavily used, and we were wondering how to deal with new tables referring to this concept of thing/stuff.

* Some of the team think that the schema should be as up-to-date as possible, and that we should use the new naming (stuff) everywhere new (in join table names and FK names referring to thing).
* Others think that the database schema should always stay consistent, and thus we should keep using the old naming (thing) everywhere, and handle the renaming in the code itself.

We're trying to understand what the best practices are here and what is advised in this situation where the DB schema and the code naming diverge.
Aweb (21 rep)
Feb 10, 2021, 04:26 PM • Last activity: Feb 12, 2021, 12:39 AM
0 votes
1 answer
153 views
Deploying Oracle ADW Changes from development environment to production
I'm currently running an Oracle Autonomous Data Warehouse (Oracle's cloud-based database offering) on version 19c. I have a development database on the Always Free Tier and my production database is a paid instance (with backups etc.). I will likely also be creating a staging instance soon. When making changes to the database schema, we have been proceeding as follows:

1. Change the schema on the development environment using SQL statements (CREATE, ALTER, etc.)
2. Test the development environment manually
3. Change the schema on the production environment using identical SQL statements
4. Test the production environment to make sure nothing is broken

I am aware that this is not the way it should be done; we need to implement some CI/CD to streamline these changes, but we've only recently come out of our piloting phase, where it was a rush to get the product working, and I'm very keen to firm up our new systems. How I'm imagining it'd go (changes in bold):

1. Change the schema on the development environment using SQL statements (CREATE, ALTER, etc.)
2. **Test the development environment with automated tests**
3. **Apply some tool to propagate the same changes to the production system**
4. Test the production environment to make sure nothing is broken

I have seen that tools exist for other databases, such as MySQL, PostgreSQL, etc., but from my searches on Oracle-specific tools I'm not sure where to find a tool that can propagate the schema changes from our development database to a production database. If you know of any tools that I can look at for this, then I'd appreciate a pointer on where to look! I'm aware this is similar to other questions; the ones I've looked at are:

- https://dba.stackexchange.com/questions/61688/altering-db-schema-in-production-environment
- https://dba.stackexchange.com/questions/28478/upgrade-production-database-during-software-development-lifecycle

Both of these questions were asked several years ago, and so I'm not sure if new things have come up recently.
Ash Oldershaw (121 rep)
Jun 30, 2020, 01:12 PM • Last activity: Jun 30, 2020, 03:14 PM
0 votes
1 answer
5778 views
MariaDB Replication - Replicate only specific tables and views
Note: A backend developer here with little to no experience in setting up database servers and replication.

**What needs to be done**

Set up DB replication of an existing database with the following constraints:

1. Replicate **only a specific list** of tables/views, with different names in the replicated database.
2. Change the names of the tables/views in the replicated database (during the replication process).
3. Set up a user on the replicated DB with further restrictions, so that only a given set of tables/views can be viewed/updated/deleted.

**Progress so far**

I have already read the document here; however, I did not find anything concrete to help me move forward with all the use cases I wish to support!

**Use Case**

Show only essential data to the external vendor.

PS: If there are any approaches other than replication, I would be happy to consider and implement those as well.
qwertynik (3 rep)
May 30, 2020, 09:34 AM • Last activity: May 30, 2020, 08:14 PM