Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
146
views
SQL Server instance upgrade impacting on other instance maintenance plan jobs
I have two instances on a single server. Instances are named TEST and LIVE for simplicity. Both are running SQL Server 2019. Only the LIVE instance has SQL Agent jobs running (Basic backup jobs via maintenance plans). SSIS is not installed on this environment.
I have just upgraded the TEST instance to SQL Server 2022. All was quite straightforward, except for the ODBC and OLEDB components having to be uninstalled for it to upgrade correctly. All looks okay now on the TEST instance.
However, there is an issue with the LIVE instance, which is still running SQL Server 2019. The backup jobs are failing with:
Could not load package "Maintenance Plans\Backup" because of error 0x80070002.
Description: The system cannot find the file specified.
I have determined that the issue is with the DTExec component by running the backup code, from the SQL Agent job, via the command prompt. If I attempt to run it with the 150\DTS\Binn version (i.e. the SQL Server 2019 version) it fails. However, when I run it with the 160\DTS\Binn version (i.e. the newly installed SQL Server 2022 version) it works.
I would have assumed that, if anything, it would be the other way around.
I cannot see any changes to files within the SQL Server 2019 folders. I am guessing some shared component has been altered and is causing this, but I cannot track it down.
I am quite sure that this problem will be resolved when I upgrade the LIVE instance, but I would like to know why the upgrade has caused an issue. If there were multiple instances on a server then this could be much more of a problem.
I have tried all the obvious sources of information (including ChatGPT) for help, but nothing is telling me why this is happening.
Does anyone have an idea?
Additional information: Reinstalling the old drivers makes no difference. I get the same results.
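For reference, the comparison described above looks roughly like the following; the server name is a placeholder and the paths assume a default 64-bit install (maintenance-plan packages live in msdb, hence the /SQL switch):
"C:\Program Files\Microsoft SQL Server\150\DTS\Binn\DTExec.exe" /SQL "Maintenance Plans\Backup" /SERVER "SERVERNAME\LIVE"
rem fails with 0x80070002
"C:\Program Files\Microsoft SQL Server\160\DTS\Binn\DTExec.exe" /SQL "Maintenance Plans\Backup" /SERVER "SERVERNAME\LIVE"
rem succeeds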
StuKay
(101 rep)
Oct 23, 2024, 03:58 PM
• Last activity: Jul 25, 2025, 07:03 PM
0
votes
1
answers
141
views
Can I downscale/upscale my MySQL server automatically?
Is this doable? Since we know the peak times of our platform, we are planning to downscale our DB after peak hours and then upgrade the specs before the start of peak hours. How can we do this? Like, how does RDS do this?
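For the RDS part of the question, a resize can be scripted and scheduled (cron, EventBridge, and so on); a minimal sketch with a placeholder instance identifier and class, noting that the instance is briefly unavailable while the class changes:
aws rds modify-db-instance \
    --db-instance-identifier mydb \
    --db-instance-class db.r5.2xlarge \
    --apply-immediately
Run the mirror-image command with the smaller class after peak hours. For a self-managed server, the equivalent is resizing the VM on a schedule and restarting MySQL.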
JRA
(137 rep)
Feb 26, 2021, 09:34 AM
• Last activity: Jul 23, 2025, 08:03 AM
1
votes
2
answers
149
views
Upgrade BI server to 2014 and stay on 2008r2 with the database engine server
We would be happy to upgrade our SQL Server environments to 2014, but we have a tight budget for next year.
Our BI server (SQL Server 2008 Standard Edition, that is used only for BI purposes) is located separately from the Production Database server (SQL Server 2008 Enterprise Edition). The BI server contains our SSIS and SSRS packages and reports.
I was wondering whether it's possible to have a configuration where the BI server has SQL Server 2014 (Standard or Business Intelligence edition) but the database server itself remains on 2008.
Thanks for your answers,
Roni.
Roni Vered
(585 rep)
Dec 10, 2014, 04:18 PM
• Last activity: Jul 15, 2025, 08:06 AM
0
votes
1
answers
173
views
alter table which is under the load (sql server)
We have a table that is being used for sending SMS and other notifications for confirmation as a second factor.
Some changes in business logic require us to alter the table to make some columns NOT NULL.
Our stumbling block is that the table must be accessible 24/7; the application writes to and reads from it pretty frequently, and the table has tens of millions of rows. And when a command like
alter table NOTIFICATION_TABLE
alter column C1 int not null
starts executing, it obviously tries to lock the table and seemingly cannot, because the table is being used by other requests. The situation ends with the alter command hanging, and nothing happens for 5 minutes. After that timeout we stop it, because we do not want to accidentally crash our system.
Any thoughts about how we can implement this modification without stopping our system?
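One hedged variant of the attempt: a lock timeout makes the ALTER fail fast instead of queueing behind other sessions, so it can be retried in a quieter moment. The ALTER still needs a short schema-modification lock when it does get through, and it will also validate the existing rows:
SET LOCK_TIMEOUT 5000;  -- give up after 5 seconds instead of blocking indefinitely

ALTER TABLE NOTIFICATION_TABLE
    ALTER COLUMN C1 int NOT NULL;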
DotNetter
(101 rep)
May 1, 2017, 04:12 PM
• Last activity: Jul 15, 2025, 07:06 AM
0
votes
1
answers
164
views
How do I upgrade a DataStax Enterprise cluster to 6.8.25 with OpsCenter Lifecycle Manager?
What are the steps (or all the commands) to upgrade all nodes in the cluster with Lifecycle Manager (just to the 6.8.25 fix; I have version 6.8.16 now)?
I've found this, but it refers to version 6.5 of DSE:
https://docs.datastax.com/eol/en/opscenter/6.5/opsc/LCM/upgradeDSEjob.html
dba_db2_dse
(21 rep)
Mar 6, 2023, 07:10 AM
• Last activity: Jul 5, 2025, 12:00 PM
0
votes
2
answers
128
views
Can I choose which PostgreSQL version is used by barman?
I installed `barman` on the same server as my production database. Perhaps not a good choice, but it is that way. I use `barman` to make backups of the production database and of databases on other servers. Some of those databases are for development.
When there is a new version of PostgreSQL I first migrate the development databases, to test the upgrade and some applications. When this is done, I do the production database as the last one.
During this time I want to continue to make backups of all of my databases. This is only possible if `barman` uses the new version. I thought that I could resolve this by installing the new version on the server that runs `barman`. I can see that after the installation the `psql` used is from that version. But `barman` says that it uses the version from the production database.
Is there a way to tell `barman` which PostgreSQL version it should use?
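A minimal sketch of one possible knob, assuming a recent barman and a hypothetical server section: barman's per-server path_prefix option tells it where to look for PostgreSQL executables first, so it can be pointed at the new version's bin directory (worth verifying against the barman version in use):
[production]
; hypothetical server section; barman looks here for pg_basebackup etc. first
path_prefix = "/usr/pgsql-16/bin"
conninfo = host=prod-db user=barman dbname=postgres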
Marco
(3720 rep)
Feb 2, 2024, 02:14 PM
• Last activity: Jul 4, 2025, 11:37 AM
0
votes
1
answers
170
views
Upgrading MySQL from 5.7 to 8.0
I need to upgrade one of my servers, running mysql 5.7. The DB and the OS are on different drives on my machine.
Can I drop the DBs from my 5.7 installation, reinstall the server with a new OS and mysql 8.0, and then after the fact just reattach the DBs?
I have been running a mysqldump, but it is very slow.
I am a mysql n00b, btw.
Any advice will be appreciated. I am still trying to figure out what my steps are.
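An in-place route along those lines exists and is gentler than "drop and reattach": MySQL supports upgrading a 5.7 data directory directly to 8.0, so the preserved data drive can be pointed at by the new install. A rough sketch, assuming Linux packaging, default paths, and a backup of the data drive taken beforehand:
# stop the 5.7 server cleanly before reinstalling the OS
sudo systemctl stop mysql
# after the OS reinstall: install MySQL 8.0 and set datadir=/path/to/data-drive in my.cnf
sudo systemctl start mysql   # 8.0.16+ upgrades the system tables itself on first start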
user23324007
(1 rep)
Jan 31, 2024, 04:53 AM
• Last activity: Jul 3, 2025, 05:05 PM
2
votes
2
answers
508
views
MySQL Upgrade on Windows
I have MySQL 5.1 installed on a Windows box. Now I would like to upgrade to the latest MySQL 5.6 version.
I am planning to install MySQL 5.6 on the same server with the same port 3306. After installation, I will map the new installation's data directory to the older data directory used by MySQL 5.1.
After this, I will upgrade using mysql_upgrade.exe.
Will this plan work out or not? Also let me know of any precautions to take.
Edit: This is a production server and the DB is of huge size (100 GB), so we are not able to go with a backup and restore mechanism, which is a time-consuming process.
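One caveat to hedge the plan with: MySQL's documented policy is to upgrade one release series at a time, so 5.1 to 5.6 over the same data directory is not a supported jump; the usual route is 5.1 -> 5.5 -> 5.6, running mysql_upgrade at each hop. A sketch, with paths as placeholders:
rem install 5.5 pointed at the 5.1 data directory, then:
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysql_upgrade.exe" -u root -p
rem repeat with 5.6 over the same data directory, then:
"C:\Program Files\MySQL\MySQL Server 5.6\bin\mysql_upgrade.exe" -u root -p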
Phanindra
(1007 rep)
Jun 17, 2015, 01:49 PM
• Last activity: Jul 3, 2025, 11:02 AM
5
votes
1
answers
2634
views
Replication stops with GTID_NEXT error after creation/drop of memory table in mysql5.6
We have recently upgraded to mysql5.6.25 from mysql5.5.x/mysql5.1.x on our mysql-cluster.
Below is a brief snapshot of our architecture: https://i.sstatic.net/QCakk.jpg
Since we have upgraded and enabled gtid-mode, we have been intermittently getting slave errors similar to:
***Last_SQL_Error: Error 'When @@SESSION.GTID_NEXT is set to a GTID, you must explicitly set it to a different value after a COMMIT or ROLLBACK. Please check GTID_NEXT variable manual page for detailed explanation. Current @@SESSION.GTID_NEXT is 'd7e8990d-3a9e-11e5-8bc7-22000aa63d47:1466'.' on query. Default database: 'adplatform'. Query: 'create table X_new like X'***
Our observations are as below:
- These slave errors are resolved simply by restarting the slave.
- Such errors are always with Create/Drop of tables which have Memory Storage Engine.
- Errors on Complete-Slave (B) show up continuously at a fixed minute (the 39th) of the hour and have been repeating since we upgraded, almost a week ago.
- Errors on the Complete-Slave as well as the Partial-Slave are observed whenever their master is restarted.
- Cluster-1 and Cluster-2 have centos machines and Cluster-3 has ubuntu machines. Slaves on centos machines also fail with the same error whenever their master (C/D) is restarted, but slaves on ubuntu machines do not fail!
We have temporarily been able to live with this issue by setting up an action-script on our monitoring system which fires on slave error alert on any machine.
A look into the gtid_next section of the replication-options doc of mysql tells the following:
> Prior to MySQL 5.6.20, when GTIDs were enabled but gtid_next was not
> AUTOMATIC, DROP TABLE did not work correctly when used on a
> combination of nontemporary tables with temporary tables, or of
> temporary tables using transactional storage engines with temporary
> tables using nontransactional storage engines. In MySQL 5.6.20 and
> later, DROP TABLE or DROP TEMPORARY TABLE fails with an explicit error
> when used with either of these combinations of tables. (Bug #17620053)
This seems related to my issue but still does not explain my scenario.
Any hints/direction to solve the issue would be greatly appreciated...
**EDIT :**
I managed to find a similar, recently reported bug in mysql (#77729), the description of which is as follows:
https://bugs.mysql.com/bug.php?id=77729
> When you have table with Engine MEMORY working on replication master,
> mysqld injects "DELETE" statement in binary logs on first access query
> to this table. This insures consistency of data on replicating slaves.
>
> If replication is GTID ROW based, this inserted "DELETE" breaks
> replication. Logged event is in STATEMENT format and do not generate
> correct SET GTID_NEXT statements in binary log.
Unfortunately, the status of this bug is marked as "Can't Repeat"...
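For reference, the "restarting the slave" workaround mentioned above is just the following on the failing slave; nothing is skipped, the SQL thread simply re-evaluates the transaction on restart:
STOP SLAVE;
START SLAVE;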
raman2887
(51 rep)
Aug 5, 2015, 08:05 AM
• Last activity: Jul 1, 2025, 11:02 PM
1
votes
2
answers
4621
views
Mysql upgrade 5.7.22 to 8.0, server not starting after upgrade
I followed the upgrade guide, Upgrading MySQL.
The already installed version of mysql is 5.7.22. I verified all upgrade prerequisites for the MySQL 5.7 installation and ran
SET GLOBAL innodb_fast_shutdown = 1;
then shut down the server using the command below:
mysqladmin -u root -p shutdown
Steps followed while upgrading:
cd /tmp/ && wget https://dev.mysql.com/get/mysql-apt-config_0.8.10-1_all.deb
dpkg -i mysql-apt-config_0.8.10-1_all.deb
(Reading database ... 8438 files and directories currently installed.)
Preparing to unpack mysql-apt-config_0.8.10-1_all.deb ...
Unpacking mysql-apt-config (0.8.10-1) over (0.8.10-1) ...
Setting up mysql-apt-config (0.8.10-1) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
Configuring mysql-apt-config
----------------------------
MySQL APT Repo features MySQL Server along with a variety of MySQL components. You may select the appropriate product to choose the version
that you wish to receive.
Once you are satisfied with the configuration then select last option 'Ok' to save the configuration, then run 'apt-get update' to load
package list. Advanced users can always change the configurations later, depending on their own needs.
1. MySQL Server & Cluster (Currently selected: mysql-5.7) 3. MySQL Preview Packages (Currently selected: Disabled)
2. MySQL Tools & Connectors (Currently selected: Enabled) 4. Ok
Which MySQL product do you wish to configure? 1
This configuration program has determined that mysql-5.7 is configured on your system, and has highlighted the most appropriate repository
package. If you are not sure which version to install, do not change the auto-selected version. Advanced users can always change the version
as needed later. Note that MySQL Cluster also contains MySQL Server.
1. mysql-5.7 2. mysql-8.0 3. mysql-cluster-7.5 4. mysql-cluster-7.6 5. None
Which server version do you wish to receive? 2
MySQL APT Repo features MySQL Server along with a variety of MySQL components. You may select the appropriate product to choose the version
that you wish to receive.
Once you are satisfied with the configuration then select last option 'Ok' to save the configuration, then run 'apt-get update' to load
package list. Advanced users can always change the configurations later, depending on their own needs.
1. MySQL Server & Cluster (Currently selected: mysql-8.0) 3. MySQL Preview Packages (Currently selected: Disabled)
2. MySQL Tools & Connectors (Currently selected: Enabled) 4. Ok
Which MySQL product do you wish to configure? 4
OK
apt-get update
apt-get install mysql-server
Configuring mysql-community-server
----------------------------------
MySQL 8 uses a new authentication based on improved SHA256-based password methods. It is recommended that all new MySQL Server installations
use this method going forward. This new authentication plugin requires new versions of connectors and clients, with support for this new 8
authentication (caching_sha2_password). Currently MySQL 8 Connectors and community drivers built with libmysqlclient21 support this new
method. Clients built with older versions of libmysqlclient may not be able to connect to the new server.
To retain compatibility with older client software, the default authentication plugin can be set to the legacy value (mysql_native_password)
This should only be done if required third-party software has not been updated to work with the new authentication method. The change will be
written to the file /etc/mysql/mysql.conf.d/default-auth-override.cnf
After installation, the default can be changed by setting the default_authentication_plugin server setting.
1. Use Strong Password Encryption (RECOMMENDED) 2. Use Legacy Authentication Method (Retain MySQL 5.x Compatibility)
Select default authentication plugin 2
After successful installation, I am trying to start the mysql server using the old mysql data dir, but it's not starting:
mysqld_safe --user=mysql --datadir=/var/lib/mysql
2018-07-12T06:46:26.378138Z mysqld_safe Logging to '/var/log/mysql/error.log'.
2018-07-12T06:46:26.391934Z mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
2018-07-12T06:46:26.396613Z mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
and there is no new error in the mysql error.log:
2018-07-12T06:42:34.126621Z 0 [Note] Shutting down plugin 'INNODB_TRX'
2018-07-12T06:42:34.126624Z 0 [Note] Shutting down plugin 'InnoDB'
2018-07-12T06:42:34.126724Z 0 [Note] InnoDB: FTS optimize thread exiting.
2018-07-12T06:42:34.126835Z 0 [Note] InnoDB: Starting shutdown...
2018-07-12T06:42:34.227213Z 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
2018-07-12T06:42:34.227742Z 0 [Note] InnoDB: Buffer pool(s) dump completed at 180712 6:42:34
2018-07-12T06:42:35.902099Z 0 [Note] InnoDB: Shutdown completed; log sequence number 2589862
2018-07-12T06:42:35.903598Z 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
2018-07-12T06:42:35.903608Z 0 [Note] Shutting down plugin 'MyISAM'
2018-07-12T06:42:35.903616Z 0 [Note] Shutting down plugin 'sha256_password'
2018-07-12T06:42:35.903618Z 0 [Note] Shutting down plugin 'mysql_native_password'
2018-07-12T06:42:35.903719Z 0 [Note] Shutting down plugin 'binlog'
2018-07-12T06:42:35.903965Z 0 [Note] /usr/sbin/mysqld: Shutdown complete
2018-07-12T06:42:35.978195Z mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
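Since mysqld_safe exits without writing anything new to the configured error log, one hedged diagnostic is to start the binary in the foreground with the config file bypassed, so the real startup failure lands on the terminal (Debian-style path assumed; --no-defaults skips all of my.cnf, so this is for diagnosis only, not normal operation):
sudo /usr/sbin/mysqld --no-defaults --user=mysql --datadir=/var/lib/mysql
If it starts cleanly this way, the config is the likely culprit; options removed in 8.0 (query_cache_size, for example) make the server abort on startup before it reaches the log.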
rajesh
(119 rep)
Jul 12, 2018, 07:09 AM
• Last activity: Jun 28, 2025, 12:18 AM
0
votes
1
answers
4979
views
Drop tables but space not claimed in postgres 12
I upgraded Postgresql 9.5 to Postgresql 12.4 a few days ago using the pg_upgrade utility with the link (-k) option.
So basically I have two data directories, i.e. one is the old data directory (v9.5) and the current one is in running state (v12.4).
Yesterday I dropped two tables of size 700GB and 300GB.
After connecting to postgres using the psql utility I can see that the size of the database whose tables were dropped has decreased (with \l+), but what is making me worry is that only ~50GB has been freed from the storage partition.
I have run vacuumdb on that database only, but no luck. I have checked whether any deleted open file is there at the OS level using lsof, but there is none.
Note: I still have not deleted the old data directory (v9.5). I am not sure if deleting it will impact my running postgres 12, as I used the 'link' option in pg_upgrade.
Looking for the solution.
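A likely explanation given the -k option: pg_upgrade in link mode hard-links the data files, so the old 9.5 directory and the running 12.4 directory share the same files on disk, and dropping a table in the new cluster removes only one of the two links; the space goes back to the OS when the last link is gone. pg_upgrade leaves a helper script for this in the directory it was run from, and once the new cluster has been started the old one is unusable anyway:
# removes the old 9.5 data directory, and with it the remaining hard links
./delete_old_cluster.sh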
erTugRul
(153 rep)
Nov 25, 2020, 01:31 AM
• Last activity: Jun 9, 2025, 06:09 AM
0
votes
1
answers
233
views
Mariadb not using indexes post upgrade to 10.5.23 from 10.2
We upgraded our MariaDB server from 10.2 to 10.5.23 and have since seen a significant CPU spike, with almost all queries landing in the slow query log. On investigation, we found that queries that were using indexes on 10.2 were not using them on 10.5.
I've read a lot of material today saying that 10.2 may have been using the right indexes while 10.5 might NOT be using the same ones, hence the issue, which I fail to understand, as it's a straightforward index.
For example, this is the query with EXPLAIN on 10.2:
MariaDB [client_db_17843]> explain
    SELECT p0_.ID AS ID0, c1_.Name AS Name1, c1_.Surname AS Surname2, c1_.CompanyName AS CompanyName3,
           c1_.MainContact AS MainContact4, c1_.TableType AS TableType5, p0_.ParentPropertyID AS ParentPropertyID6,
           p0_.AddressLine1 AS AddressLine17, p0_.AddressLine2 AS AddressLine28, p0_.Postcode AS Postcode9,
           c2_.EmailAddress AS EmailAddress10, c3_.TelephoneNumber AS TelephoneNumber11, c3_.CountryCode AS CountryCode12,
           c4_.TelephoneNumber AS TelephoneNumber13, c4_.CountryCode AS CountryCode14, c5_.TelephoneNumber AS TelephoneNumber15,
           c5_.CountryCode AS CountryCode16, s6_.Description AS Description17
    FROM Properties p0_
    INNER JOIN Contacts c1_ ON (c1_.TableType = 2 AND c1_.TablePKID = p0_.ID AND c1_.Deleted = 0)
    LEFT JOIN ContactsTelephones c3_ ON (c3_.ContactID = c1_.ID AND c3_.Deleted = 0 AND c3_.Settings_TypesOfTelephoneNumberID = 2)
    LEFT JOIN ContactsTelephones c4_ ON (c4_.ContactID = c1_.ID AND c4_.Deleted = 0 AND c4_.Settings_TypesOfTelephoneNumberID = 1)
    LEFT JOIN ContactsTelephones c5_ ON (c5_.ContactID = c1_.ID AND c5_.Deleted = 0 AND c5_.Settings_TypesOfTelephoneNumberID <> 1 AND c5_.Settings_TypesOfTelephoneNumberID <> 2)
    LEFT JOIN ContactsEmails c2_ ON (c2_.ContactID = c1_.ID AND c2_.Deleted = 0)
    LEFT JOIN Settings_Titles s6_ ON (s6_.ID = c1_.Settings_TitleID)
    WHERE (c1_.MainContact = 1 OR c1_.MainContact = 0) AND p0_.Deleted = 0
    GROUP BY p0_.ID ORDER BY p0_.ID DESC LIMIT 10 OFFSET 0;
+------+-------------+-------+--------+---------------------------------------------------------------------+----------------------+---------+--------------------------------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+--------+---------------------------------------------------------------------+----------------------+---------+--------------------------------------+------+-------------+
| 1 | SIMPLE | c1_ | index | IX_Contacts_idx | IX_Contacts_idx | 10 | NULL | 10 | Using where |
| 1 | SIMPLE | p0_ | eq_ref | PRIMARY | PRIMARY | 4 | client_db_17843.c1_.TablePKID | 1 | Using where |
| 1 | SIMPLE | c3_ | ref | IDX_5FD82FF8CDD4564D,IDX_5FD82FF8A9B070B8,ix_Contactstelephones_idx | IDX_5FD82FF8CDD4564D | 5 | client_db_17843.c1_.ID | 1 | Using where |
| 1 | SIMPLE | c4_ | ref | IDX_5FD82FF8CDD4564D,IDX_5FD82FF8A9B070B8,ix_Contactstelephones_idx | IDX_5FD82FF8CDD4564D | 5 | client_db_17843.c1_.ID | 1 | Using where |
| 1 | SIMPLE | c5_ | ref | IDX_5FD82FF8CDD4564D,IDX_5FD82FF8A9B070B8,ix_Contactstelephones_idx | IDX_5FD82FF8CDD4564D | 5 | client_db_17843.c1_.ID | 1 | Using where |
| 1 | SIMPLE | c2_ | ref | IDX_1EC6F1D5CDD4564D,ix_Contactsemails_idx | IDX_1EC6F1D5CDD4564D | 5 | client_db_17843.c1_.ID | 1 | Using where |
| 1 | SIMPLE | s6_ | eq_ref | PRIMARY | PRIMARY | 4 | client_db_17843.c1_.Settings_TitleID | 1 | Using where |
+------+-------------+-------+--------+---------------------------------------------------------------------+----------------------+---------+--------------------------------------+------+-------------+
7 rows in set (0.00 sec)
The same on 10.5 does this:
MariaDB [client_db_17843]> explain
    SELECT p0_.ID AS ID0, c1_.Name AS Name1, c1_.Surname AS Surname2, c1_.CompanyName AS CompanyName3,
           c1_.MainContact AS MainContact4, c1_.TableType AS TableType5, p0_.ParentPropertyID AS ParentPropertyID6,
           p0_.AddressLine1 AS AddressLine17, p0_.AddressLine2 AS AddressLine28, p0_.Postcode AS Postcode9,
           c2_.EmailAddress AS EmailAddress10, c3_.TelephoneNumber AS TelephoneNumber11, c3_.CountryCode AS CountryCode12,
           c4_.TelephoneNumber AS TelephoneNumber13, c4_.CountryCode AS CountryCode14, c5_.TelephoneNumber AS TelephoneNumber15,
           c5_.CountryCode AS CountryCode16, s6_.Description AS Description17
    FROM Properties p0_
    INNER JOIN Contacts c1_ ON (c1_.TableType = 2 AND c1_.TablePKID = p0_.ID AND c1_.Deleted = 0)
    LEFT JOIN ContactsTelephones c3_ ON (c3_.ContactID = c1_.ID AND c3_.Deleted = 0 AND c3_.Settings_TypesOfTelephoneNumberID = 2)
    LEFT JOIN ContactsTelephones c4_ ON (c4_.ContactID = c1_.ID AND c4_.Deleted = 0 AND c4_.Settings_TypesOfTelephoneNumberID = 1)
    LEFT JOIN ContactsTelephones c5_ ON (c5_.ContactID = c1_.ID AND c5_.Deleted = 0 AND c5_.Settings_TypesOfTelephoneNumberID <> 1 AND c5_.Settings_TypesOfTelephoneNumberID <> 2)
    LEFT JOIN ContactsEmails c2_ ON (c2_.ContactID = c1_.ID AND c2_.Deleted = 0)
    LEFT JOIN Settings_Titles s6_ ON (s6_.ID = c1_.Settings_TitleID)
    WHERE (c1_.MainContact = 1 OR c1_.MainContact = 0) AND p0_.Deleted = 0
    GROUP BY p0_.ID ORDER BY p0_.ID DESC LIMIT 10 OFFSET 0;
+------+-------------+-------+------------+---------------------------------------------------------------------+-------------------------------------------+---------+-------------------------------+--------+-------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+------------+---------------------------------------------------------------------+-------------------------------------------+---------+-------------------------------+--------+-------------------------------------------------+
| 1 | SIMPLE | c1_ | ALL | IX_Contacts_idx | NULL | NULL | NULL | 159293 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | p0_ | eq_ref | PRIMARY | PRIMARY | 4 | client_db_17843.c1_.TablePKID | 1 | Using where |
| 1 | SIMPLE | c3_ | ref | IDX_5FD82FF8CDD4564D,IDX_5FD82FF8A9B070B8,ix_Contactstelephones_idx | IDX_5FD82FF8CDD4564D | 5 | client_db_17843.c1_.ID | 1 | Using where |
| 1 | SIMPLE | c4_ | ref | IDX_5FD82FF8CDD4564D,IDX_5FD82FF8A9B070B8,ix_Contactstelephones_idx | IDX_5FD82FF8CDD4564D | 5 | client_db_17843.c1_.ID | 1 | Using where |
| 1 | SIMPLE | c5_ | ref|filter | IDX_5FD82FF8CDD4564D,IDX_5FD82FF8A9B070B8,ix_Contactstelephones_idx | IDX_5FD82FF8CDD4564D|IDX_5FD82FF8A9B070B8 | 5|5 | client_db_17843.c1_.ID | 1 (0%) | Using where; Using rowid filter |
| 1 | SIMPLE | c2_ | ref | IDX_1EC6F1D5CDD4564D,ix_Contactsemails_idx | IDX_1EC6F1D5CDD4564D | 5 | client_db_17843.c1_.ID | 1 | Using where |
| 1 | SIMPLE | s6_ | ALL | PRIMARY | NULL | NULL | NULL | 11 | Using where; Using join buffer (flat, BNL join) |
+------+-------------+-------+------------+---------------------------------------------------------------------+-------------------------------------------+---------+-------------------------------+--------+-------------------------------------------------+
7 rows in set (0.001 sec)
It's doing a full table scan on the Contacts table, even though the suggested index IX_Contacts_idx does have the right columns that are in the ON condition in the query.
We have read through a few bits of documentation but are unsure if there are any mariadb-optimizer-specific hints that we are missing on 10.5.x.
Any pointers will be greatly appreciated, as performance has been very bad since the upgrade.
Operating System: Rocky Linux 8.8
MariaDB Version: 10.5.23
CPU: 52 cores (we increased this from 20 cores this morning, since the box was getting pegged a lot and we had to increase it to sustain the load)
Memory: 64GB
All tables are INNODB
innodb_params:
MariaDB [client_db_17843]> show variables like 'innodb_buffer%';
+-------------------------------------+----------------+
| Variable_name | Value |
+-------------------------------------+----------------+
| innodb_buffer_pool_chunk_size | 134217728 |
| innodb_buffer_pool_dump_at_shutdown | ON |
| innodb_buffer_pool_dump_now | OFF |
| innodb_buffer_pool_dump_pct | 25 |
| innodb_buffer_pool_filename | ib_buffer_pool |
| innodb_buffer_pool_instances | 1 |
| innodb_buffer_pool_load_abort | OFF |
| innodb_buffer_pool_load_at_startup | ON |
| innodb_buffer_pool_load_now | OFF |
| innodb_buffer_pool_size | 19327352832 |
+-------------------------------------+----------------+
10 rows in set (0.000 sec)
Optimizer trace for the above is: https://pastebin.com/NtJHf9nw
PS: This is just one query that is slow. Overall, the performance has gone down so badly that we have more than 20x the slow queries we had in mariadb 10.2. I just used the above as an example for investigation.
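Two hedged starting points for the example above: refresh the engine-independent statistics, and test whether the rowid filter visible on the c5_ plan line (an optimization that did not exist in 10.2) is implicated by switching it off for a session; table and switch names as in the plans above:
ANALYZE TABLE Contacts PERSISTENT FOR ALL;   -- rebuild engine-independent stats
SET SESSION optimizer_switch = 'rowid_filter=off';
-- re-run the EXPLAIN above and compare the c1_ and c5_ lines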
Thanks
Raja
rajasaur
(111 rep)
Nov 20, 2023, 01:12 PM
• Last activity: Jun 7, 2025, 11:01 PM
4
votes
0
answers
199
views
Postgres: Queries are too slow after upgrading to PG17 from PG15
Most of the queries got slower after upgrading our postgres from version 15 to 17 using pg_upgrade. I reconfirmed that vacuum and analyze were all taken care of.
To debug, instead of the upgrade, I installed two instances, one with postgres 15 and another with postgres 17, with the same application dump restored.
Now, surprisingly, one of the queries I took from the application, which used to execute in 2s on PG15, is now taking 1min+ on PG17. I also observed that some operations involving DML slowed down too on PG17.
The explain plans of the two queries are almost the same; all the joins and paths used are exactly the same.
Could anybody please provide some insights here?
**PG15 Plan:**
https://explain.depesz.com/s/5PGX
**PG17 Plan:**
https://explain.depesz.com/s/27vD
Update: I installed PG16 with the same dump and verified that everything seems normal there, in fact better than PG15, so I just want to rule out the possibility of a PG16 impact here.
**PG16 plan:**
https://explain.depesz.com/s/v0Gc
Sajith P Shetty
(312 rep)
May 13, 2025, 05:52 AM
• Last activity: May 14, 2025, 04:05 AM
1
votes
2
answers
4424
views
All instances upgrade from Command Prompt
I want to create a cmd script to upgrade the sql servers in the organization (mostly 2014) to 2016. I've read the Microsoft documentation Install SQL Server 2016 from the Command Prompt regarding this issue.
The problem is that there is no option written there to upgrade all server instances like there is for installing updates (Installing Updates from the Command Prompt using /allinstances).
Is there any way to auto-upgrade all instances on the server using one command line?
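As far as I know there is no upgrade analogue of /allinstances, so the nearest approximation is one quiet setup invocation per instance; a cmd sketch with placeholder instance names:
setup.exe /q /ACTION=Upgrade /INSTANCENAME=MSSQLSERVER /IACCEPTSQLSERVERLICENSETERMS
setup.exe /q /ACTION=Upgrade /INSTANCENAME=INST2 /IACCEPTSQLSERVERLICENSETERMS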
Michael Cherevko
(742 rep)
Dec 21, 2016, 09:50 AM
• Last activity: May 13, 2025, 01:08 PM
1
votes
1
answers
276
views
Percona-Server: slow TokuDB queries after upgrading from 5.6 to 5.7. ANALYZE TABLE doesn't resolve the problem
After upgrading from Percona-TokuDB 5.6.29-76.2 to 5.7.19-17 we see some very slow queries on some tables without primary keys but with multiple non-unique indexes. The box we migrated to is pretty well equipped (768 GB RAM, PCIe SSDs). We used mysql_upgrade after the migration.
After investigating https://dba.stackexchange.com/questions/135180/percona-5-7-tokudb-poor-query-performance-wrong-non-clustered-index-chosen we tried ANALYZE TABLE (even with RECOUNT_ROWS), REPAIR TABLE, and ALTER TABLE *** FORCE, without any effect.
Typical table structure:
CREATE TABLE `letter_archiv_12375` (
  `user_id` int(12) unsigned NOT NULL DEFAULT '0',
  `letter_id` mediumint(6) unsigned NOT NULL DEFAULT '0',
  `crypt_id` bigint(12) unsigned NOT NULL DEFAULT '0',
  `mailerror` tinyint(1) unsigned NOT NULL DEFAULT '0',
  `unsubscribe` tinyint(1) unsigned NOT NULL DEFAULT '0',
  `send_date` date NOT NULL,
  `code` varchar(255) NOT NULL DEFAULT '',
  KEY `crypt_id` (`crypt_id`),
  KEY `letter_id` (`letter_id`),
  KEY `user_id` (`user_id`)
) ENGINE=TokuDB
A simple query like the following takes 4 seconds on a table with 200m rows:
UPDATE hoovie_1.letter_archiv_14167
SET unsubscribe = 1
WHERE letter_id = "784547"
AND user_id = "2881564";
The cardinality values are correct. EXPLAIN will result in:
id select_type table partitions type possible_keys key key_len ref rows filtered Extra
1 UPDATE letter_archiv_14167 NULL range letter_id,user_id letter_id 3 const 1 100.00 Using where
The only solution is to remove and re-create at least one index. After dropping and re-creating the index letter_id the table will perform well (in 0.01 s).
The EXPLAIN will change to
id select_type table partitions type possible_keys key key_len ref rows filtered Extra
1 UPDATE letter_archiv_14167 NULL range user_id,letter_id user_id 4 const 99 100.00 Using where
We have some thousands of TokuDB tables in production - a performance loss of a factor of 300-500 is a problem.
So we are unsure about migrating to 5.7 - this behaviour could occur again even after re-creating all indexes.
Any ideas?
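For reference, the working workaround described above fits in a single statement per table, using the names from the example:
ALTER TABLE letter_archiv_14167 DROP KEY letter_id, ADD KEY letter_id (letter_id);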
Ralf Engler
(11 rep)
Dec 18, 2017, 05:55 PM
• Last activity: May 12, 2025, 12:00 AM
1
votes
1
answers
286
views
How to upgrade a MySQL database using numbered scripts based on a version field
I have a database that contains a version table with one value stored in it which is the database version number.
I would like to be able to update the database version automatically using numbered scripts.
If the database is version "6" and I have a directory of scripts numbered inconsistently as per the below:
001.sql, 02.sql, 3.sql, 4script.sql, 5.sql, 6.upgrade.sql, 7.sql, 8.data.sql, 009.updated.sql
What is the best way to only run scripts which are numbered greater than the version number and to do this in sequence?
Should I use an SQL script or create a bash script?
I am a little stuck on how to approach this, so any pointers or advice will be much appreciated...
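A minimal bash sketch of one approach, assuming the version table has a single integer column v, the mysql client authenticates via ~/.my.cnf, and a hypothetical database name mydb; sort -n orders the files by their leading number regardless of zero-padding:
#!/bin/bash
set -e
db=mydb
v=$(mysql -N -e "SELECT v FROM version" "$db")
for f in $(ls *.sql | sort -n); do
    n="${f%%[!0-9]*}"                 # leading digits of the filename, e.g. "009" for 009.updated.sql
    if [ -n "$n" ] && [ "$n" -gt "$v" ]; then
        echo "applying $f"
        mysql "$db" < "$f"
        mysql -e "UPDATE version SET v = $n" "$db"
    fi
done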
Mark Smith
(131 rep)
Aug 5, 2018, 08:32 AM
• Last activity: May 9, 2025, 11:08 PM
0
votes
1
answers
419
views
How can I create a sql dump from postgres that I can restore on a new server with no psql options?
I want to have an easy, repeatable process of upgrading a dockerized postgres database to a new version. Minor and patch versions are easy -- the container can simply be upgraded to the new version, and postgres will read the existing data directory without issues. Major versions are harder because the data directory format is not compatible between versions. The versions I am working with at present are 11 (old) and 14 (new).
I don't think I can use `pg_upgrade` because, from within an off-the-shelf container, I won't have access to the binary dirs for both versions, so I'm going with the backup-restore approach.
On my new version's container, I want to take advantage of the `/docker-entrypoint-initdb.d/` feature of many dockerhub images, where one can mount a set of script files into that directory from the host, and they will be executed the first time the container is run. So, if a `db_dump.sql` file, for example, is mounted into that directory, the container will run it with `psql -f` the first time it is run, and the contents of that script can be used to populate the data directory of the new, upgraded container.
The benefit of this would be that I don't have to do any manual fuddling with the upgraded container -- it will simply restore the dump when it launches.
My issue has been in generating a dump that will restore in the new container without errors. If I use `pg_dumpall` from my old version, the script file includes `CREATE ROLE postgres;` statements, which fail because the `postgres` role already exists in a fresh container. If I use `pg_dumpall --clean`, the script includes `DROP DATABASE` statements for the user databases, which fail because those databases (obviously) don't exist in the fresh container.
I did find this answer:
https://dba.stackexchange.com/questions/75033/how-to-restore-everything-including-postgres-role-from-pg-dumpall-backup
... which suggests to me that `sed`-ing the output of `pg_dumpall` to remove alterations to the `postgres` role might be the least worst option, but this feels like I must be missing a much better option.
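A sketch of that least-worst option, with the host as a placeholder: --if-exists (valid together with --clean) makes the DROP DATABASE statements tolerate a fresh container, and the sed expression drops the statements touching the pre-existing postgres role:
pg_dumpall --clean --if-exists -h old-db-host -U postgres \
    | sed -E '/^(CREATE|DROP|ALTER) ROLE (IF EXISTS )?postgres/d' \
    > db_dump.sql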
NReilingh
(785 rep)
Dec 7, 2022, 04:34 PM
• Last activity: May 8, 2025, 10:04 PM
2
votes
1
answers
44
views
"Duplicate entry" error when upgrading MariaDB 10.6 to 11.4
I was trying to upgrade my local development environment (Ubuntu 22.04 WSL) from MariaDB 10.6 to 11.4.
After updating the APT repo file, I ran an `apt update`, which updated the server version.
However, upon running `mariadb-upgrade`, I see this error:
Phase 1/8: Checking and upgrading mysql database
Processing databases
mysql
mysql.column_stats OK
mysql.columns_priv OK
mysql.db OK
mysql.event OK
mysql.func OK
mysql.global_priv OK
mysql.gtid_slave_pos OK
mysql.help_category OK
mysql.help_keyword OK
mysql.help_relation OK
mysql.help_topic OK
mysql.index_stats OK
mysql.innodb_index_stats OK
mysql.innodb_table_stats OK
mysql.plugin OK
mysql.proc OK
mysql.procs_priv OK
mysql.proxies_priv OK
mysql.roles_mapping OK
mysql.servers OK
mysql.spider_link_failed_log OK
mysql.spider_link_mon_servers OK
mysql.spider_table_crd OK
mysql.spider_table_position_for_recovery OK
mysql.spider_table_sts OK
mysql.spider_tables OK
mysql.spider_xa OK
mysql.spider_xa_failed_log OK
mysql.spider_xa_member OK
mysql.table_stats OK
mysql.tables_priv OK
mysql.time_zone OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.transaction_registry OK
Phase 2/8: Installing used storage engines... Skipped
Phase 3/8: Running 'mysql_fix_privilege_tables'
ERROR 1062 (23000) at line 30: Duplicate entry 'db_626suite-get_98_53_metadata-PROCEDURE' for key 'PRIMARY'
ERROR 1062 (23000) at line 256: Duplicate entry 'db_626suite-get_98_53_metadata-PROCEDURE' for key 'PRIMARY'
ERROR 1062 (23000) at line 304: Duplicate entry 'db_626suite-get_98_53_metadata-PROCEDURE' for key 'PRIMARY'
ERROR 1062 (23000) at line 331: Duplicate entry 'db_626suite-get_98_53_metadata-PROCEDURE' for key 'PRIMARY'
ERROR 1062 (23000) at line 343: Duplicate entry 'db_626suite-get_98_53_metadata-PROCEDURE' for key 'PRIMARY'
FATAL ERROR: Upgrade failed
I do not understand how it is possible that there are duplicate functions, nor how to solve the problem.
Since this is just a development environment, I simply tried dropping the database, but the error persists.
Maybe the new version performs some checks that the old one didn't; but how can I check for similar inconsistencies before running `mariadb-upgrade` in order to get a smooth update?
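One hedged way to look before upgrading: the duplicate key in the error matches the primary key of mysql.proc (db, name, type), the pre-11.x home of stored routines, so inspecting it directly for the named routine may reveal the colliding rows (names taken from the error above):
SELECT db, name, type, created, modified
FROM mysql.proc
WHERE db = 'db_626suite' AND name = 'get_98_53_metadata';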
Matteo Tassinari
(187 rep)
May 6, 2025, 02:13 PM
• Last activity: May 6, 2025, 02:55 PM
0
votes
1
answers
879
views
pg_upgrade fails on pg_restore: error: connection to server at "localhost" port 5433 failed: FATAL: password authentication failed for user "postgres"
I have a PostgreSQL 13 installation on a Windows 10 server (on port 5432) that I want to migrate to a new PG 16.2 installation.
To this end, I downloaded the Windows x86-64 installer for PostgreSQL 16.2 here: https://www.enterprisedb.com/downloads/postgres-postgresql-downloads
During the run of the installer I manually set up:
- a password for the default postgres user named postgres,
- the port of the cluster to 5433,
- the locale to en_US,
The PostgreSQL folder was successfully created at "C:\Program Files\PostgreSQL\16\" with everything it needs. Then it proposed to execute the Stack Builder utility to install add-ons, which I only used to install PostGIS.
To test the newly installed PostgreSQL, I added an entry in the %APPDATA%\postgresql\pgpass.conf file, as explained here: https://www.postgresql.org/docs/current/libpq-pgpass.html
The pgpass file looks as follows:
#hostname:port:database:username:password
localhost:5432:*:postgres:************************************
localhost:5433:*:postgres:************************************
Then I successfully connected to PG with this command (it correctly fetched the password from the pgpass file):
"C:\Program Files\PostgreSQL\16\bin\psql.exe" -d postgres://postgres@localhost:5433/postgres
psql (16.2)
WARNING: Console code page (437) differs from Windows code page (1252)
8-bit characters might not work correctly. See psql reference
page "Notes for Windows users" for details.
Type "help" for help.
postgres=# SELECT version();
version
------------------------------------------------------------
PostgreSQL 16.2, compiled by Visual C++ build 1937, 64-bit
(1 row)
I followed the steps as described here https://www.postgresql.org/docs/current/app-pgrestore.html to give pg_upgrade (16) a try:
1. I gracefully shut down the two PG services (both 13 and 16)
2. Because there is no postgres user at the operating system level, I created a folder at C:\tmp with read/write access to everyone, and from this folder I ran the following command from a cmd.exe terminal:
"C:\Program Files\PostgreSQL\16\bin\pg_upgrade" -b "C:\Program Files\PostgreSQL\13\bin" -B "C:\Program Files\PostgreSQL\16\bin" -d "C:\Program Files\PostgreSQL\13\data" -D "C:\Program Files\PostgreSQL\16\data" -p 5432 -P 5433 -U postgres --check
Performing Consistency Checks
-----------------------------
Checking cluster versions ok
Checking database user is the install user ok
Checking database connection settings ok
Checking for prepared transactions ok
Checking for system-defined composite types in user tables ok
Checking for reg* data types in user tables ok
Checking for contrib/isn with bigint-passing mismatch ok
Checking for incompatible "aclitem" data type in user tables ok
Checking for user-defined encoding conversions ok
Checking for user-defined postfix operators ok
Checking for incompatible polymorphic functions ok
*Clusters are compatible*
I decided to run the exact same command again after removing the --check flag, but it failed:
Performing Consistency Checks
-----------------------------
Checking cluster versions ok
Checking database user is the install user ok
Checking database connection settings ok
Checking for prepared transactions ok
Checking for system-defined composite types in user tables ok
Checking for reg* data types in user tables ok
Checking for contrib/isn with bigint-passing mismatch ok
Checking for incompatible "aclitem" data type in user tables ok
Checking for user-defined encoding conversions ok
Checking for user-defined postfix operators ok
Checking for incompatible polymorphic functions ok
Creating dump of global objects ok
Creating dump of database schemas
ok
Checking for presence of required libraries ok
Checking database user is the install user ok
Checking for prepared transactions ok
Checking for new cluster tablespace directories ok
If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.
Performing Upgrade
------------------
Setting locale and encoding for new cluster ok
Analyzing all rows in the new cluster ok
Freezing all rows in the new cluster ok
Deleting files from new pg_xact ok
Copying old pg_xact to new server ok
Setting oldest XID for new cluster ok
Setting next transaction ID and epoch for new cluster ok
Deleting files from new pg_multixact/offsets ok
Copying old pg_multixact/offsets to new server ok
Deleting files from new pg_multixact/members ok
Copying old pg_multixact/members to new server ok
Setting next multixact ID and offset for new cluster ok
Resetting WAL archives ok
Setting frozenxid and minmxid counters in new cluster ok
Restoring global objects in the new cluster ok
Restoring database schemas in the new cluster
template1
*failure*
Consult the last few lines of "C:/Program Files/PostgreSQL/16/data/pg_upgrade_output.d/20240415T170235.602/log/pg_upgrade_dump_1.log" for
the probable cause of the failure.
Failure, exiting
The mentioned pg_upgrade_dump_1.log file contains:
command: "C:/Program Files/PostgreSQL/16/bin/pg_dump" --port 5432 --username postgres --schema-only --quote-all-identifiers --binary-upgrade --format=custom --file="C:/Program Files/PostgreSQL/16/data/pg_upgrade_output.d/20240415T173939.552/dump/pg_upgrade_dump_1.custom" ^"dbname^=template1^" >> "C:/Program Files/PostgreSQL/16/data/pg_upgrade_output.d/20240415T173939.552/log/pg_upgrade_dump_1.log" 2>&1
command: "C:/Program Files/PostgreSQL/16/bin/pg_restore" --port 5433 --username postgres --clean --create --exit-on-error --verbose --dbname postgres "C:/Program Files/PostgreSQL/16/data/pg_upgrade_output.d/20240415T173939.552/dump/pg_upgrade_dump_1.custom" >> "C:/Program Files/PostgreSQL/16/data/pg_upgrade_output.d/20240415T173939.552/log/pg_upgrade_dump_1.log" 2>&1
pg_restore: connecting to database for restore
pg_restore: error: connection to server at "localhost" (::1), port 5433 failed: FATAL: password authentication failed for user "postgres"
password retrieved from file "C:\Users\\AppData\Roaming/postgresql/pgpass.conf"
I'm wondering why this automatic tool fails at this stage, because if I remove the PG 16 entry in the pgpass file, the pg_upgrade command fails much faster, which means that the pgpass file is correctly used at least for dumping the schemas, but apparently not for restoring them in the new PG 16 cluster.
Can anyone explain why exactly pg_upgrade fails, and how to solve this problem? I really ran both the PG 16 installer and the pg_upgrade tool in a vanilla way, without fancy options or setup, which means that the PG 16 config files (such as pg_hba.conf or postgresql.conf) were not touched after the installation.
The pg_hba.conf file of PG 16 is exactly the same as the one described here: https://stackoverflow.com/a/78258971/6630397
The most relevant part being:
# PostgreSQL Client Authentication Configuration File
# ===================================================
# comments (...)
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all scram-sha-256
# IPv4 local connections:
host all all 127.0.0.1/32 scram-sha-256
# IPv6 local connections:
host all all ::1/128 scram-sha-256
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all scram-sha-256
host replication all 127.0.0.1/32 scram-sha-256
host replication all ::1/128 scram-sha-256
The PG 13 pg_hba.conf file looks like this:
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all scram-sha-256
# IPv4 local connections:
host all all 127.0.0.1/32 md5
host all postgres 0.0.0.0/0 md5
# IPv6 local connections:
host all all ::1/128 md5
I'm therefore wondering if the issue might be related to a different password encoding... but the documentation of the pg_upgrade tool doesn't say anything about it.
#### Side question: How could I properly (i.e. ending with the same state as a fresh install) re-initialize the PG 16 cluster according to the following pg_upgrade sentence: "If pg_upgrade fails after this point, you must re-initdb the new cluster before continuing."?
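For the side question, a sketch under the assumption that nothing beyond the failed upgrade has touched the new cluster: stop the PG 16 service, remove (or rename) its data directory, and recreate it with initdb, mirroring the installer's choices from the question. Note that on Windows initdb refuses to run from an elevated prompt, and the installer may additionally handle service registration and directory ACLs that initdb alone does not:
rmdir /s /q "C:\Program Files\PostgreSQL\16\data"
"C:\Program Files\PostgreSQL\16\bin\initdb" -D "C:\Program Files\PostgreSQL\16\data" -U postgres -W --locale=en_US -E UTF8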
s.k
(424 rep)
Apr 15, 2024, 06:04 PM
• Last activity: May 3, 2025, 01:08 PM
0
votes
1
answers
326
views
Transactional replication upgrade from SQL Server 2017
We would like to do an in-place upgrade of our transactional replication topology from SQL Server 2017 to SQL Server 2019 with minimal downtime. The following is the setup; all the current instances are on SQL Server 2017 Enterprise:
* Server A has 2 databases serving as publisher
* Server B has 2 databases serving as subscriber
* Server C is the remote distributor.
From an in-place upgrade perspective, we have to upgrade distributor ---> publisher ---> subscriber, in that order.
I would like to know the rollback steps for the distributor database if the in-place upgrade to SQL Server 2019 fails and we have to revert to SQL Server 2017 with the minimal possible downtime.
Requesting some insights on the procedure to do the same.
Kevin
kevin
(133 rep)
Oct 8, 2020, 05:36 AM
• Last activity: May 3, 2025, 05:09 AM