
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answers
822 views
Redis docker container not coming up with custom config file
I'm using the below command to run the Redis docker container:

docker run -tid \
  -v /data1/REDIS_DOCKER_IMAGE/6379/redis.conf:/usr/local/etc/redis/6379/redis.conf \
  -p 6379:6379 --name node_6379 \
  redis:5.0.8 redis-server /usr/local/etc/redis/redis.conf

After I run this command, I check "docker ps" but it comes up empty. There are no logs shown by docker logs, so I don't know what's going wrong.
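One likely cause, sketched below with hedged assumptions: the command bind-mounts the config to /usr/local/etc/redis/6379/redis.conf but tells redis-server to load /usr/local/etc/redis/redis.conf, which does not exist in the container, so redis-server exits immediately and the detached container never appears in docker ps. A minimal corrected invocation, assuming the intent is to use the mounted file:

```shell
# Mount the config at the same path that is passed to redis-server
# (in the original command the two paths differ).
docker run -tid \
  -v /data1/REDIS_DOCKER_IMAGE/6379/redis.conf:/usr/local/etc/redis/redis.conf \
  -p 6379:6379 --name node_6379 \
  redis:5.0.8 redis-server /usr/local/etc/redis/redis.conf

# A container that exited still shows up with `docker ps -a`, and its
# startup error (if any) with `docker logs node_6379`.
```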
Vishal Sharma (71 rep)
Apr 8, 2020, 07:45 AM • Last activity: Aug 4, 2025, 08:03 PM
0 votes
1 answers
5576 views
Using pgagent with docker?
I wanted to dive into the world of PostgreSQL, so I installed both PostgreSQL and pgAdmin using Docker (with the Portainer web UI), and I found out that for automatic jobs you have to install pgAgent too, but I haven't been able to find any information on running it with Docker. I found this image: https://hub.docker.com/r/chiavegatto/postgres-pgagent but Portainer throws a 400 error code when trying to deploy the container, which I haven't been able to figure out, and I'm not quite sure how this setup would work either. I saw it executes the following command at the entrypoint:

pgagent -f hostaddr=postgres dbname=test user=postgres port=5432

but what if I want to schedule jobs on multiple databases?

- Has anyone been able to run pgagent with docker?
- Is there any image of postgresql that comes with the agent preinstalled?
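For reference, a hedged sketch (untested) of building pgAgent into the official postgres image instead of relying on a third-party one, assuming a Debian-based tag whose repositories package pgagent. One pgagent process connects to one maintenance database, but each job step defined there can name its own target database, which is the usual answer to the multiple-databases question.

```shell
# Hypothetical Dockerfile: official postgres image plus the distro's
# pgagent package (package availability is an assumption).
cat > Dockerfile <<'EOF'
FROM postgres:15
RUN apt-get update \
 && apt-get install -y --no-install-recommends pgagent \
 && rm -rf /var/lib/apt/lists/*
EOF
docker build -t postgres-pgagent .

# After the server is up and `CREATE EXTENSION pgagent;` has been run in the
# maintenance database, start the agent (connection parameters are examples):
docker exec -d postgres-pgagent \
  pgagent host=localhost dbname=postgres user=postgres port=5432
```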
Jose134 (11 rep)
Mar 2, 2023, 08:17 PM • Last activity: Jul 30, 2025, 11:02 PM
4 votes
3 answers
141 views
Can Standard Developer Edition be used in Docker?
I hear that the new [Standard Developer](https://learn.microsoft.com/en-us/sql/sql-server/editions-and-components-of-sql-server-2025?view=sql-server-ver17#sql-server-editions) edition is available in the current SQL Server 2025 preview. But how can I get it in Docker? [SQL Server 2022's documentation](https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-docker-container-deployment?view=sql-server-ver17&pivots=cs1-bash#production) shows us how to specify specific editions when creating a container.
docker run --name sqlenterprise \
-e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=' \
-e 'MSSQL_PID=EnterpriseCore' -p 1433:1433 \
-d mcr.microsoft.com/mssql/server:2022-latest
but what -e do I use for Standard Developer? In other words, what goes here?
docker pull mcr.microsoft.com/mssql/server:2025-latest

docker run -e "ACCEPT_EULA=Y" \
-e "MSSQL_SA_PASSWORD=YourPasswordMake~TSp3cial" \
-e "MSSQL_PID=?????????????????????????????????????????????" \
-p 1433:1433 \
--name sql2025P1 --hostname sql2025P1 \
-d mcr.microsoft.com/mssql/server:2025-latest

docker start sql2025P1
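Whatever MSSQL_PID value ends up being right, the running container will report which edition it actually accepted; a hedged check sketch (the sqlcmd path and -C flag assume the newer mssql-tools18 layout, which may differ by image version):

```shell
# Print the edition the instance actually started with.
docker exec -it sql2025P1 /opt/mssql-tools18/bin/sqlcmd -C \
  -S localhost -U sa -P 'YourPasswordMake~TSp3cial' \
  -Q "SELECT SERVERPROPERTY('Edition') AS Edition;"

# The startup log also records the edition/PID that was applied:
docker logs sql2025P1 | grep -i edition
```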
J. Mini (1225 rep)
Jun 9, 2025, 10:17 PM • Last activity: Jul 27, 2025, 01:45 PM
1 votes
1 answers
56 views
mariadb docker fails to Initialize database contents from dump file
I have a Nextcloud server set up that uses docker image mariadb:10.11. I use mysqldump to back up the database. Where I am having trouble is trying to restore that to a test system as described here: https://hub.docker.com/_/mariadb#initializing-the-database-contents. For the restore I added the extra service "adminer" so I could check it out. I also added a new volume to mount the dump file: /data/blue/nextcloud/db_backup/db_dump_nextcloud.sql:/docker-entrypoint-initdb.d/db_dump_nextcloud.sql. I then started it with docker-compose up -d adminer. On the first try I saw that there were 52 (I think) tables in the nextcloud database, so I thought it was good, but when starting Nextcloud it gave me errors about missing tables. A few days later I got around to trying it again and now it only gets part of one table loaded. The last time or two I tested by deleting the database volume and then just starting the database with docker-compose up db. In the logs I can see the data being inserted into one of the tables, followed by the Lost connection error:

...
db-1  | ('3507453_0','3507453_0',1,3507453,'<binary geometry blob>')
db-1  | --------------
db-1  |
db-1  | ERROR 2013 (HY000) at line 316271: Lost connection to server during query
db-1  | /usr/local/bin/docker-entrypoint.sh: line 298:    88 Segmentation fault      (core dumped) "$@" --skip-networking --default-time-zone=SYSTEM --socket="${SOCKET}" --wsrep_on=OFF --expire-logs-days=0 --skip-slave-start --loose-innodb_buffer_pool_load_at_startup=0
db-1  | 2025-07-22  2:31:34 0 [Note] InnoDB: Last binlog file './binlog.000001', position 272159596
db-1  | 2025-07-22  2:31:34 0 [Note] InnoDB: 128 rollback segments are active.
db-1  | 2025-07-22  2:31:34 0 [Note] InnoDB: Starting in background the rollback of recovered transactions
db-1  | 2025-07-22  2:31:34 0 [Note] InnoDB: Rollback of non-prepared transactions completed
db-1  | 2025-07-22  2:31:34 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
db-1  | 2025-07-22  2:31:34 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
db-1  | 2025-07-22  2:31:34 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
db-1  | 2025-07-22  2:31:34 0 [Note] InnoDB: log sequence number 206149883; transaction id 298
db-1  | 2025-07-22  2:31:34 0 [Note] Plugin 'FEEDBACK' is disabled.
db-1  | 2025-07-22  2:31:34 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
db-1  | 2025-07-22  2:31:34 0 [Note] InnoDB: Cannot open '/var/lib/mysql/ib_buffer_pool' for reading: No such file or directory
db-1  | 2025-07-22  2:31:34 0 [Note] Recovering after a crash using binlog
db-1  | 2025-07-22  2:31:34 0 [Note] Starting table crash recovery...
db-1  | 2025-07-22  2:31:34 0 [Note] InnoDB: Starting recovery for XA transactions...
db-1  | 2025-07-22  2:31:34 0 [Note] InnoDB: Transaction 295 in prepared state after recovery
db-1  | 2025-07-22  2:31:34 0 [Note] InnoDB: Transaction contains changes to 3086 rows
db-1  | 2025-07-22  2:31:34 0 [Note] InnoDB: 1 transactions in prepared state after recovery
db-1  | 2025-07-22  2:31:34 0 [Note] Found 1 prepared transaction(s) in InnoDB
db-1  | 2025-07-22  2:31:34 0 [Note] Crash table recovery finished.
db-1  | 2025-07-22  2:31:34 0 [Note] Server socket created on IP: '0.0.0.0'.
db-1  | 2025-07-22  2:31:34 0 [Note] Server socket created on IP: '::'.
db-1  | 2025-07-22  2:31:34 0 [Note] mariadbd: ready for connections.
db-1  | Version: '10.11.13-MariaDB-ubu2204-log'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution

The relevant part of my compose file is as follows. The db service is set up the same as on the source container except for the new volume.

services:
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
    depends_on:
      - db

  db:
    image: mariadb:10.11
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW --log_bin_trust_function_creators=1
    restart: always
    volumes:
      - db:/var/lib/mysql
      # For mysqldump conf file for backups.
      - /home/monterey/confg/restic/db-env.cnf:/app/db-env.cnf:ro
      - /data/blue/nextcloud/db_backup/db_dump_nextcloud.sql:/docker-entrypoint-initdb.d/db_dump_nextcloud.sql
    environment:
      - MARIADB_AUTO_UPGRADE=1
      - MARIADB_DISABLE_UPGRADE_BACKUP=1
    env_file:
      - db.env

  redis:
    image: redis:alpine
    restart: always

  app:
    #image: nextcloud:apache
    build: ./nextcloud
    restart: always
    volumes: ...

Edit - I did a --no-data dump and was able to restore that. It gave me 195 tables and took a little over a minute. I did not see it dumping out every line of SQL, so I wonder if, when it was doing that before, it was trying to signal an error.
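One way to narrow this down, sketched with hedged assumptions about service, credential, and database names: skip the /docker-entrypoint-initdb.d mechanism (which runs under the init-time settings visible in the segfaulting command line) and import the dump into the already-running server, where the failing statement and server-side limits such as max_allowed_packet are easier to observe.

```shell
# Start only the database with an empty datadir, then import interactively.
docker compose up -d db

# Pipe the dump through the client inside the container; the first statement
# that kills the connection will be reported with its line number.
docker compose exec -T db mariadb -u root -p"$MARIADB_ROOT_PASSWORD" \
  nextcloud < /data/blue/nextcloud/db_backup/db_dump_nextcloud.sql
```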
JohnT (11 rep)
Jul 22, 2025, 02:45 AM • Last activity: Jul 22, 2025, 03:46 PM
0 votes
0 answers
53 views
How do I restore my data from hacked database
My database was just hacked. I tried doing some work and could not log in with the created user credentials. When I checked the database, I saw an intruder was removing the users and had created a replication_user. I tried removing them, and before I could run a backup, the intruder had changed the root password. I have stopped the server (the MySQL server is running in a Docker container), and I have backed up the volume containing all the files, including those in var/lib/mysql. Please, how do I restore it into a new database? The last full backup was from a few days ago.
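A hedged recovery sketch, assuming the copied volume holds an intact datadir for the same MySQL major version (paths and image tag below are placeholders): bring a copy of the datadir up in a throwaway container with grant checks disabled, take a logical dump, and load that dump into a freshly initialized server with new credentials.

```shell
# Work on a COPY of the backed-up datadir, never the original.
docker run -d --name mysql-rescue \
  -v /backups/mysql-datadir-copy:/var/lib/mysql \
  mysql:8.0 --skip-grant-tables --skip-networking

# --skip-networking still allows local socket connections inside the container.
docker exec mysql-rescue sh -c \
  'mysqldump --all-databases --triggers --routines --events' > rescue_dump.sql

# Then initialize a brand-new server (new root password) and restore
# rescue_dump.sql into it -- minus the mysql system schema, so the
# intruder's accounts are not restored along with the data.
docker rm -f mysql-rescue
```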
Aijay (1 rep)
Jul 17, 2025, 06:22 PM
0 votes
1 answers
181 views
how to log in as SYS after starting containerized Oracle server
I am trying to start up an Oracle database server locally. My goal is to have a schema with a user that my application can then use. I am somewhat familiar with Oracle Database, but I am a beginner with DBA things (previously DBAs created the schema/user for me). So I used the "doctorkirk" image: https://registry.hub.docker.com/r/doctorkirk/oracle-19c

I followed the instructions and executed this command to get the image:

sudo docker pull doctorkirk/oracle-19c

then started up the image with this command:

sudo docker run --name oracle-19c -e ORACLE_PDB=orcl -e ORACLE_PWD=password \
  -v /opt/oracle/oradata:/opt/oracle/oradata \
  -p 11521:1521 -p 15500:5500 doctorkirk/oracle-19c

The output is this:

cat: /sys/fs/cgroup/memory/memory.limit_in_bytes: No such file or directory
cat: /sys/fs/cgroup/memory/memory.limit_in_bytes: No such file or directory
/opt/oracle/runOracle.sh: line 102: [: -lt: unary operator expected
ORACLE PASSWORD FOR SYS, SYSTEM AND PDBADMIN: password

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 15-JAN-2025 12:25:50
Copyright (c) 1991, 2020, Oracle. All rights reserved.
Starting /opt/oracle/product/19c/dbhome_1/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 19.0.0.0.0 - Production
System parameter file is /opt/oracle/product/19c/dbhome_1/network/admin/listener.ora
Log messages written to /opt/oracle/diag/tnslsnr/95fe0518c5bb/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                15-JAN-2025 12:25:50
Uptime                    0 days 0 hr. 0 min.
0 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /opt/oracle/product/19c/dbhome_1/network/admin/listener.ora Listener Log File /opt/oracle/diag/tnslsnr/95fe0518c5bb/listener/alert/log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521))) The listener supports no services The command completed successfully [WARNING] [DBT-06208] The 'SYS' password entered does not conform to the Oracle recommended standards. CAUSE: a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. b.The password entered is a keyword that Oracle does not recommend to be used as password ACTION: Specify a strong password. If required refer Oracle documentation for guidelines. [WARNING] [DBT-06208] The 'SYSTEM' password entered does not conform to the Oracle recommended standards. CAUSE: a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. b.The password entered is a keyword that Oracle does not recommend to be used as password ACTION: Specify a strong password. If required refer Oracle documentation for guidelines. Prepare for db operation 10% complete Copying database files 40% complete Creating and starting Oracle instance 42% complete 46% complete 50% complete 54% complete 60% complete Completing Database Creation 66% complete 69% complete 70% complete Executing Post Configuration Actions 100% complete Database creation complete. For details check the logfiles at: /opt/oracle/cfgtoollogs/dbca/ORCLCDB. Database Information: Global Database Name:ORCLCDB System Identifier(SID):ORCLCDB Look at the log file "/opt/oracle/cfgtoollogs/dbca/ORCLCDB/ORCLCDB.log" for further details. 
SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jan 15 12:33:00 2025 Version 19.9.1.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.9.1.0.0 SQL> System altered. SQL> System altered. SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.9.1.0.0 The Oracle base remains unchanged with value /opt/oracle ######################### DATABASE IS READY TO USE! ######################### The following output is now a tail of the alert.log: 2025-01-15T12:33:00.200091+00:00 AQPC started with pid=43, OS id=2837 Starting background process CJQ0 2025-01-15T12:33:00.279886+00:00 CJQ0 started with pid=48, OS id=2849 Completed: ALTER DATABASE OPEN 2025-01-15T12:33:00.663094+00:00 ALTER SYSTEM SET control_files='/opt/oracle/oradata/ORCLCDB/control01.ctl' SCOPE=SPFILE; 2025-01-15T12:33:00.669302+00:00 ALTER SYSTEM SET local_listener='' SCOPE=BOTH; 2025-01-15T12:33:00.765152+00:00 QPI: opatch file present, opatch QPI: qopiprep.bat file present 2025-01-15T12:33:02.444120+00:00 Shared IO Pool defaulting to 128MB. Trying to get it from Buffer Cache for process 2811. 2025-01-15T12:33:02.505232+00:00 Resize operation completed for file# 3, old size 450560K, new size 460800K 2025-01-15T12:33:02.579912+00:00 =========================================================== Dumping current patch information =========================================================== Patch Id: 29585399 Patch Description: OCW RELEASE UPDATE 19.3.0.0.0 (29585399) Patch Apply Time: 2019-04-18T07:21:33Z Bugs Fixed: 3,27222128,27572040,27604329,27760043,27877830,28302580,28470673, 28621543,28642469,28699321,28710663,28755846,28772816,28785321,28800508, 28808652,28815557,28847541,28847572,28870496,28871040,28874416,28877252, ... 
31781897,31792615,31796208,31796277,31816631,31820859,31833172,31867037, 31872230,31876368,31886547,31888148,31897786,31905033,31909295,31921267, 31927930,31935717,31952052,32007698,32010707,32050048,32069834,32089820, 32105135,32129659,32172777,32212635,32234161,32296941,32321765 =========================================================== 2025-01-15T12:37:18.357290+00:00 *********************************************************************** Fatal NI connect error 12537, connecting to: (ADDRESS=(PROTOCOL=tcp)(HOST=172.17.0.1)(PORT=55358)) VERSION INFORMATION: TNS for Linux: Version 19.0.0.0.0 - Production Oracle Bequeath NT Protocol Adapter for Linux: Version 19.0.0.0.0 - Production TCP/IP NT Protocol Adapter for Linux: Version 19.0.0.0.0 - Production Version 19.9.1.0.0 Time: 15-JAN-2025 12:37:18 Tracing not turned on. Tns error struct: ns main err code: 12537 TNS-12537: TNS:connection closed ns secondary err code: 12560 nt main err code: 507 TNS-00507: Connection closed nt secondary err code: 0 nt OS err code: 0 2025-01-15T12:37:18.362695+00:00 opiodr aborting process unknown ospid (3332) as a result of ORA-609 So to me it seems that the database is up and running, and the sys password is password. However, when I try to connect as sys (in order to create my schema) I have this error: ~/tools/instantclient_21_12$ ./sqlplus -S "sys/password@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=127.0.0.1)(Port=11521))(CONNECT_DATA=(SERVICE_NAME=orcl)))" ERROR: ORA-12514: TNS:listener does not currently know of service requested in connect descriptor In sqldeveloper I have the same error. What is my mistake?
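ORA-12514 usually means the listener simply has no service registered under the name in the connect string; a hedged diagnostic sketch (container name and password come from the question above):

```shell
# Ask the listener inside the container which service names it knows about.
docker exec -it oracle-19c bash -lc 'lsnrctl services'

# The creation log above names the database ORCLCDB, so if "orcl" is not in
# the listener's list, try the CDB service instead (easy-connect syntax):
./sqlplus sys/password@//127.0.0.1:11521/ORCLCDB as sysdba
```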
riskop (101 rep)
Jan 15, 2025, 03:44 PM • Last activity: Jul 13, 2025, 11:00 AM
0 votes
2 answers
81 views
How to upgrade MongoDB Arbiter node?
Normally I don't ask questions but I'm about at my wits' end and foresee myself struggling with this for a while. To the question reviewers: I have not found any questions referring to either the Mongo versions I'm having trouble with or my error specific to the arbiter node. **TLDR:** The arbiter is failing on an upgrade from v6 to v7 because of an invalid featureCompatibilityVersion, which I've read it should not care about or use. Am I missing something obvious relating to the configuration/upgrade of the arbiter node? **Long version:** I'm using Docker Compose to manage a replica set of Mongo databases consisting of 1 primary, 1 secondary, and 1 arbiter. They all use the docker.io/bitnami/mongodb image and have volumes to persist whatever data they have to write. My task is to upgrade their images from v5 to v7 using the [Mongo instructions](https://www.mongodb.com/docs/manual/release-notes/7.0-upgrade-replica-set/) . Upgrading from v5 to v6 was uneventful since v6 is backwards compatible and just required updating the image version. To upgrade to v7, I logged into the primary node's admin database and ran db.adminCommand( { setFeatureCompatibilityVersion: "6.0" } ) followed by db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } ) to confirm the change took. I updated the image versions in the compose file from v6 to v7 and restarted. The primary/secondary nodes are fine, but the arbiter spews this error before shutting down: > UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR: Location4926900: Invalid featureCompatibilityVersion document in admin.system.version: { _id: \"featureCompatibilityVersion\", version: \"5.0\" }. See https://docs.mongodb.com/master/release-notes/6.0-compatibility/#feature-compatibility . :: caused by :: Invalid feature compatibility version value '5.0'; expected '6.0' or '6.3' or '7.0'. See https://docs.mongodb.com/master/release-notes/6.0-compatibility/#feature-compatibility.) .
If the current featureCompatibilityVersion is below 6.0, see the documentation on upgrading at https://docs.mongodb.com/master/release-notes/6.0/#upgrade-procedures . Now I've read that the arbiter should contain no data and that the arbiter ignores the FCV in favor of the backwards compatibility reported by the binaries, which, if true, should be v6, not v5 as the error states. I tried to authenticate to the arbiter to set the FCV there, but I get "command setFeatureCompatibilityVersion requires authentication". Am I missing something obvious relating to the configuration/upgrade of the arbiter node?
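A hedged diagnostic sketch (container, volume, and credential names are placeholders): confirm what the data-bearing members report as FCV; since an arbiter keeps no replicated data, the stale { version: "5.0" } document it trips over lives in its own local files, so wiping the arbiter's volume and letting it rejoin the set empty is a commonly suggested workaround.

```shell
# FCV as seen by the primary (arbiters refuse setFeatureCompatibilityVersion
# and most authenticated commands).
mongosh "mongodb://root:secret@mongodb-primary:27017/admin" --eval \
  'db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })'

# Hypothetical reset of the arbiter: stop it, remove its data volume,
# and start it again so it re-registers with the replica set from scratch.
docker compose stop mongodb-arbiter
docker volume rm myproject_mongodb_arbiter_data   # placeholder volume name
docker compose up -d mongodb-arbiter
```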
RiverHeart (115 rep)
Mar 4, 2025, 05:54 PM • Last activity: Jul 7, 2025, 10:55 PM
2 votes
1 answers
1459 views
MySQL group replication plugin won't recognize host's IP address?
I have pulled the current mysql-server image (mysql-server) on 2 Ubuntu 18.04 machines and I want to set up group replication. This is how I start the mysql container on each machine (--server-id is 2 for the second machine):

docker run -d --name=gr-mysql \
  -v /var/lib/mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_ROOT_HOST=% \
  -p 3306:3306 -p 33061:33061 \
  mysql/mysql-server:8.0 \
  --port=3306 \
  --log-error-verbosity=3 \
  --server-id=1 \
  --log-bin='mysql-bin-1.log' \
  --enforce-gtid-consistency=ON \
  --log-slave-updates=ON \
  --gtid-mode=ON \
  --transaction-write-set-extraction=XXHASH64 \
  --binlog-checksum=NONE \
  --master-info-repository=TABLE \
  --relay-log-info-repository=TABLE \
  --plugin-load=group_replication.so \
  --relay-log-recovery=ON \
  --loose-group_replication_start_on_boot=OFF \
  --loose-group_replication_group_name=a_valid_uuid \
  --loose-group_replication_local_address=1.2.3.1:33061 \
  --loose-group_replication_group_seeds=1.2.3.1:33061,1.2.3.2:33061 \
  --loose-group_replication_single_primary_mode=OFF \
  --loose-group_replication_enforce_update_everywhere_checks=ON

Now I try to configure group replication on the first machine:

docker exec -t aerobase-mysql mysql -uroot -psecret \
  -e "SET @@GLOBAL.group_replication_bootstrap_group=1;" \
  -e "create user 'repl'@'%';" \
  -e "GRANT REPLICATION SLAVE ON *.* TO repl@'%';" \
  -e "flush privileges;" \
  -e "change master to master_user='repl', master_password='secret' for channel 'group_replication_recovery';" \
  -e "START GROUP_REPLICATION;" \
  -e "SET @@GLOBAL.group_replication_bootstrap_group=0;"

And I'm prompted with this error:

ERROR 3096 (HY000) at line 1: The START GROUP_REPLICATION command failed as there was an error when initializing the group communication layer.

Checking the container log, it seems like MySQL can't get the host's IP address:

2020-09-20T22:17:03.060972Z 220 [System] [MY-013587] [Repl] Plugin group_replication reported: 'Plugin 'group_replication' is starting.'
2020-09-20T22:17:03.061056Z 220 [Note] [MY-011716] [Repl] Plugin group_replication reported: 'Current debug options are: 'GCS_DEBUG_NONE'.'
2020-09-20T22:17:03.062114Z 221 [System] [MY-011565] [Repl] Plugin group_replication reported: 'Setting super_read_only=ON.'
2020-09-20T22:17:03.062187Z 220 [Note] [MY-011673] [Repl] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
2020-09-20T22:17:03.063052Z 220 [Note] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Debug messages will be sent to: asynchronous::/var/lib/mysql/GCS_DEBUG_TRACE'
2020-09-20T22:17:03.063205Z 220 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] There is no local IP address matching the one configured for the local node (1.2.3.1:33061).'
2020-09-20T22:17:03.063415Z 220 [ERROR] [MY-011674] [Repl] Plugin group_replication reported: 'Unable to initialize the group communication engine'
2020-09-20T22:17:03.063434Z 220 [ERROR] [MY-011637] [Repl] Plugin group_replication reported: 'Error on group communication engine initialization'
2020-09-20T22:17:03.063444Z 220 [Note] [MY-011649] [Repl] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
2020-09-20T22:17:03.063451Z 220 [ERROR] [MY-011718] [Repl] Plugin group_replication reported: 'Error calling group communication interfaces while trying to leave the group'
2020-09-20T22:17:03.063586Z 221 [System] [MY-011566] [Repl] Plugin group_replication reported: 'Setting super_read_only=OFF.'
In the ifconfig output I can see the host's IP address and that it matches what I entered for MySQL:

docker0: flags=4163 mtu 1500
        inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
        ether 01:23:45:67:89:00 txqueuelen 0 (Ethernet)
        RX packets 146483 bytes 312211022 (312.2 MB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 163698 bytes 12222756 (12.2 MB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens192: flags=4163 mtu 9000
        inet 1.2.3.1 netmask 255.255.255.0 broadcast 1.2.3.255
        ether 00:11:22:33:44:55 txqueuelen 13888 (Ethernet)
        RX packets 23255307 bytes 10382649061 (10.3 GB)
        RX errors 0 dropped 68 overruns 0 frame 0
        TX packets 393777 bytes 34493066 (34.4 MB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73 mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        loop txqueuelen 1000 (Local Loopback)
        RX packets 144 bytes 7948 (7.9 KB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 144 bytes 7948 (7.9 KB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth0b41841: flags=4163 mtu 1500
        ether 66:55:44:33:22:11 txqueuelen 0 (Ethernet)
        RX packets 944 bytes 2086507 (2.0 MB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 1058 bytes 81312 (81.3 KB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Before I can have group replication I need to make MySQL accept the host's IP. I tried adding an IP whitelist and even using host names instead of IP addresses, to no avail. I don't have firewall or SELinux enabled. Thanks! (Question has been moved from SO)
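One hedged interpretation of the GCS error: group_replication_local_address must match an interface inside the server's own network namespace, and in Docker's default bridge network the container only owns a 172.17.0.x address, never the host's 1.2.3.1. A sketch of the same docker run using the host's network namespace (most flags elided for brevity):

```shell
# With --network host the container shares the host's interfaces, so
# 1.2.3.1:33061 is a local address from mysqld's point of view and the
# -p port publications become unnecessary.
docker run -d --name=gr-mysql \
  --network host \
  -v /var/lib/mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_ROOT_HOST=% \
  mysql/mysql-server:8.0 \
  --port=3306 \
  --loose-group_replication_local_address=1.2.3.1:33061 \
  --loose-group_replication_group_seeds=1.2.3.1:33061,1.2.3.2:33061
  # ...remaining replication flags as in the original command
```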
towel (121 rep)
Sep 21, 2020, 06:27 AM • Last activity: Jul 2, 2025, 09:03 PM
0 votes
1 answers
216 views
How to restore a SQL Server bak file (Enabled FileStream ) to Docker version SQL Server?
I have SQL Server 2014 installed on Windows and enabled the FileStream feature. I backed up the database to a .bak file. Now I want to restore the file to my Docker container of SQL Server 2019. However, when I try to restore the .bak file, it fails. I searched for a solution and found out that the FileStream feature is not supported on the Docker version of SQL Server. Is there any way to restore the .bak file to my Docker version of SQL Server?
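A hedged way to confirm the diagnosis from inside the Linux container: RESTORE FILELISTONLY lists the files inside the backup, and a row with Type = 'S' is a FILESTREAM file, which SQL Server on Linux cannot restore (container name, paths, and password below are placeholders):

```shell
# Inspect the backup's file list; look for Type = 'S' (FILESTREAM).
docker exec -it sql2019 /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U sa -P 'YourPassword' \
  -Q "RESTORE FILELISTONLY FROM DISK = '/var/opt/mssql/backup/db.bak';"

# If FILESTREAM files are present, the usual route is to migrate that data
# (e.g. into varbinary(max) columns) on a Windows instance first, back up
# again, and restore that backup in the container.
```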
XBruce (1 rep)
Jun 17, 2023, 07:42 AM • Last activity: Jun 17, 2025, 05:04 AM
0 votes
1 answers
998 views
Docker postgres work_mem not using ram
I have set shared_buffers to 256MB. Using the pg_buffercache extension I see that all of the buffers are in use:

SELECT pg_size_pretty(COUNT(*)*8192) as used FROM pg_buffercache;

 used
----------------
 256 MB

Now the problem is that when I use docker stats to view the database container memory it shows:

NAME   CPU %   MEM USAGE / LIMIT   MEM %
db     0.00%   31.07MiB / 1GiB     3.03%

Where is the shared_buffers memory stored? Shouldn't it be in RAM and displayed in docker stats?
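A hedged explanation sketch: counting all rows of pg_buffercache always yields exactly shared_buffers, whether or not the pages were ever touched, and docker stats reports the cgroup's resident usage, which does not charge untouched (and shared) memory pages to the container. Filtering for buffers that actually hold a page gives a better picture (container and user names are placeholders):

```shell
# Only buffers with a relation assigned have ever been faulted in.
docker exec -it db psql -U postgres -c "
  SELECT pg_size_pretty(count(*) * 8192) AS touched
  FROM pg_buffercache
  WHERE relfilenode IS NOT NULL;"
```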
cdalxndr (161 rep)
Aug 4, 2021, 10:10 PM • Last activity: Jun 14, 2025, 02:00 PM
1 votes
1 answers
1053 views
How to Export MS SQL DB from Docker-Linux to Access format
I have a MS SQL database which is currently up and running on my Mac within a Docker container that is hosting the Ubuntu Linux version of MS SQL Server. I need to export the database to a format that can be imported into an Access (yes, *Access*) database. Alternatively, I would need to export all 300+ tables to CSV, and hopefully would not need to do them one by one. Is there any way to export all tables en masse to CSV? But this is an alternative option. I still prefer the Access-compatible export since our new SIS vendor apparently uses it. Thanks!
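On the CSV side, a hedged sketch of exporting every table en masse with sqlcmd + bcp against the container (server, credentials, and database name are placeholders; note that bcp's -c/-t produce plain comma-delimited text without CSV quoting of embedded commas):

```shell
SERVER=localhost,1433; DB=MyDb; USER=sa; PASS='YourPassword'
mkdir -p csv

# Enumerate user tables, then bulk-copy each one out in character mode.
sqlcmd -S "$SERVER" -U "$USER" -P "$PASS" -d "$DB" -h -1 -W -Q \
  "SET NOCOUNT ON; SELECT s.name + '.' + t.name
   FROM sys.tables t JOIN sys.schemas s ON s.schema_id = t.schema_id;" |
while read -r tbl; do
  [ -z "$tbl" ] && continue
  bcp "$DB.$tbl" out "csv/$tbl.csv" -c -t',' \
    -S "$SERVER" -U "$USER" -P "$PASS"
done
```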
zeeple (109 rep)
Mar 31, 2020, 07:20 PM • Last activity: Jun 10, 2025, 05:08 PM
4 votes
2 answers
1952 views
Restore MySQL dump to a container fails with ERROR 1114 "The table is full"
I am trying to restore a MySQL dump of size around 18GB to another MySQL server, which is running inside a container, using this command:

mysql -h example.com -u user -p matomo

ERROR 1114 (HY000) at line 7238: The table 'piwik_log_link_visit_action' is full

Many other small tables are copied successfully, but while copying this table it fails with the above error. The size of this table is more than 2GB. Based on different suggestions available on Stack Overflow, I tried each one but nothing worked. I tried adding 'autoextend' to the my.cnf file:

innodb_data_file_path=ibdata1:10M:autoextend

I also tried to increase the tmp_table_size and max_heap_table_size by adding the following parameters to the my.cnf file:

tmp_table_size=2G
max_heap_table_size=2G

Also, I made sure that the server (from where I am running the dump restore command) has enough space (more than 20GB of storage available). But nothing worked. I tried debugging this more and found that the docker container where MySQL is running has an *overlay* filesystem of size 5GB which starts getting filled, and as soon as it fills to 100%, I get the above error. The volume mounted on the container is more than 30GB in size. I am not sure where this *overlay* filesystem comes from in Docker, or where I can increase its size.

(screenshot: docker_size)

I can't even go inside the overlay directory and keep deleting or freeing the space. Can anyone please help me here.

### my.cnf file
[mysqladmin]
user=user1

[mysqld]
skip_name_resolve
explicit_defaults_for_timestamp
basedir=/opt/bitnami/mariadb
port=3306
tmpdir=/opt/bitnami/mariadb/tmp
socket=/opt/bitnami/mariadb/tmp/mysql.sock
pid_file=/opt/bitnami/mariadb/tmp/mysqld.pid
max_allowed_packet=256MB
bind_address=0.0.0.0
log_error=/opt/bitnami/mariadb/logs/mysqld.log
character_set_server=utf8
collation_server=utf8_general_ci
plugin_dir=/opt/bitnami/mariadb/plugin
innodb_data_file_path=ibdata1:10M:autoextend:max:10G
max_heap_table_size=2G
tmp_table_size=2G

[client]
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
default_character_set=UTF8
plugin_dir=/opt/bitnami/mariadb/plugin

[manager]
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
pid_file=/opt/bitnami/mariadb/tmp/mysqld.pid
!include /opt/bitnami/mariadb/conf/bitnami/my_custom.cnf
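Two things in the my.cnf above are worth checking before blaming the volume. First, the innodb_data_file_path line carries a :max:10G cap; if innodb_file_per_table is off, the shared tablespace stops growing at 10 GB, which raises exactly ERROR 1114. Second, tmpdir points inside the container, so large on-disk temp tables created during the restore land on the ~5 GB overlay layer rather than the 30 GB volume. A hedged sketch of sidestepping the overlay by mounting both the data directory and the tmpdir from the host (image name, paths, and host directories are assumptions to adapt):

```shell
# Sketch only -- container image, env var and host dirs are assumptions.
docker run -d --name mariadb \
  -v /data1/mariadb-data:/bitnami/mariadb \
  -v /data1/mariadb-tmp:/opt/bitnami/mariadb/tmp \
  -e MARIADB_ROOT_PASSWORD=change_me \
  bitnami/mariadb:latest
```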
undefined (151 rep)
Jul 18, 2020, 11:39 AM • Last activity: Jun 9, 2025, 11:09 AM
0 votes
1 answers
234 views
SQL Server replication between Windows and Linux servers
I'm trying to setup a replication between a SQLServer 2016 in a windows server and a SQLServer 2019 host in linux server. The linux instance run in a kubernetes setup and the docker image is simcube/mssqlserver-2019. I set my publication as a transactional pull publication and the distribution defau...
I'm trying to set up replication between a SQL Server 2016 instance on a Windows server and a SQL Server 2019 instance hosted on a Linux server. The Linux instance runs in a Kubernetes setup, and the Docker image is simcube/mssqlserver-2019. I set my publication as a transactional pull publication, and the distribution default snapshot folder is D:\Data\ReplicationData. When I check the synchronization status, I get this error: The process could not read file 'D:\Data\ReplicationData\unc\SUST5050_CORPORATIF PROD_WEBSITE\20230207080810\UtilisationExplosee2_100.pre' due to OS error 5. Do you have any clues on what the problem could be? I verified that the Agent process account and the subscriber connection credentials are correct. I opened ports 137, 138, 139, 445 and 1433 on the Linux server.
Alexandre Beaulieu
Feb 7, 2023, 01:19 PM • Last activity: Jun 1, 2025, 10:02 PM
0 votes
1 answers
562 views
Binary path in pgadmin
I am running pgadmin via Docker desktop. I am unable to restore a sql file in the database. I get the following message - "Please configure the PostgreSQL Binary Path in the Preferences dialog." In the case of PostgreSQL installed in the device, I am aware about assigning the binary path. However in...
I am running pgAdmin via Docker Desktop, and I am unable to restore a SQL file into the database. I get the following message: "Please configure the PostgreSQL Binary Path in the Preferences dialog." With PostgreSQL installed directly on the device, I know how to assign the binary path. However, in this case pgAdmin is accessed via Docker. How can I resolve this issue?
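One hedged pointer, worth verifying against your image version: the dpage/pgadmin4 image bundles PostgreSQL client utilities inside the container, typically under a versioned /usr/local/pgsql-&lt;major&gt; directory, and that path can then be entered under Preferences → Paths → Binary paths. A sketch for locating them (the container name is an assumption):

```shell
# Sketch: find the bundled client binaries inside the running container.
docker exec -it pgadmin4 ls -d /usr/local/pgsql-*
# Then verify the version matches your server before entering the path:
docker exec -it pgadmin4 /usr/local/pgsql-15/pg_restore --version
```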
Rambo John (1 rep)
Aug 1, 2023, 02:14 PM • Last activity: May 31, 2025, 09:03 PM
0 votes
1 answers
519 views
pgpass entries for multiple SSH tunnels with randomly assigned local ports using the same credentials but different passwords
Let's say I have 4 SSH tunnels to different remote servers, each having different PostgreSQL databases up and running. Each tunnel is mapped to a specific port on my local machine (localhost) as follows: ```ssh ssh -f -N -L 1234:127.0.0.1:5432 userA@serverA ssh -f -N -L 1235:127.0.0.1:5432 userB@ser...
Let's say I have 4 SSH tunnels to different remote servers, each having different PostgreSQL databases up and running. Each tunnel is mapped to a specific port on my local machine (localhost) as follows:
ssh -f -N -L 1234:127.0.0.1:5432 userA@serverA
ssh -f -N -L 1235:127.0.0.1:5432 userB@serverB
ssh -f -N -L 1236:127.0.0.1:5432 userC@serverC
ssh -f -N -L 1237:127.0.0.1:5432 userD@serverD
Now, in my PostgreSQL .pgpass file I can refer for example to my databases with:
# hostname:port:database:username:password
127.0.0.1:1234:database_a:pg_user_a:postgres_password_a
127.0.0.1:1235:database_b:pg_user_b:postgres_password_b
127.0.0.1:1236:database_c:pg_user_c:postgres_password_c
127.0.0.1:1237:database_d:pg_user_d:postgres_password_d
And this is fine. But now, within the pgAdmin4 docker image (https://hub.docker.com/r/dpage/pgadmin4/) the port mapping on the docker 'localhost' cannot be chosen by the user; it is randomly assigned. Therefore, and because I cannot guess in advance all the ports that will be used for the tunneling, I must change the .pgpass file to use wildcards in this way:
127.0.0.1:*:database_a:pg_user_a:postgres_password_a
127.0.0.1:*:database_b:pg_user_b:postgres_password_b
127.0.0.1:*:database_c:pg_user_c:postgres_password_c
127.0.0.1:*:database_d:pg_user_d:postgres_password_d
This is also fine, I guess (unless I knew which IP range pgAdmin4 uses internally and could set that exact range in the pgpass file, but I don't think this is possible). Now, what if I'd like to connect to the default postgres maintenance database with the default postgres user on each?
127.0.0.1:*:postgres:postgres:postgres_password_a
127.0.0.1:*:postgres:postgres:postgres_password_b
127.0.0.1:*:postgres:postgres:postgres_password_c
127.0.0.1:*:postgres:postgres:postgres_password_d
How could I let pgAdmin4 know which password to use with which server, as all of the first four fields are now exactly the same? Setting the same password on all servers, reducing the pgpass file to this single line:
127.0.0.1:*:postgres:postgres:unique_postgres_password
is definitely not an option, as I cannot change those {databasename:username:password} triples myself. With the local ports being randomly assigned, I guess I'll have to play with the local IP, but I haven't been successful so far, as it seems to be automatically set to 127.0.0.1. This latter case is the most extreme one, but one can easily imagine the same issue with just two different databases: a pre-production and a production one, both having the same {database:username} but different passwords. Doc: https://www.postgresql.org/docs/14/libpq-pgpass.html
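The core of the problem is libpq's documented lookup rule: the first .pgpass line whose host, port, database and user fields all match (with '*' as a wildcard) wins, so lines that differ only in the password are unreachable. A minimal sketch of that rule (simplified: it assumes no ':' or escape sequences inside the password):

```shell
# Minimal sketch of libpq's .pgpass matching: the FIRST matching line wins.
pgpass_lookup() {   # usage: pgpass_lookup host port db user < pgpass-file
  awk -F: -v h="$1" -v p="$2" -v d="$3" -v u="$4" \
    '($1==h || $1=="*") && ($2==p || $2=="*") &&
     ($3==d || $3=="*") && ($4==u || $4=="*") { print $5; exit }'
}

printf '%s\n' \
  '127.0.0.1:*:postgres:postgres:password_a' \
  '127.0.0.1:*:postgres:postgres:password_b' > pgpass.sample

# Whatever local port the tunnel got, both lookups hit the FIRST entry:
pgpass_lookup 127.0.0.1 49321 postgres postgres < pgpass.sample
pgpass_lookup 127.0.0.1 52110 postgres postgres < pgpass.sample
```

One hedged workaround to verify in your setup: bind each SSH tunnel to a distinct loopback alias (ssh -L 127.0.0.2:1234:..., 127.0.0.3:1234:..., and so on) so the host field disambiguates otherwise identical entries.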
s.k (424 rep)
Feb 16, 2022, 03:54 PM • Last activity: May 31, 2025, 06:06 PM
0 votes
1 answers
253 views
MariaDB writes more the 30GB/day temp data
I've a Synology NAS where I'm running a Nextcloud in a Docker Container and a MariaDB in an other Docker Container. On a day the the NAS and also the Nextcloud doesn't do anything, on my SSD are over 30GBs written. So I've looking for a while and saw with *iotp*, that every time...
I've a Synology NAS where I'm running Nextcloud in one Docker container and MariaDB in another. On a day when the NAS and the Nextcloud don't do anything, over 30 GB are written to my SSD. So I've been looking into this for a while, and saw with *iotop* that every time the Nextcloud cron job runs, MariaDB writes to my SSD at over 20 MB/s. I tried to prevent this by using a ramdisk, as described here: https://dba.stackexchange.com/questions/30505/why-does-mysql-produce-so-many-temporary-myd-files/30635#30635 For this I created the MariaDB Docker container with
docker container run -d \
--mount type=tmpfs,destination=/tmp,tmpfs-size=1g,tmpfs-mode=1777 \
....
but when the Nextcloud cron job is running, I still see MariaDB writing to my SSD at over 20 MB/s :( Inside the container it looks fine:
df -h
Filesystem         Size  Used Avail Use% Mounted on
tmpfs              1.0G   13M 1012M   2% /tmp
With a script I found which uses *inotifywait*, I see that during the Nextcloud cron job a "/tmp//#sql-temptable-1-575-0.MAI" is created. I've also checked with
touch test
=> run the cron job
find / -newer test | grep -v "^/sys/" | grep -v "^/proc/"
whether other things were written, but it doesn't seem so:
/dev/pts/3
/dev/pts/ptmx
find: ‘/proc/1470/task/1470/fd/6’: No such file or directory
find: ‘/proc/1470/task/1470/fdinfo/6’: No such file or directory
find: ‘/proc/1470/fd/5’: No such file or directory
find: ‘/proc/1470/fdinfo/5’: No such file or directory
/tmp
/var/lib/mysql/ib_logfile0
Does anyone have an idea why the ramdisk isn't working, or how I can see what MariaDB writes and prevent it?
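One hedged observation about the find output above: /var/lib/mysql/ib_logfile0 is the InnoDB redo log, which always lives in the data directory and cannot be redirected to a tmpfs, so every transaction the cron job runs still hits the SSD regardless of where temp tables go. To confirm where the server puts its temp tables and how many spill to disk, something like the following can help (container name and credentials are placeholders):

```shell
# Sketch: check tmpdir and disk-temp-table counters inside the container.
docker exec mariadb mysql -uroot -p -e "
  SHOW GLOBAL VARIABLES LIKE 'tmpdir';
  SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';"
```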
Holger (1 rep)
Aug 22, 2023, 07:33 AM • Last activity: May 29, 2025, 02:03 PM
1 votes
1 answers
277 views
[FATAL] [DBT-06103] The port (5,500) is already in use in Docker container
I'm trying to use Oracle 19c DB Container for my test cases in Jenkins Job but DB Container is intermittently failing with this error ``` [FATAL] [DBT-06103] The port (5,500) is already in use. ACTION: Specify a free port. ``` I referred this answer https://dba.stackexchange.com/questions/268437/fat...
I'm trying to use an Oracle 19c DB container for my test cases in a Jenkins job, but the DB container is intermittently failing with this error:
[FATAL] [DBT-06103] The port (5,500) is already in use.
   ACTION: Specify a free port.
I referred to this answer https://dba.stackexchange.com/questions/268437/fatal-dbt-06103-the-port-5-500-is-already-in-use?newreg=335276877b8c4859a5fd41263b960125 but how can I achieve this in a container, where the hostname is always dynamic?
Vishwanath Joshi (11 rep)
Apr 25, 2023, 06:04 PM • Last activity: May 25, 2025, 12:03 PM
2 votes
1 answers
280 views
How do I backup data of Cassandra nodes running in Docker?
I hope to receive some tips and assistance. I have three virtual machines running Ubuntu 22.04, and each one has Cassandra deployed via docker-compose, forming a cluster. My current task is to organize an automatic backup of all keyspaces and schema to a remote server. Has anyone already solved a si...
I hope to receive some tips and assistance. I have three virtual machines running Ubuntu 22.04, and each one has Cassandra deployed via docker-compose, forming a cluster. My current task is to organize an automatic backup of all keyspaces and schema to a remote server. Has anyone already solved a similar problem? I cannot find any official information on this matter. I currently see it in the following form, but I'm not sure if it's efficient:
docker exec -it container_id /bin/bash
    nodetool snapshot
    docker cp container_id:/data/keyspace_name/table_name-UUID/snapshots/snapshot_name /path/on/host/
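A hedged sketch of how those manual steps could be scripted per node from the host (service name, data path and remote target are assumptions to adapt to your docker-compose setup); the DESCRIBE SCHEMA export covers the schema part of the backup:

```shell
# Sketch: snapshot, export schema, collect and ship, then clean up.
TAG="backup_$(date +%Y%m%d)"
docker exec cassandra nodetool snapshot -t "$TAG"                  # all keyspaces
docker exec cassandra cqlsh -e "DESCRIBE SCHEMA" > schema.cql
docker exec cassandra sh -c \
  "tar czf /tmp/$TAG.tar.gz \$(find /var/lib/cassandra/data -type d -name $TAG)"
docker cp "cassandra:/tmp/$TAG.tar.gz" .
rsync -a "$TAG.tar.gz" schema.cql backup-user@remote-server:/backups/cassandra/
docker exec cassandra nodetool clearsnapshot -t "$TAG"             # free space
```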
Dmitriy (21 rep)
Apr 26, 2023, 03:15 AM • Last activity: May 15, 2025, 06:01 PM
0 votes
1 answers
4267 views
SQLSTATE[HY000] [2002] Connection refused when trying to initiate connection via Dockerfile's CMD
I'm trying to run a Docker container based on: - PHP 8.1 - Apache 2.4 - MariaDB (latest official docker image) Dockerfile: ``` FROM php:8.1-apache WORKDIR /var/www/html/ RUN pecl install xdebug \ && apt update \ && apt install libzip-dev -y \ && docker-php-ext-enable xdebug \ && a2enmod rewrite \ &&...
I'm trying to run a Docker container based on: - PHP 8.1 - Apache 2.4 - MariaDB (latest official docker image) Dockerfile:
FROM php:8.1-apache

WORKDIR /var/www/html/

RUN pecl install xdebug \
    && apt update \
    && apt install libzip-dev -y \
    && docker-php-ext-enable xdebug \
    && a2enmod rewrite \
    && docker-php-ext-install zip \
    && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-install pdo pdo_mysql

COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
COPY composer.json .

RUN groupadd -r user && useradd -r -g user user
USER user
RUN composer install --no-dev

COPY . .

EXPOSE 80

CMD [ "sh", "-c", "php src/init.php" ]
docker-compose.yml:
services:

  php:
    build: ./php
    depends_on:
      - db
      - adminer
    container_name: php-apache
    ports:
      - 80:80
    volumes:
      # setup xdebug to be able to use PHP step debugger, if needed
      - ./php/conf.d/xdebug.ini:/usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
      - ./php/conf.d/error_reporting.ini:/usr/local/etc/php/conf.d/error_reporting.ini
      # apache config (server name)
      - ./apache/apache2.conf:/etc/apache2/apache2.conf
      # apache config (rewrite rule to reroute all requests to unknown resources through to REST controller)
      - ./apache/000-default.conf:/etc/apache2/sites-enabled/000-default.conf
      # Source code
      - ./php/src:/var/www/html/src
      # unbind local composer components
      - /php/vendor
      - /php/composer.lock
      - /php/composer.phar
    environment:
      MARIADB_HOST: db
      MARIADB_USER: root
      MARIADB_PASSWORD: top_very_secret
      MARIADB_DB: apidb

  adminer:
    image: adminer
    depends_on:
      - db
    restart: always
    ports:
      - 8080:8080

  db:
    image: mariadb
    container_name: db
    volumes:
      - maria-db-storage:/var/lib/mysql
    environment:
      MARIADB_ROOT_PASSWORD: top_very_secret
      MARIADB_DATABASE: apidb
    ports:
      - 3306:3306

volumes:
  maria-db-storage:
src/init.php connects with the DB and creates tables the application needs, if they do not already exist. To connect, I use the environment variables like so:
new PDO(
    "mysql:host={$_ENV['MARIADB_HOST']};dbname={$_ENV['MARIADB_DB']}",
    $_ENV['MARIADB_USER'],
    $_ENV['MARIADB_PASSWORD'],
    [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC
    ]
);
The first time I run docker-compose up, I always get: SQLSTATE[HY000] Connection refused. At the top of the src/init.php script, I can successfully echo all of the environment variables used in Docker, so they are all defined. I'm really confused by this, as the DB connection only fails the first time docker-compose up runs: when I then call docker-compose down and docker-compose up again, everything works flawlessly. Any idea why this happens?
Digging a little further, I noticed that when the mariadb container is first initialized (i.e. on that first docker-compose up with the above file contents), I get the above-mentioned error plus the output: PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER. So it seems that at the moment of initialization, the MARIADB_ROOT_PASSWORD is for some reason not applied yet, but afterwards it is, and that this is the reason for the failed connection?
Currently, I suppose that this is due to [this](https://hub.docker.com/_/mariadb) :
> If there is no database initialized when the container starts, then a default database will be created. While this is the expected behavior, this means that it will not accept incoming connections until such initialization completes. This may cause issues when using automation tools, such as docker-compose, which start several containers simultaneously.
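That quote is very likely the explanation: depends_on only waits for the db container to start, not for MariaDB's first-run initialization to finish, which is why only the very first docker-compose up fails. A common fix is to gate the php service on a healthcheck. A sketch, with the caveat that healthcheck.sh ships with recent official mariadb images; on older ones, mariadb-admin ping can substitute:

```yaml
  db:
    image: mariadb
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 5s
      retries: 12

  php:
    depends_on:
      db:
        condition: service_healthy
```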
DevelJoe (163 rep)
Nov 16, 2022, 08:19 PM • Last activity: May 13, 2025, 05:00 AM
0 votes
1 answers
419 views
How can I create a sql dump from postgres that I can restore on a new server with no psql options?
I want to have an easy, repeatable process of upgrading a dockerized postgres database to a new version. Minor and patch versions are easy -- the container can simply be upgraded to the new version, and postgres will read the existing data directory without issues. Major versions are harder because...
I want to have an easy, repeatable process for upgrading a dockerized Postgres database to a new version. Minor and patch versions are easy -- the container can simply be upgraded to the new version, and Postgres will read the existing data directory without issues. Major versions are harder because the data directory format is not compatible between versions. The versions I am working with at present are 11 (old) and 14 (new). I don't think I can use pg_upgrade, because from within an off-the-shelf container I won't have access to the binary dirs for both versions, so I'm going with the backup-restore approach.
On my new version's container, I want to take advantage of the /docker-entrypoint-initdb.d/ feature of many dockerhub images, where one can mount a set of script files into that directory from the host, and they will be executed the first time the container is run. So, if a db_dump.sql file, for example, is mounted into that directory, the container will run it with psql -f the first time it is run, and the contents of that script can be used to populate the data directory of the new, upgraded container. The benefit of this is that I don't have to do any manual fuddling with the upgraded container -- it will simply restore the dump when it launches.
My issue has been generating a dump that will restore in the new container without errors. If I use pg_dumpall from my old version, the script file includes CREATE ROLE postgres; statements, which fail because the postgres role already exists in a fresh container. If I use pg_dumpall --clean, the script includes DROP DATABASE statements for the user databases, which fail because those databases (obviously) don't exist in the fresh container.
I did find this answer: https://dba.stackexchange.com/questions/75033/how-to-restore-everything-including-postgres-role-from-pg-dumpall-backup which suggests to me that sed-ing the output of pg_dumpall to remove alterations to the postgres role might be the least worst option, but this feels like I must be missing a much better option.
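One half of the problem has a built-in fix worth trying first: pg_dumpall's --if-exists flag (valid together with --clean) turns the DROP statements into DROP ... IF EXISTS, so missing databases no longer abort the restore. The sed filter below is a guess at the remaining half -- statements touching the pre-existing postgres role -- so inspect your own dump before trusting it:

```shell
# Hypothetical filter: drop statements that create/alter/drop the postgres role.
strip_postgres_role() {
  sed -E '/^(CREATE|ALTER|DROP) ROLE (IF EXISTS )?postgres(;| )/d'
}

# Real usage (not run here):
#   pg_dumpall --clean --if-exists -U postgres | strip_postgres_role > init.sql

# Demo on sample statements: only the CREATE DATABASE line survives.
printf '%s\n' 'DROP ROLE IF EXISTS postgres;' 'CREATE DATABASE mydb;' \
  | strip_postgres_role
```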
NReilingh (785 rep)
Dec 7, 2022, 04:34 PM • Last activity: May 8, 2025, 10:04 PM