
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

13 votes
7 answers
21645 views
Extracting a field from RESTORE HEADERONLY
I'm trying to use RESTORE HEADERONLY to get the date when the backup I'm about to restore was made. The command RESTORE HEADERONLY FROM DISK = '' works fine in Query Analyzer and gives a resultset with something like 50 columns. The problem is actually accessing this from code. I can get this into a temp table by declaring every single one of the 50-odd columns, inserting into it with EXEC, and reading the value I want from there. The problem is that I really want to avoid declaring the entire resultset as a temp table, since that seems like a very brittle solution if columns are ever added in future versions. Is there any way to get a single column out of this resultset without declaring all the columns?
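Since the goal in the excerpt is to read the value from code anyway, one way to sidestep declaring all 50-odd columns server-side is to fetch the RESTORE HEADERONLY row client-side and pick the column out by name from the driver's cursor description. A minimal sketch (the pyodbc-style calls in the comment are an assumption, not from the original post; BackupFinishDate is one of the real RESTORE HEADERONLY columns):

```python
def column_by_name(description, row, name):
    """Pick one column out of a row by its name in the cursor description.

    `description` is the DB-API cursor.description sequence: one
    (name, type, ...) tuple per column, in resultset order.
    """
    for index, col in enumerate(description):
        if col[0] == name:
            return row[index]
    raise KeyError(f"column {name!r} not in resultset")

# Against a live connection this would look roughly like (pyodbc-style, assumed):
#   cursor.execute("RESTORE HEADERONLY FROM DISK = ?", backup_path)
#   row = cursor.fetchone()
#   finished = column_by_name(cursor.description, row, "BackupFinishDate")

# Demonstration with a mocked two-column description:
demo_description = [("BackupName", str), ("BackupFinishDate", str)]
demo_row = ("nightly", "2012-02-08 07:52:00")
print(column_by_name(demo_description, demo_row, "BackupFinishDate"))
```

This keeps the column list out of T-SQL entirely, so new columns in future versions are simply ignored.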
alun (233 rep)
Feb 8, 2012, 07:52 AM • Last activity: Aug 6, 2025, 10:44 AM
0 votes
1 answers
67 views
Cannot import a database dump on Postgres 13.14+ while it loads fine in Postgres 13.13
I'm experiencing a problem with loading a PostgreSQL backup file (SQL format). The SQL file has a function that is defined after another function where it's used. PostgreSQL 13.13 can handle such a backup file, while PostgreSQL 13.14 fails to load it:
ERROR:  function public.label_id_constant() does not exist
LINE 1:  SELECT public.uuid_increment($1, public.label_id_constant()...
                                          ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
QUERY:   SELECT public.uuid_increment($1, public.label_id_constant()) 
CONTEXT:  SQL function "label_id" during inlining
I've double-checked if there is SET check_function_bodies = false; in the dump file. I also searched if I could disable the inlining during the dump load, but still no success. I've distilled the dump file into a minimal reproducible example and attached it as a script to this ticket. If anybody experienced anything similar, please help.
#!/bin/env bash

DUMP_FILE=$(mktemp)
trap "rm -f $DUMP_FILE" EXIT

cat > "$DUMP_FILE" <<'EOF'
-- (contents of the minimal reproducible dump elided)
EOF

echo "Testing with postgres 13.13" >&2

docker run -d \
       --name postgres-13.13 \
       -e POSTGRES_HOST_AUTH_METHOD=trust \
       -p 5432:5432 \
       postgres:13.13


echo "Waiting for postgres to start" >&2
while ! docker exec postgres-13.13 pg_isready -h localhost -U postgres -q; do
    sleep 1
done

cat "$DUMP_FILE" | psql -h localhost -U postgres -v ON_ERROR_STOP=1 --port 5432 -e -1 && echo "******** Success ********" || echo "******** Failure ********"


docker stop postgres-13.13
docker rm postgres-13.13

echo "Testing with postgres 13.14" >&2

docker run -d \
       --name postgres-13.14 \
       -e POSTGRES_HOST_AUTH_METHOD=trust \
       -p 5432:5432 \
       postgres:13.14

echo "Waiting for postgres to start" >&2
while ! docker exec postgres-13.14 pg_isready -h localhost -U postgres -q; do
    sleep 1
done

cat "$DUMP_FILE" | psql -h localhost -U postgres -v ON_ERROR_STOP=1 --port 5432 -e -1 && echo "******** Success ********" || echo "******** Failure ********"

docker stop postgres-13.14
docker rm postgres-13.14
--------
UPD: What I've already tried: setting SET jit = off; doesn't fix the problem.
UPD2:
1. I tried exporting our database using pg_dump instead of the CloudSQL export API. It gave me the same error.
2. I tried to export the database, load it into 13.13, then export it from 13.13 and load it into 13.14, but the error was the same again.
---
UPD: I successfully migrated the DB with the following script: https://paste.ubuntu.com/p/kgGGQzNcgp/ After migrating to PostgreSQL 17.5, the issue persists: if I dump the DB with pg_dump, I cannot load it, with the same error.
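One workaround (not from the original post) is to reorder the function definitions in the SQL dump so that callees always precede their callers, which removes the dependence on how strictly the server resolves bodies at load time. A rough sketch of just the reordering step, assuming you have already extracted each function's name and the names it references:

```python
from graphlib import TopologicalSorter  # Python 3.9+ stdlib

def order_functions(deps):
    """Return function names so every function appears after the
    functions it calls. `deps` maps name -> set of called names."""
    return list(TopologicalSorter(deps).static_order())

# Hypothetical dependency graph matching the error in the excerpt:
# label_id calls uuid_increment and label_id_constant.
deps = {
    "label_id": {"uuid_increment", "label_id_constant"},
    "uuid_increment": set(),
    "label_id_constant": set(),
}
order = order_functions(deps)
print(order)  # callees first, label_id last
```

With the names ordered, the CREATE FUNCTION statements can be emitted in that order before replaying the rest of the dump.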
Renat (101 rep)
Jan 21, 2025, 08:12 AM • Last activity: Aug 5, 2025, 01:46 PM
0 votes
1 answers
727 views
Restore backup global using pgadmin
I created a backup of globals by right-clicking "PostgreSQL 15" in pgAdmin. It generates a .sql file. How do I restore the globals backup in pgAdmin? I tried searching on Google but didn't find an answer.
LLL (101 rep)
Jun 21, 2023, 10:35 AM • Last activity: Aug 5, 2025, 05:01 AM
0 votes
1 answers
149 views
Check data is the same after restore SQL
I've backed up a database, transferred it to another location, and restored it. Is there any way to check that the data is exactly the same? Would I be best using CHECKSUM and then the VERIFYONLY method? Thanks.
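Note that BACKUP ... WITH CHECKSUM and RESTORE VERIFYONLY validate the backup media, not that two live databases hold identical data. For a direct comparison, one approach (an illustration, not from the original post) is to hash every row of a table on both sides and compare the combined digests; sketched here over plain Python sequences standing in for fetched rows:

```python
import hashlib

def table_fingerprint(rows):
    """Order-insensitive digest of a table's rows.

    Each row is hashed individually and the per-row digests are
    XOR-combined, so the result does not depend on row order.
    """
    combined = 0
    for row in rows:
        digest = hashlib.sha256(repr(row).encode()).digest()
        combined ^= int.from_bytes(digest, "big")
    return combined

source = [(1, "alice"), (2, "bob")]
restored = [(2, "bob"), (1, "alice")]  # same data, different order
print(table_fingerprint(source) == table_fingerprint(restored))  # True
```

The same idea can be run server-side in SQL Server with the built-in CHECKSUM_AGG over per-row checksums.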
Ryan (55 rep)
Oct 8, 2015, 10:22 AM • Last activity: Jul 31, 2025, 04:05 PM
19 votes
7 answers
71951 views
Error while restoring a Database from an SQL dump
I am extremely new to MySQL and am running it on Windows. I am trying to restore a database from a dump file, but I get the following error:

mysql -u root -p -h localhost -D database -o < dump.sql
ERROR at line 1: Unknown command '\☻'.

I also tried mysql -u root -p -h localhost -D database --binary-mode -o < dump.sql, but this gave me the same error. It is a 500 MB dump file, and when I view its contents using gVim, all I can see is expressions and data which are not comprehensible. Also, when I try to copy contents from the file to post here, all I can copy is: SQLite format 3. This seems strange.
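The "SQLite format 3" text is a strong hint: that is the 16-byte magic header of an SQLite database file, so the file is very likely an SQLite database rather than a MySQL dump, which would explain the "Unknown command" errors. A quick check of the file's first bytes (a standalone sketch; the scratch file is made up for the demo):

```python
SQLITE_MAGIC = b"SQLite format 3\x00"  # 16-byte header of every SQLite 3 file

def looks_like_sqlite(path):
    """Return True if the file starts with the SQLite 3 magic header."""
    with open(path, "rb") as f:
        return f.read(16) == SQLITE_MAGIC

# Demo: write the magic header to a scratch file and test it.
import os
import tempfile

fd, demo_path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(SQLITE_MAGIC + b"\x00" * 84)
result = looks_like_sqlite(demo_path)
os.remove(demo_path)
print(result)  # True
```

If the check passes on the real file, it can be opened with the sqlite3 tools instead of fed to the mysql client.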
user1434997 (291 rep)
Jun 18, 2013, 12:05 AM • Last activity: Jul 30, 2025, 02:05 PM
2 votes
1 answers
5416 views
How to restore database from /var/lib/mysql
I have a database folder from a crashed server called gain_wordpress. I'm trying to put that database on a new server, but I'm not able to do it successfully. I don't have a .sql dump; all I have is the database folder from the crashed server, with ibdata and the two log files.

I copied the gain_wordpress folder to /var/lib/mysql on the new server, but when I do a SHOW TABLES on the database I get the following error:

ERROR 1018 (HY000): Can't read dir of './gain_wordpress/' (errno: 13)

I also changed the permissions of the folder and did chown mysql:mysql gain_wordpress -R, and it still didn't take care of the problem. What is the best way to recover gain_wordpress? Do I need to replace any of the ibdata or log files? I have the corresponding .frm, .MYD and .MYI files for each table. Please advise.
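For what it's worth, errno 13 in that ERROR 1018 message is the operating system's EACCES, a filesystem permission denial rather than InnoDB corruption, which is why it can persist after chown (SELinux or AppArmor can still block access). The code's meaning can be confirmed from Python's standard library:

```python
import errno
import os

# errno 13 is EACCES: the OS refused MySQL access to the directory.
print(errno.errorcode[13])  # 'EACCES'
print(os.strerror(13))      # 'Permission denied'
```

So the directory's mode bits, its parent directories, and any SELinux/AppArmor policy are the places to look before touching ibdata or the log files.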
unixmiah (156 rep)
Mar 27, 2015, 02:58 PM • Last activity: Jul 29, 2025, 10:10 PM
0 votes
1 answers
980 views
Oracle database 12C restore script
I'm trying to create a backup restore script. In short, my backup script creates a new directory every day at the operating-system level and then backs up the database into the directory created. I tried the script below, but it seems RMAN doesn't recognize the control files. Here is my script:

#RMAN RESTORE SCRIPT
#!/bin/bash
# Environment variables
ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1
ORACLE_SID=cdb11
PATH=$PATH:$ORACLE_HOME/bin
DD=$(date +%Y-%m-%d-%H-%M)
export ORACLE_HOME ORACLE_SID PATH DD

# controlfile / backupsets locations
controlfile=/ArchiveDBBackup/DAILY_RMAN/${DD}/CDB1_CONTROLFILE
backupsets=/ArchiveDBBackup/DAILY_RMAN/${DD}/
export controlfile backupsets

LOGFILE=/RMAN/log/DAILY_FULLBKP_RESTORE_$(date +%Y%m%d).log
rman target / LOG $LOGFILE

It fails with:

RMAN-03002: failure of restore command at 07/04/2021 09:35:57
RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece

Any idea how I can make RMAN recognize Linux variables, or another way to achieve the restore with those variables?
Seif Hassan (11 rep)
Jul 4, 2021, 08:51 AM • Last activity: Jul 29, 2025, 02:02 AM
0 votes
1 answers
2855 views
ORA-31626: job does not exist during schema export
I'm exporting a schema with the following command:

expdp system/system@xxx.xxx.xxx.xxx:1521/orcl schemas=schema-name directory=test_dir dumpfile=Schema.dmp logfile=SchemaLog.log

but it results in the following error:

Connected to: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
ORA-31626: job does not exist
ORA-04063: package body "SYS.DBMS_INTERNAL_LOGSTDBY" has errors
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_INTERNAL_LOGSTDBY"
ORA-06512: at "SYS.KUPV$FT", line 1009
ORA-04063: package body "SYS.DBMS_LOGREP_UTIL" has errors
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_LOGREP_UTIL"

I googled a lot and tried the solutions suggested for ORA-31626 ("job does not exist") and ORA-04063 (package body "SYS.DBMS_INTERNAL_LOGSTDBY" has errors), but none helped. Could you please help me resolve this?
Alpha (153 rep)
Dec 30, 2020, 08:58 AM • Last activity: Jul 27, 2025, 12:06 PM
1 votes
1 answers
56 views
mariadb docker fails to Initialize database contents from dump file
I have a Nextcloud server set up that uses Docker image mariadb:10.11. I use mysqldump to back up the database. Where I am having trouble is trying to restore that to a test system as described at https://hub.docker.com/_/mariadb#initializing-the-database-contents.

For the restore I added the extra service "adminer" so I could check it out. I also added a new volume to mount the dump file: /data/blue/nextcloud/db_backup/db_dump_nextcloud.sql:/docker-entrypoint-initdb.d/db_dump_nextcloud.sql. I then started it with docker-compose up -d adminer. On the first try I saw that there were 52 (I think) tables in the nextcloud database, so I thought we were good, but when starting Nextcloud it gave me errors about missing tables. A few days later I got around to trying it again, and now it only gets part of one table loaded. The last time or two I tested by deleting the database volume and then just starting the database with docker-compose up db. In the logs I see the data being inserted into one of the tables, and then the "Lost connection" error: ...
db-1 | ('3507453_0','3507453_0',1,3507453,'�\0\0\0\0\0\0\0\0;\0\0\0 c�ZB�J@K�4�T@ �c�J@�]K��T@�>W[��J@�H�}�T@��k ��J@�A�f�T@33333�J@?�ܵ��T@�1��%�J@z�):��T@���<,�J@;pΈ��T@1�*▒��J@�,C��T@/�$��J@��b��T@����J@����M�T@��ڊ��J@b��4��T@+�٦J@�ZӼ��T@|a2U�J@�Q�@��{���J@�9#J{�T@�HP��J@��g���T@4��7��J@�x�&1�T@�����J@��k ��T@�C�l��J@�c�]K�T@gDio��J@2U0*��T@-C���J@_�Q�T@d�]KȯJ@��|г�T@ףp=\n�J@M��St�T@�3���J@�a2�T@-����J@�N@a�T@Ԛ���J@��C�l�T@Gr���J@K�=�U@P�▒sײJ@���oU@\'�W�J@�HPU@6<�R��J@�;NU@���{��J@�E��U@������J@k+���U@�2ı.�J@ޓ��ZU@�A�f�J@�p=\n�U@���ZӬJ@ףp=\nU@�rh���J@��C�lU@ݵ�|ЫJ@�c]�FU@��C��J@�ܵ�|U@�$���J@$(~��U@� ��J@Gr��U@���~��J@▒&S�U@�d�`T�J@t���U@�o_ΩJ@\\���(U@�&S�J@8��d�U@▒&S��J@=\nףpU@�W�2ĩJ@���QIU@S��:�J@���<,U@ףp=\n�J@S�!�uU@���H�J@S�!�uU@�٬�\\�J@?�ܵU@�e�c]�J@ +�U�Zd�J@��|?5U@�z�G�J@A�c�]�T@�HP��J@:#J{��T@�����J@F�����T@����J@?5^�I�T@�����J@�K7�A�T@��DؠJ@$�����T@x $(�J@-!�l�T@ c�ZB�J@K�4�T@') db-1 | -------------- db-1 | db-1 | ERROR 2013 (HY000) at line 316271: Lost connection to server during query db-1 | /usr/local/bin/docker-entrypoint.sh: line 298: 88 Segmentation fault (core dumped) "$@" --skip-networking --default-time-zone=SYSTEM --socket="${SOCKET}" --wsrep_on=OFF --expire-logs-days=0 --skip-slave-start --loose-innodb_buffer_pool_load_at_startup=0 db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Last binlog file './binlog.000001', position 272159596 db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: 128 rollback segments are active. db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Starting in background the rollback of recovered transactions db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Rollback of non-prepared transactions completed db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: log sequence number 206149883; transaction id 298
db-1 | 2025-07-22 2:31:34 0 [Note] Plugin 'FEEDBACK' is disabled.
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Cannot open '/var/lib/mysql/ib_buffer_pool' for reading: No such file or directory
db-1 | 2025-07-22 2:31:34 0 [Note] Recovering after a crash using binlog
db-1 | 2025-07-22 2:31:34 0 [Note] Starting table crash recovery...
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Starting recovery for XA transactions...
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Transaction 295 in prepared state after recovery
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: Transaction contains changes to 3086 rows
db-1 | 2025-07-22 2:31:34 0 [Note] InnoDB: 1 transactions in prepared state after recovery
db-1 | 2025-07-22 2:31:34 0 [Note] Found 1 prepared transaction(s) in InnoDB
db-1 | 2025-07-22 2:31:34 0 [Note] Crash table recovery finished.
db-1 | 2025-07-22 2:31:34 0 [Note] Server socket created on IP: '0.0.0.0'.
db-1 | 2025-07-22 2:31:34 0 [Note] Server socket created on IP: '::'.
db-1 | 2025-07-22 2:31:34 0 [Note] mariadbd: ready for connections.
db-1 | Version: '10.11.13-MariaDB-ubu2204-log' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution

The relevant part of my compose file is as follows. The db service is set up the same as on the source container, except for the new volume.

services:
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
    depends_on:
      - db
  db:
    image: mariadb:10.11
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW --log_bin_trust_function_creators=1
    restart: always
    volumes:
      - db:/var/lib/mysql
      # For mysqldump conf file for backups.
      - /home/monterey/confg/restic/db-env.cnf:/app/db-env.cnf:ro
      - /data/blue/nextcloud/db_backup/db_dump_nextcloud.sql:/docker-entrypoint-initdb.d/db_dump_nextcloud.sql
    environment:
      - MARIADB_AUTO_UPGRADE=1
      - MARIADB_DISABLE_UPGRADE_BACKUP=1
    env_file:
      - db.env
  redis:
    image: redis:alpine
    restart: always
  app:
    #image: nextcloud:apache
    build: ./nextcloud
    restart: always
    volumes: ...

Edit: I did a --no-data dump and was able to restore that. It gave me 195 tables and took a little over a minute. I did not see it dumping out every line of SQL, so I wonder if, when it was doing that before, it was trying to report an error.
JohnT (11 rep)
Jul 22, 2025, 02:45 AM • Last activity: Jul 22, 2025, 03:46 PM
1 votes
2 answers
6615 views
Restore Postgres 9.0 backup into Postgres 9.3
I'm having a problem restoring my Postgres 9.0 database into a 9.3 version. I did the backup with this command:

pg_dump -U user database -f /root/mydb.sql

And when I send the backup to the new server and try to restore it, I get this error:

psql.bin:/root/mydb.sql:5628157: invalid command \n
psql.bin:/root/mydb.sql:5628158: invalid command \n
psql.bin:/root/mydb.sql:5628159: invalid command \n
psql.bin:/root/mydb.sql:5628160: invalid command \n
psql.bin:/root/mydb.sql:5628161: invalid command \n
psql.bin:/root/mydb.sql:5628162: invalid command \n
psql.bin:/root/mydb.sql:5628163: invalid command \n
psql.bin:/root/mydb.sql:5628164: invalid command \n
psql.bin:/root/mydb.sql:5628165: invalid command \n
psql.bin:/root/mydb.sql:5628166: ERROR: syntax error at or near "PageRange"
LINE 1: PageRange=1-2highFoto_ALTA
jask (113 rep)
Aug 26, 2016, 02:05 PM • Last activity: Jul 22, 2025, 10:22 AM
0 votes
1 answers
1522 views
End of file reached. Error 38 in SQL. Passes verification checks
Having issues restoring a database in SSMS, error 38. The databases are transferred up to Google via their Drive API. I download the file on my side and it has the error. If I transfer it through TeamViewer from the client PC, I still get the same error. It's only happening on two clients; the rest are fine.

I have tried backing up with the FORMAT option and WITH CHECKSUM. Both succeed, and the backup verifies. I have run a restore of the file headers only and get proper data, except the physical-name path has a lowercase 'c' for the drive on the problem databases. I have read this could be a corrupt database, but I don't understand how the CHECKSUM and VERIFY SQL functions pass if so. Any insight would really help.

This is the backup command used in C#:

"BACKUP DATABASE " + database + " TO DISK = '" + database + "_" + dateTime.ToString("yyyy_MM_dd_HH") + "_CloudBackup.bak' WITH NAME = 'database', INIT, CHECKSUM"

EDIT: Running a check on the database produced no errors. I have tried updating the physical names of the .mdf and .ldf; it still does not work. Taking a backup from SSMS produces a valid backup file. It has something to do with the services I wrote.

EDIT 2: I am restoring through the SSMS GUI. I have also tried:

RESTORE DATABASE db_name FROM 'backup_path'

Commands:

RESTORE VERIFYONLY FROM DISK = 'backup_path'

on both computers, mine and the server PC, yields "The backup set on file 1 is valid."

RESTORE FILELISTONLY FROM DISK = 'backup_path'

I have only tested this on my PC. It returns paths to the mdf and ldf, no errors. Both PCs have SQL Server 2012 (SP1).
Austen Swanson (1 rep)
Jul 23, 2019, 08:49 PM • Last activity: Jul 20, 2025, 08:02 AM
1 votes
1 answers
149 views
Copy postgis layer from S3 to Heroku
I have a dump of a PostGIS layer (layer.dump), which I am trying to add to my Heroku database (mydatabase). The PostGIS layer is stored on S3 (https://s3.amazonaws.com/layer.dump). I would like to add the layer to the Heroku database and previously used heroku pgbackups:restore DATABASE 'https://s3.amazonaws.com/layer.dump'. However, the new heroku pg:backups restore 'https://s3.amazonaws.com/layer.dump' DATABASE deletes all data from the target database before restoring the backup (https://devcenter.heroku.com/articles/heroku-postgres-backups). Is there still a way to restore only a single table and leave the remaining tables in the database untouched?
Anne (143 rep)
Sep 2, 2015, 06:44 PM • Last activity: Jul 18, 2025, 07:06 PM
1 votes
1 answers
166 views
Restore of Database with CDC Failing
I'm taking a backup of a SQL Server 2019 database with CDC enabled and trying to restore it to a different instance. I don't need CDC on the target database, so I don't have the KEEP_CDC option in the restore script. Despite this, I'm getting the following error:

> Could not upgrade the metadata for database mydbname that is enabled for Change Data Capture. The failure occurred when executing the action 'drop function fn_cdc_get_all_changes_'. The error returned was 3764: 'line 1, state 1, Cannot drop the table valued function 'cdc.fn_cdc_get_all_changes_dbo_mytablename' because it is being used for Change Data Capture.'. Use the action and error to determine the cause of the failure and resubmit the request.

The database seems to restore OK despite the error, but CDC is enabled when I look in sys.databases. When I try to disable CDC on the restored database, I get the following error:

> Could not update the metadata that indicates database mydbname is not enabled for Change Data Capture. The failure occurred when executing the command 'DROP TABLE dbo.systranschemas'. The error returned was 3764: 'Cannot drop the table 'dbo.systranschemas' because it is being used for Change Data Capture.'. Use the action and error to determine the cause of the failure and resubmit the request.

Any thoughts as to why this is happening despite the fact that I'm not including KEEP_CDC in the restore script? Any potential solutions?
Brian Layden (11 rep)
Feb 14, 2025, 04:45 PM • Last activity: Jul 18, 2025, 05:07 AM
0 votes
0 answers
53 views
How do I restore my data from hacked database
My database was just hacked. I tried doing some work and could not log in with the created user credentials. When I checked the database, I saw an intruder was removing the users and had created a replication_user. I tried removing them, and before I could run a backup, the intruder had changed the root password.

I have stopped the server. The MySQL server is running in a Docker container, and I have backed up the volume containing all the files, including those in /var/lib/mysql. Please, how do I restore it into a new database? The last full backup was from a few days ago.
Aijay (1 rep)
Jul 17, 2025, 06:22 PM
1 votes
1 answers
494 views
SQLServer Linux: Error restoring backup of DB from Windows w/ full-text data file
I'm trying to move a Windows SQL Server database from Windows 10 to Linux. For this, I'm following the instructions at https://learn.microsoft.com/es-es/sql/linux/sql-server-linux-migrate-restore-database?view=sql-server-linux-2017. The Linux instance is freshly installed on Ubuntu 16.04.4 LTS. The Windows database backup already existed (I did not run the backup myself, but it's a full backup).

When I try to restore, it generates an error for the full-text catalog file (access denied), as shown (database name changed to 'mydb' for privacy; messages translated from Spanish):

sqlcmd -S localhost -U SA -Q "RESTORE DATABASE mydb FROM DISK = '/var/opt/mssql/backup/mydb_backup_201804300000.bak' WITH MOVE 'mydb' TO '/var/opt/mssql/data/mydb.mdf', MOVE 'mydb_log' TO '/var/opt/mssql/data/mydb_log.ldf', MOVE 'sysft_appuser_catalog3' TO '/var/opt/mssql/data/catalog.ft'"
Msg 7610, Level 16, State 1, Server irulan, Line 1
Access is denied to '/var/opt/mssql/data/catalog.ft', or the path is invalid.
Msg 3156, Level 16, State 50, Server irulan, Line 1
File 'sysft_appuser_catalog3' cannot be restored to '/var/opt/mssql/data/catalog.ft'. Use WITH MOVE to identify a valid location for the file.

The other two files (mdf and ldf) are created without problems in the same folder. I have tried different file names, creating the file beforehand (touch), and so on, with no success. How can I restore this database? I'd be willing to restore it without the full-text index; is there a way to do that?
This is the output of FILELISTONLY (to check the contents of the backup; the wide output is condensed to the relevant columns):

LogicalName             PhysicalName                                                                 Type  FileGroupName  Size
mydb                    D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\mydb.mdf            D     PRIMARY        3460300800
mydb_log                D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\mydb_log.ldf        L     NULL           1540227072
sysft_appuser_catalog3  D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\FTData\appuser_catalog3  F     PRIMARY        931744

(3 rows affected)

Note: I have found this Stack Overflow post: https://stackoverflow.com/questions/536182/mssql2005-restore-database-without-restoring-full-text-catalog; the poster had a different problem, but was also willing to restore the DB without the full-text data. It doesn't say how (if!) he ever resolved his problem, so it doesn't really give me an answer.
user155183
Jul 12, 2018, 07:00 PM • Last activity: Jul 15, 2025, 09:01 AM
0 votes
3 answers
194 views
How to restore snapshots to another table?
Someone accidentally deleted several records from a Cassandra table in the cluster. I want to restore the last snapshot into a copy of the table with the same structure within the cluster. What is the best way to do that? Cassandra v4.0.4
EdiM (51 rep)
May 13, 2024, 07:29 AM • Last activity: Jul 5, 2025, 02:02 PM
0 votes
1 answers
170 views
MySQL import silently failing
I'm trying to import a large (30 GB) SQL file (generated using mysqldump) into an RDS MySQL 5.7 database, from an EC2 instance in the same VPC. It imports most of the file without a problem, but around 5 tables from the end of the dump are missing. Looking around for related logs in the AWS console, I can't really find much of use; all I'm seeing is this:
2021-02-17T12:16:20.168722Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 10360ms. The settings might not be optimal. (flushed=599 and evicted=179, during the time.)
2021-02-17T12:16:36.461195Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 5988ms. The settings might not be optimal. (flushed=490 and evicted=124, during the time.)
2021-02-17T12:16:46.848239Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 6766ms. The settings might not be optimal. (flushed=447 and evicted=0, during the time.)
2021-02-17T12:17:16.371538Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 10380ms. The settings might not be optimal. (flushed=370 and evicted=240, during the time.)
2021-02-17T12:19:22.153786Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 27942ms. The settings might not be optimal. (flushed=299 and evicted=637, during the time.)
2021-02-17T12:21:19.368279Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 16509ms. The settings might not be optimal. (flushed=361 and evicted=136, during the time.)
2021-02-17T12:23:01.420623Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 5097ms. The settings might not be optimal. (flushed=349 and evicted=125, during the time.)
2021-02-17T12:23:16.836488Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 8285ms. The settings might not be optimal. (flushed=307 and evicted=121, during the time.)
2021-02-17T12:23:52.965471Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 19104ms. The settings might not be optimal. (flushed=299 and evicted=607, during the time.)
----------------------- END OF LOG ----------------------
I'm not really sure where to start looking without any logs. The only thing I can think of is maybe lack of memory, but looking at the stats, the RDS instance still had some memory left while it was importing. When I check the file, the CREATE TABLE statements are definitely there for the missing tables. Can anyone give any tips on how I can figure out what's going wrong?
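One way to pin down exactly which statements never took effect (an illustrative sketch, not from the original post) is to extract the table names from the dump file and diff them against what SHOW TABLES reports on the RDS side; here the dump text and the server-side list are stand-in values:

```python
import re

def dump_table_names(dump_sql):
    """Table names from mysqldump output, via its CREATE TABLE lines."""
    return set(re.findall(r"CREATE TABLE `([^`]+)`", dump_sql))

# Stand-in for the 30 GB dump file's contents:
dump_sql = """
CREATE TABLE `users` (id INT);
CREATE TABLE `orders` (id INT);
CREATE TABLE `audit_log` (id INT);
"""
# Stand-in for the result of SHOW TABLES on the RDS instance:
restored = {"users", "orders"}

missing = dump_table_names(dump_sql) - restored
print(sorted(missing))  # ['audit_log']
```

Knowing which table the import died on narrows the search; rerunning just that table's section with the mysql client's --show-warnings (and without redirecting stderr away) usually surfaces the underlying error.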
BT643 (229 rep)
Feb 17, 2021, 12:33 PM • Last activity: Jul 5, 2025, 06:05 AM
2 votes
2 answers
4422 views
Mongodb _id Value and Indexes Are Lost During Mongorestore
Steps I used:
1. mongodump
2. copy the dump folder to another computer (for each collection I get .bson and metadata files)
3. mongorestore myDumpFolder

I noticed the indexes are lost and _id has new values after mongorestore. Can anyone share a restore experience that preserved both the indexes and the MongoDB _id?
Will (121 rep)
May 29, 2018, 06:30 PM • Last activity: Jul 2, 2025, 10:44 PM
1 votes
1 answers
177 views
dbcc checkdb errors, or msg 601 when restoring SQL 2012 backup to SQL 2016
I need to move two databases, WES and WILL, from a Win2012/SQL2012 instance to a Win2016/SQL2016 instance for a software upgrade. I'm using the full backup files from the SQL 2012 maintenance plans. I've verified I can restore them to the original SQL 2012 instance with no errors.

Restoring WES to SQL 2016 reports no errors, but DBCC CHECKDB shows errors:

> Msg 8939, Level 16, State 98, Line 13
> Table error: Object ID 60, index ID 1, partition ID 281474980642816, alloc unit ID 281474980642816 (type In-row data), page (1:453). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 133129 and -4. ...etc

Restoring WILL to SQL 2016 stops partway through the upgrade steps with:

> Database 'WILL' running the upgrade step from version 805 to version 806.
> Msg 601, Level 12, State 3, Line 2
> Could not continue scan with NOLOCK due to data movement.

I thought the upgrade between versions was automatic. Is there anything else I need to do?
JeffH (11 rep)
Feb 25, 2022, 02:24 PM • Last activity: Jul 1, 2025, 09:07 PM
0 votes
3 answers
2450 views
The media family on device incorrectly formed for read only file group restore
Yes the databases (source and destination) are the same version. I have checked 3 times. Both are "Microsoft SQL Server 2019 (RTM-CU8-GDR) (KB4583459) - 15.0.4083.2 (X64)" The database itself restores ok without error. Only when I try the read-only file group restore does it give the error. I have r...
Yes, the databases (source and destination) are the same version; I have checked 3 times. Both are "Microsoft SQL Server 2019 (RTM-CU8-GDR) (KB4583459) - 15.0.4083.2 (X64)". The database itself restores OK without error; only when I try the read-only filegroup restore does it give the error. I have redone the backup and attempted the restore again, in case something was wrong with the backup. I have done this numerous times with SQL Server 2016, and even when upgrading from SQL Server 2016 to 2019. But now that I am testing the restores, it just looks broken to me. It almost looks like a bug.
Thostenor (1 rep)
Jan 19, 2021, 01:14 AM • Last activity: Jul 1, 2025, 08:02 PM