
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
140 views
Huge Transaction Log File - Full Database backup, but no Transaction log backups
I hope someone can help. Please be gentle, this type of admin is not what I usually do. I have inherited a database which is set up to do a FULL database backup at 02:00 every day. The transaction log is huge (almost 1 TB) and has never been backed up (hence it is growing). The database is 172 GB. The logs and database are on different drives, as are the backups.

We would like to put a new maintenance plan in place with one FULL backup every week, four differential backups each day, and transaction log backups every 15 minutes. The size of the transaction log is worrying me; we don't have a drive with that much space in order to back it up.

Can I set the recovery model to Simple, shrink (rather than delete) the transaction log, change back to Full, then set up the new jobs to do the differential and log backups? Or will that break something? Is there a proper / better way to deal with this transaction log? I'm worried we will run out of space soon on the drive. Very happy to provide more info if needed; I wasn't sure what would be pertinent at this point. Thanks in advance.
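For what it's worth, the usual remedy is not to delete the log file but to break the chain, shrink the file, and then restart the chain with a fresh full backup. A minimal T-SQL sketch, assuming a hypothetical database [MyDb] whose log file's logical name is MyDb_log (look it up in sys.database_files first):

```
USE [MyDb];
SELECT name FROM sys.database_files WHERE type_desc = 'LOG';  -- find the logical name

ALTER DATABASE [MyDb] SET RECOVERY SIMPLE;   -- breaks the log chain, allows truncation
DBCC SHRINKFILE (MyDb_log, 8192);            -- shrink the log file to ~8 GB (size in MB)
ALTER DATABASE [MyDb] SET RECOVERY FULL;

-- a new full backup restarts the log chain before the 15-minute log backups begin
BACKUP DATABASE [MyDb] TO DISK = N'X:\Backups\MyDb_full.bak' WITH CHECKSUM;
```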
Jubs666 (1 rep)
Jul 24, 2023, 02:10 PM • Last activity: Aug 7, 2025, 09:01 AM
0 votes
1 answer
483 views
LSN numbers query regarding Differential backups
We have the following backup regime in place, which I am not 100% sure about:

- Full backup each night at 18:00, taking approx. 6 hours.
- Transaction log backup upon completion of the full, generally about 10 mins.
- Differential backups each hour, taking approx. 20 mins.

When querying the msdb.backupset table, I understand that the first_lsn will only be reset by a transaction log backup, but am slightly confused that the last_lsn entry is incremented each time the differential backup runs. This seems to imply that the differential backup also contains transaction log entries (which I don't believe it does). Is this entry in msdb.backupset just there to tell a restore which transaction log backup to use to preserve the chain of transactions?

Finally, if we only have a transaction log backup once a night and differentials throughout the day, we are still looking at potentially a maximum of 24 hours' data loss despite hourly differentials. Apologies if this seems like a lot of questions, but I am trying to get this clear in my head. Thanks for listening.
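For reference, a minimal sketch of the kind of msdb query under discussion, with a hypothetical database name 'MyDb'; the database_backup_lsn column is the piece that ties each differential and log backup back to its base full:

```
SELECT backup_start_date,
       type,                -- D = full, I = differential, L = log
       first_lsn,
       last_lsn,
       database_backup_lsn  -- LSN of the full backup this diff/log depends on
FROM   msdb.dbo.backupset
WHERE  database_name = 'MyDb'
ORDER BY backup_start_date;
```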
zapcon (73 rep)
Jun 1, 2017, 11:16 AM • Last activity: Aug 7, 2025, 12:04 AM
0 votes
1 answer
727 views
Restore backup global using pgadmin
I created a backup of globals by right-clicking "PostgreSQL 15" in pgAdmin (screenshot: https://i.sstatic.net/9FAZb.png). It generates a .sql file. How do I restore the globals backup in pgAdmin? I tried searching on Google but didn't find the answer.
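In case it helps, the globals backup pgAdmin produces is a plain SQL script (the output of pg_dumpall --globals-only), so it is normally replayed with psql rather than pgAdmin's Restore dialog, which expects pg_dump's custom/tar/directory formats. A minimal sketch, assuming a local server and a file named globals_backup.sql:

```
psql -U postgres -h localhost -d postgres -f globals_backup.sql
```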
LLL (101 rep)
Jun 21, 2023, 10:35 AM • Last activity: Aug 5, 2025, 05:01 AM
0 votes
1 answer
142 views
How can I manually backup a large Azure Flexible Postgres database?
I have a large (3 TB+) single database on an Azure Postgres Flexible Server. This is a managed service, so it takes backups automatically and the frequency can be adjusted. As a disaster recovery/emergency option, I'd like to periodically take a snapshot of the database separately from these incremental backups and store it somewhere else. I don't need ongoing connectivity for incremental backups (the current managed system does that); I just want a periodic backup that would be restorable to a new server.

I would just choose Azure's own Vaulted Backup option, which provides enough separation for this purpose. But according to the docs:

> Vaulted backups are supported for server size <= 1 TB. If backup is configured on server size larger than 1 TB, the backup operation fails.

So I'm looking for other options. I checked out Barman, which looks like it should be perfect. But the managed Postgres Flexible Server doesn't support ssh/rsync access, and I'm getting conflicting info on whether pg_basebackup is supported (it seems unlikely, along with other physical replication options).

I can't use Azure's own tools to back up my server; it's too big. I can't use external tools; they can't have shell access. The last option is just pg_dump, which in my experience will take days to finish and may need to be performed on a point-in-time restore so that performance isn't affected. I'd prefer to create and restore from a physical backup rather than a logical one, so this is a last resort.

Is there a better option for a managed server this size than pg_dump?
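If a logical backup does end up being the fallback, a directory-format dump taken in parallel is usually dramatically faster than a plain single-threaded pg_dump on a database this size. A minimal sketch, with placeholder host, user, job count, and paths:

```
# parallel, compressed, directory-format dump (restore later with pg_restore -j)
pg_dump -h myserver.postgres.database.azure.com -U admin_user \
        -Fd -j 8 -Z 5 -f /backup/mydb.dump mydb
```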
zach (1 rep)
May 28, 2025, 11:05 PM • Last activity: Aug 5, 2025, 12:06 AM
0 votes
2 answers
145 views
Point in time Postgres restore
I was trying to do a point-in-time restore. I placed my previously backed-up WAL file in the directory that my restore_command in postgresql.conf points to (/var/lib/postgresql/16/main/pg_wal/wal_archive), however my WAL file's changes have not been applied:

```
listen_addresses = '*'
wal_level = replica
max_connections = 100
archive_mode = on
archive_command = 'test ! -f /var/lib/postgresql/16/main/pg_wal/wal_archive/%f && cp %p /var/lib/postgresql/16/main/pg_wal/wal_archive/%f'
restore_command = 'cp /var/lib/postgresql/16/main/pg_wal/wal_archive/%f %p'
#max_wal_senders = 10
#wal_log_hints = on
```

And I have created an empty recovery.signal file. Here are the logs from restarting Postgres (my backed-up file is named 000000010000000000000039):

```
2024-02-23 17:25:02.199 UTC LOG:  starting PostgreSQL 16.2 (Debian 16.2-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
2024-02-23 17:25:02.199 UTC LOG:  listening on IPv4 address "0.0.0.0", port 5432
2024-02-23 17:25:02.199 UTC LOG:  listening on IPv6 address "::", port 5432
2024-02-23 17:25:02.200 UTC LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2024-02-23 17:25:02.204 UTC LOG:  database system was shut down at 2024-02-23 17:25:02 UTC
cp: cannot stat '/var/lib/postgresql/16/main/pg_wal/wal_archive/00000003.history': No such file or directory
2024-02-23 17:25:02.207 UTC LOG:  starting archive recovery
2024-02-23 17:25:02.209 UTC LOG:  restored log file "00000002.history" from archive
cp: cannot stat '/var/lib/postgresql/16/main/pg_wal/wal_archive/00000002000000000000003A': No such file or directory
2024-02-23 17:25:02.216 UTC LOG:  consistent recovery state reached at 0/3A0000A0
2024-02-23 17:25:02.216 UTC LOG:  invalid record length at 0/3A0000A0: expected at least 24, got 0
2024-02-23 17:25:02.216 UTC LOG:  redo is not required
2024-02-23 17:25:02.217 UTC LOG:  database system is ready to accept read-only connections
cp: cannot stat '/var/lib/postgresql/16/main/pg_wal/wal_archive/00000002000000000000003A': No such file or directory
cp: cannot stat '/var/lib/postgresql/16/main/pg_wal/wal_archive/00000003.history': No such file or directory
2024-02-23 17:25:02.223 UTC LOG:  selected new timeline ID: 3
2024-02-23 17:25:02.263 UTC LOG:  restored log file "00000002.history" from archive
2024-02-23 17:25:02.264 UTC LOG:  archive recovery complete
2024-02-23 17:25:02.265 UTC LOG:  checkpoint starting: end-of-recovery immediate wait
2024-02-23 17:25:02.270 UTC LOG:  checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.001 s, total=0.006 s; sync files=2, longest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB; lsn=0/3A0000A0, redo lsn=0/3A0000A0
2024-02-23 17:25:02.275 UTC LOG:  database system is ready to accept connections
```
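One thing the log suggests is that recovery began from the live data directory with no recovery target set, so the server found nothing to replay ("redo is not required") and promoted straight to a new timeline. A minimal sketch of what PITR normally needs in postgresql.conf, set after restoring a base backup into the data directory and before creating recovery.signal (the target time is a placeholder):

```
restore_command        = 'cp /var/lib/postgresql/16/main/pg_wal/wal_archive/%f %p'
recovery_target_time   = '2024-02-23 17:00:00 UTC'   # placeholder target
recovery_target_action = 'promote'
```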
Руслан Пилипюк (1 rep)
Feb 23, 2024, 05:41 PM • Last activity: Aug 4, 2025, 10:04 AM
4 votes
2 answers
643 views
RMAN-06169, but only when backing up via OEM 13c
When running a Level 1 or Level 0 backup via OEM 13c, the following RMAN error causes the run to fail. (The RMAN script is being run with proper sysdba credentials from OEM 13c.)

```
RMAN-06169: could not read file header for datafile 2 error reason 5
```

Oracle documentation suggests reason 5 indicates 'Unable to open file'. However, if I execute the exact same RMAN statement directly on the server, the backup runs fine. We have removed the database from OEM and decommissioned the agent, then redeployed the agent and reinstated the database in OEM, but the error still occurs.
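A hedged way to separate a genuine datafile problem from an OEM/agent environment problem is to validate the file directly on the server, since reason 5 points at a file-open failure:

```
SQL>  SELECT file#, name, status FROM v$datafile WHERE file# = 2;
RMAN> VALIDATE DATAFILE 2;
```

If both succeed locally (as the manual run suggests they will), the difference is likely the OS environment the OEM agent runs under, e.g. file permissions for the agent's OS user.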
ca_elrod (61 rep)
Aug 7, 2018, 06:18 PM • Last activity: Aug 3, 2025, 02:06 AM
5 votes
1 answer
1521 views
Full database backup using xtrabackup stream
I'm new to Percona XtraBackup. I've been trying to perform a full backup streamed from my local server (a *test server, around 600 GB*) to a remote server. I just have some questions and need guidance, and I think this is the best place.

I have this command, which I executed locally:

```
innobackupex --user=user --password=password --stream=tar /which/directory/ | pigz | ssh user@10.11.12.13 "cat - > /mybackup/backup.tar.gz"
```

My questions are:

- **My log scan is not changing / increasing.**

```
>> log scanned up to (270477048535)
>> log scanned up to (270477048535)
>> log scanned up to (270477048535)
>> log scanned up to (270477048535)
>> log scanned up to (270477048535)
```

  I've read a comment before where someone said the log scan will not increase if no one is using the database. (Yes, no one is using the database.)

- **It's been running for a while.** I've tried using xtrabackup on a local test server with around 1.7 TB and it finished in just a few hours. Is it slow because I'm using a stream? What is the purpose of "/which/directory/" in my command? Is it going to store the file in /which/directory/ first and then transfer it to my remote server? Why do I have to specify a directory?

- **No file is created on my local server in /which/directory/ or on my remote server in /mybackup/.** Am I doing something wrong?

Is there a much easier way to perform this? My only goal is to back up my local database to a remote server; I'm doing this stream because I don't have enough disk space to store the backup locally. I'm using MariaDB 5.5 and Percona XtraBackup 2.2.
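For what it's worth, in a streamed backup the positional directory argument is only scratch space for temporary metadata; the backup itself goes to stdout, so no backup file appears under /which/directory/, and on the remote side /mybackup/backup.tar.gz should be growing while the stream runs. A sketch of the same pipeline using xbstream instead of tar, reusing the question's own hosts and paths as placeholders:

```
innobackupex --user=user --password=password --stream=xbstream /tmp/xb-tmp/ \
  | pigz \
  | ssh user@10.11.12.13 "cat - > /mybackup/backup.xbstream.gz"
```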
Thantanenorn (51 rep)
Mar 27, 2018, 01:45 AM • Last activity: Jul 31, 2025, 03:05 PM
0 votes
1 answer
379 views
error writing input to commandERROR: Invalid username and/or password
This is the error I have been getting in my RMAN notifications on my backup jobs for the last day or so. The title is exactly how the message is formatted.

> ~~~End Step Output/Error Log~~~
> Error Log:BackupOp
> error writing input to commandERROR: Invalid username and/or password
>
> ~~~End Step Output/Error Log~~~

I only have 2 backup jobs and both are affected by this. Not sure how much information here is useful, but what I have been looking at is that the SYS account the jobs use is not locked out:

```
select account_status from dba_users where username = 'SYS';

ACCOUNT_STATUS
--------------------------------
OPEN
```

The emagent.trc log is showing this pair of lines for every backup job since the first one failed:

```
2019-02-06 23:44:26,791 Thread-14472 ERROR command: failed to write to stdin of process 6228: (error=232, no message available)
2019-02-06 23:44:26,791 Thread-14472 ERROR command: nmejcc_feedInput: received -11 from process write
```

Restarting the DBConsole service has no effect and the errors persist. I can manually run full and archivelog backups just fine as sysdba. I see no reason to blame RMAN since I can run the RMAN backup scripts just fine. I think this is just an EM issue.
Matt (365 rep)
Feb 7, 2019, 05:05 PM • Last activity: Jul 30, 2025, 10:04 PM
0 votes
0 answers
12 views
wal-g/postgresql: How to turn continuous into punctual backups for sparsifying archival?
I would like to implement a backup scheme in which, for the past week, I have daily full backups with complete WAL logs for point-in-time recovery. Past a week, I would like to keep only daily snapshots (without PITR capabilities) for another week, then weekly snapshots for three months, monthly snapshots for another year, and permanent yearly snapshots.

The details of the schedule don't matter, but I am wondering how I could truncate the archived WALs such that for every base backup I decide to keep according to the schedule, I have just enough WAL to reach from the base backup to the closest consistent database state. I expect I would need the sparsified daily full backups plus, for each, a relatively small amount of WAL archives. Is this something I can achieve with WAL-G?
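For reference, wal-g's built-in retention primitives look roughly like the sketch below (syntax varies a little between wal-g releases, so treat this as an assumption to verify against your version's docs; the backup name is a placeholder). They delete whole base backups and the WAL that only those backups need, which covers the sparsifying schedule itself but not trimming a kept backup's WAL down to the minimal consistent range:

```
wal-g backup-list                                  # inventory of base backups
wal-g delete retain FULL 7 --confirm               # keep the newest 7 full backups
wal-g delete before FIND_FULL base_0000000100000000000000A3 --confirm
```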
Tilman Vogel (101 rep)
Jul 30, 2025, 03:53 PM
0 votes
1 answer
889 views
Barman: What's the best way to switch from backup_method postgres (streaming) to rsync (ssh)?
My production PostgreSQL server is currently backed up with Barman using backup_method postgres (streaming server). I would like to switch to the rsync method (ssh) to benefit from better compression. Do I have to stop the current process before reconfiguration, or can I modify the setup and activate it while running?

Current streaming setup:

```
conninfo = host=conn_host user=conn_user dbname=conn_db
streaming_conninfo = host=conn_host user=streaming_user
backup_method = postgres
streaming_archiver = on
slot_name = barman
minimum_redundancy = 3
last_backup_maximum_age = '3 DAYS'
retention_policy = 'RECOVERY WINDOW OF 31 DAYS'
```

Future ssh setup:

```
ssh_command = ssh postgres@conn_host
conninfo = host=conn_host user=conn_user dbname=conn_db
backup_method = rsync
reuse_backup = link
archiver = on
network_compression = true
minimum_redundancy = 3
last_backup_maximum_age = '3 DAYS'
retention_policy = 'RECOVERY WINDOW OF 31 DAYS'
wal_retention_policy = main
```
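Either way, after editing the server's configuration, a sanity pass like the following is typical (the server name 'pg' is a placeholder). With minimum_redundancy = 3, the existing streaming backups are kept until enough rsync backups accumulate:

```
barman check pg        # verify ssh, rsync and WAL archiving prerequisites
barman backup pg       # take the first rsync-based base backup
barman list-backup pg  # confirm old and new backups coexist
```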
jean pierre huart (111 rep)
May 15, 2019, 01:35 PM • Last activity: Jul 28, 2025, 09:05 PM
0 votes
1 answer
1032 views
could not read symbolic link error while executing pg_start_backup function in Postgresql DB
In order to take a backup of the data folder of a PostgreSQL 9.6 database, I executed the usual sequence of backup functions. Executing *pg_start_backup* produced warnings of the form *could not read symbolic link "pg_tblspc/...": Invalid argument*:

```
test=# checkpoint; 
       CHECKPOINT

test=# select pg_switch_xlog();
 pg_switch_xlog
----------------
 5/DF000238
(1 row)
test=# select pg_start_backup(to_char(now(), 'yyyy-mm-dd hh24:mi:ss'));
WARNING:  could not read symbolic link "pg_tblspc/oth_data_tbs": Invalid argument
WARNING:  could not read symbolic link "pg_tblspc/str_index": Invalid argument
WARNING:  could not read symbolic link "pg_tblspc/tbs_temp_default": Invalid argument
WARNING:  could not read symbolic link "pg_tblspc/oth_indx_tbs": Invalid argument
WARNING:  could not read symbolic link "pg_tblspc/str_data_ids": Invalid argument
WARNING:  could not read symbolic link "pg_tblspc/tbs_default": Invalid argument
WARNING:  could not read symbolic link "pg_tblspc/top_indx_tbs": Invalid argument
WARNING:  could not read symbolic link "pg_tblspc/rpt_data_tbs": Invalid argument
WARNING:  could not read symbolic link "pg_tblspc/str_data_h_index": Invalid argument
WARNING:  could not read symbolic link "pg_tblspc/rpt_indx_tbs": Invalid argument
WARNING:  could not read symbolic link "pg_tblspc/str_data": Invalid argument
WARNING:  could not read symbolic link "pg_tblspc/str_data_h": Invalid argument
WARNING:  could not read symbolic link "pg_tblspc/top_data_tbs": Invalid argument
 pg_start_backup
-----------------
 5/E0000028
(1 row)
```
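For reference, one way to see what those pg_tblspc entries should resolve to is to ask the server itself; a minimal sketch, assuming superuser access:

```
SELECT spcname, pg_tablespace_location(oid) AS location
FROM pg_tablespace;
```

Comparing that output against the actual pg_tblspc entries on disk shows whether the links themselves are damaged or merely unreadable to the backup process.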
rajv (1 rep)
Nov 11, 2020, 11:28 AM • Last activity: Jul 28, 2025, 01:00 AM
0 votes
1 answer
1738 views
Operation not allowed from within a pluggable database Pre-Built Developer VMs for Oracle VM VirtualBox
I downloaded the **Pre-Built Developer VMs for Oracle VM VirtualBox**, which contain Oracle Database 12c Enterprise Edition Release 12.2.0.1.0. I want to **back up the database using RMAN**, but it is in NOARCHIVELOG mode, so it is necessary to shut down and change the database to ARCHIVELOG mode. However, when I try to alter the database to archivelog, it fails with an error: *operation not allowed from within a pluggable database. An operation was attempted that can only be performed in the root container.*
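For reference, ARCHIVELOG mode is a property of the whole container database, so the change has to be made from CDB$ROOT rather than from inside the PDB. A minimal SQL*Plus sketch, run on the VM as the oracle OS user:

```
sqlplus / as sysdba          -- an OS-authenticated connection lands in the root
SHOW CON_NAME;               -- should report CDB$ROOT
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ALTER PLUGGABLE DATABASE ALL OPEN;
```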
Tomas A (3 rep)
Jan 25, 2023, 12:07 PM • Last activity: Jul 27, 2025, 04:05 PM
0 votes
1 answer
2856 views
ORA-31626: job does not exist during schema export
I'm exporting a schema with the following command:

```
expdp system/system@xxx.xxx.xxx.xxx:1521/orcl schemas=schema-name directory=test_dir dumpfile=Schema.dmp logfile=SchemaLog.log
```

but it results in the following error:

```
Connected to: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
ORA-31626: job does not exist
ORA-04063: package body "SYS.DBMS_INTERNAL_LOGSTDBY" has errors
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_INTERNAL_LOGSTDBY"
ORA-06512: at "SYS.KUPV$FT", line 1009
ORA-04063: package body "SYS.DBMS_LOGREP_UTIL" has errors
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_LOGREP_UTIL"
```

I googled a lot and tried the solutions suggested for 'ORA-31626: job does not exist' and 'ORA-04063: package body "SYS.DBMS_INTERNAL_LOGSTDBY" has errors', but none of them solved the problem. Could you please help resolve this?
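Since the stack shows invalid SYS package bodies rather than a Data Pump problem as such, a commonly suggested first step (hedged, as the root cause can vary) is to recompile invalid objects and retry the export:

```
sqlplus / as sysdba

SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID' AND owner = 'SYS';

@?/rdbms/admin/utlrp.sql   -- recompile invalid objects
```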
Alpha (153 rep)
Dec 30, 2020, 08:58 AM • Last activity: Jul 27, 2025, 12:06 PM
1 votes
1 answer
8733 views
postgresql: Invalid data directory | Can't open PID file /var/run/postgresql/10-main.pid (yet?) after start: No such file or directory
This error shows up after running sudo systemctl status postgresql@10-main.service:

```
postgresql@10-main.service - PostgreSQL Cluster 10-main
   Loaded: loaded (/lib/systemd/system/postgresql@.service; indirect; vendor preset: enabled)
   Active: failed (Result: protocol) since Tue 2020-12-01 14:21:40 UTC; 2s ago
  Process: 1603 ExecStart=/usr/bin/pg_ctlcluster --skip-systemctl-redirect 10-main start (code=exited, sta

Dec 01 14:21:40 ubuntu-s-1vcpu-2gb-sgp1-01 systemd: Starting PostgreSQL Cluster 10-main...
Dec 01 14:21:40 ubuntu-s-1vcpu-2gb-sgp1-01 postgresql@10-main: Error: Invalid data directory
Dec 01 14:21:40 ubuntu-s-1vcpu-2gb-sgp1-01 systemd: postgresql@10-main.service: Can't open PID file /va
Dec 01 14:21:40 ubuntu-s-1vcpu-2gb-sgp1-01 systemd: postgresql@10-main.service: Failed with result 'pro
Dec 01 14:21:40 ubuntu-s-1vcpu-2gb-sgp1-01 systemd: Failed to start PostgreSQL Cluster 10-main.
```

I have tried all the solutions out there. I can't even take a backup with pg_dump; running pg_dump database_name > database_backup.bak gives:

```
pg_dump: [archiver (db)] connection to database "database_name" failed: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
```

Here are the permissions of /var/lib/postgresql/10/main:

```
-rwx------  1 postgres postgres    3 Apr 28  2020 PG_VERSION
drwx------  7 postgres postgres 4096 Nov  9 23:36 base
drwx------  2 postgres postgres 4096 Nov 29 19:28 global
drwx------  2 postgres postgres 4096 Apr 28  2020 pg_commit_ts
drwx------  2 postgres postgres 4096 Apr 28  2020 pg_dynshmem
drwx------  4 postgres postgres 4096 Nov 29 20:18 pg_logical
drwx------  4 postgres postgres 4096 Apr 28  2020 pg_multixact
drwx------  2 postgres postgres 4096 Nov 29 19:27 pg_notify
drwx------  2 postgres postgres 4096 Apr 28  2020 pg_replslot
drwx------  2 postgres postgres 4096 Apr 28  2020 pg_serial
drwx------  2 postgres postgres 4096 Apr 28  2020 pg_snapshots
drwx------  2 postgres postgres 4096 Nov 29 20:18 pg_stat
drwx------  2 postgres postgres 4096 Apr 28  2020 pg_stat_tmp
drwx------  2 postgres postgres 4096 Sep  2 00:07 pg_subtrans
drwx------  2 postgres postgres 4096 Apr 28  2020 pg_tblspc
drwx------  2 postgres postgres 4096 Apr 28  2020 pg_twophase
drwx------  3 postgres postgres 4096 Nov 28 01:33 pg_wal
drwx------  2 postgres postgres 4096 Apr 28  2020 pg_xact
-rwx------  1 postgres postgres   88 Apr 28  2020 postgresql.auto.conf
-rwx------  1 postgres postgres   95 Nov 29 19:27 postmaster.opts
```

And here is the output after running sudo -u postgres psql:

```
psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
```

Please assist me in fixing this. Any kind of help will be appreciated. Thanks in advance.
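On Debian/Ubuntu the "Invalid data directory" message comes from the pg_ctlcluster wrapper, so a hedged first check is to compare what the cluster tooling thinks the data directory is against what is actually on disk (the paths below are the stock 10/main layout):

```
pg_lsclusters                                            # cluster status and data dir
grep data_directory /etc/postgresql/10/main/postgresql.conf
sudo ls -ld /var/lib/postgresql/10/main                  # should be drwx------ postgres:postgres
```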
Farid Chowdhury (119 rep)
Dec 1, 2020, 04:10 PM • Last activity: Jul 27, 2025, 02:04 AM
0 votes
2 answers
743 views
Does using the export option to backup a MySQL database (12GB size) cause any performance issues or locking on database?
I have a 12 GB database. I want to take a compressed backup using the export option via phpMyAdmin. Does this cause any performance impact or locking on the database?
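For comparison, the command-line equivalent most often used at this size is mysqldump with --single-transaction, which for InnoDB tables takes a consistent snapshot without blocking readers or writers; phpMyAdmin's export runs through PHP and is more likely to hit timeouts on 12 GB. A minimal sketch with placeholder credentials:

```
mysqldump --single-transaction --quick -u backup_user -p mydb | gzip > mydb.sql.gz
```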
variable (3590 rep)
Nov 9, 2021, 09:55 AM • Last activity: Jul 26, 2025, 02:03 AM
0 votes
2 answers
686 views
Restoring secondary backup files on primary server in sql server
I want to restore a log backup file on top of a copy-only full backup, in restoring mode, on the primary replica server; these backups were taken from the secondary replica. I took a copy-only full backup on the secondary replica, then failed over, and after that I took a .trn log backup on the (now) primary replica. Both the full and the .trn backup come from the same server, but the full backup was taken while the server was in the secondary role and the log backup while it was primary. I want to restore both of them, in restoring mode, on the secondary replica after the failover. I wanted to add the database to the cluster by hand because our databases are big. When they are restored on the primary server I get this error:

> The log in this backup set begins at LSN 38202000004450500001, which is too recent to apply to the database. An earlier log backup that includes LSN 38167000000015500001 can be restored.
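For reference, a minimal sketch of checking and applying the chain by hand (paths and names are placeholders). The FirstLSN/LastLSN values reported by HEADERONLY have to line up, and the error quoted above means at least one log backup taken between the copy-only full and this .trn is missing from the sequence; in an Availability Group the log chain is shared across replicas, so that intermediate log backup may have been taken on the other replica:

```
RESTORE HEADERONLY FROM DISK = N'X:\bak\MyDb_copyonly.bak';  -- inspect LSNs
RESTORE HEADERONLY FROM DISK = N'X:\bak\MyDb_log.trn';

RESTORE DATABASE [MyDb] FROM DISK = N'X:\bak\MyDb_copyonly.bak' WITH NORECOVERY;
RESTORE LOG [MyDb] FROM DISK = N'X:\bak\MyDb_log.trn' WITH NORECOVERY;
```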
NShyn (1 rep)
Jan 25, 2022, 09:57 AM • Last activity: Jul 25, 2025, 10:05 PM
0 votes
1 answer
147 views
How to better backup MySQL 5.6 database?
Today I had an issue where I accidentally deleted a whole table (which cascaded to a few other tables) due to a funky piece of MySQL syntax, see here.

Luckily I had a backup from midnight; however, all data was lost between midnight and when I deleted all the rows in my table. How can I back up my database in a way that stays more up to date with the live system, or perhaps just lags an hour behind it? I've seen people mention replication and binary logs, but this is all kind of new to me. From my understanding, with a replication setup, deleting my table on the master would also delete it in the slave database? Would I be better off taking hourly differential backups? Or should I just do full backups every hour? Our database is tiny, less than a gig, and we have room, so it's feasible, but I would rather do things the proper way that would work for larger databases as well.
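For what it's worth, the usual answer here is the binary log rather than more frequent dumps: with log_bin enabled, the nightly dump plus a binlog replay gives point-in-time recovery up to just before the mistake. A hedged sketch (times and file names are placeholders):

```
# in my.cnf, then restart mysqld:
#   [mysqld]
#   log-bin = /var/log/mysql/mysql-bin

# after restoring the midnight dump, replay changes up to just before the DELETE
mysqlbinlog --start-datetime="2020-01-03 00:00:00" \
            --stop-datetime="2020-01-03 13:59:00" \
            /var/log/mysql/mysql-bin.000042 | mysql -u root -p
```

And yes, plain replication would not have helped by itself: the DELETE replicates to the slave almost immediately. A deliberately delayed replica or the binlog approach above covers that case.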
bjk116 (225 rep)
Jan 3, 2020, 07:56 PM • Last activity: Jul 25, 2025, 09:05 AM
1 votes
2 answers
6615 views
Restore Postgres 9.0 backup into Postgres 9.3
I'm having a problem restoring my Postgres 9.0 database into a 9.3 version. I did the backup with this command:

```
pg_dump -U user database -f /root/mydb.sql
```

When I send the backup to the new server and try to restore it, I get this error:

```
psql.bin:/root/mydb.sql:5628157: invalid command \n
psql.bin:/root/mydb.sql:5628158: invalid command \n
psql.bin:/root/mydb.sql:5628159: invalid command \n
psql.bin:/root/mydb.sql:5628160: invalid command \n
psql.bin:/root/mydb.sql:5628161: invalid command \n
psql.bin:/root/mydb.sql:5628162: invalid command \n
psql.bin:/root/mydb.sql:5628163: invalid command \n
psql.bin:/root/mydb.sql:5628164: invalid command \n
psql.bin:/root/mydb.sql:5628165: invalid command \n
psql.bin:/root/mydb.sql:5628166: ERROR:  syntax error at or near "PageRange"
LINE 1: PageRange=1-2highFoto_ALTA
```
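In case it helps, "invalid command \n" from psql usually means it lost track of a COPY data block, often because the plain-text dump was altered in transit (line endings, truncation). A custom-format dump restored with pg_restore tends to be more robust for version hops; a minimal sketch reusing the question's own names, ideally running the newer 9.3 pg_dump against the 9.0 server:

```
pg_dump -U user -Fc -f /root/mydb.dump database
pg_restore -U user -d database /root/mydb.dump
```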
jask (113 rep)
Aug 26, 2016, 02:05 PM • Last activity: Jul 22, 2025, 10:22 AM
0 votes
1 answer
144 views
GRANT REPLICATION SLAVE ON - ERROR 1064 (42000): You have an error in your SQL syntax
I'm setting up MySQL replication using the XtraBackup guide: https://docs.percona.com/percona-xtrabackup/innovation-release/set-up-replication.html#2-copy-backed-up-data-to-the-replica

On the master/source I'm running this command:

```
GRANT REPLICATION SLAVE ON *.* TO 'root'@'12.34.56.78' IDENTIFIED BY 'asdfd34F4fg';
```

It gives this error:

```
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'IDENTIFIED BY 'asdfd34F4fg'' at line 1
```

How can I fix it?
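For what it's worth, MySQL 8.0 removed the GRANT ... IDENTIFIED BY form, which is exactly what ERROR 1064 at 'IDENTIFIED BY' suggests: the account must be created first, then granted. A sketch reusing the question's host and password, with a dedicated replication user assumed instead of root:

```
CREATE USER 'repl'@'12.34.56.78' IDENTIFIED BY 'asdfd34F4fg';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'12.34.56.78';
```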
Vaughn (101 rep)
Feb 11, 2024, 10:57 AM • Last activity: Jul 21, 2025, 06:02 PM
0 votes
1 answer
1522 views
End of file reached. Error 38 in SQL. Passes verification checks
Having issues restoring a database in SSMS, error 38. The databases are transferred up to Google via their Drive API. I download a backup on my side and it has the error. If I transfer it through TeamViewer from the client PC I still get the same error. It is only happening on two clients; the rest are fine.

I have tried backing up with the FORMAT option and WITH CHECKSUM. Both succeed, and the backup verifies. I have run RESTORE FILELISTONLY and get proper data, except the PhysicalName path has a lowercase 'c' for the drive on the problem databases. I have read this could be a corrupt database, but I don't understand how the CHECKSUM and verify functions pass if so. Any insight would really help.

This is the backup command used in C#:

```
"BACKUP DATABASE " + database + " TO DISK = '" + database + "_" + dateTime.ToString("yyyy_MM_dd_HH") + "_CloudBackup.bak' WITH NAME = 'database', INIT, CHECKSUM"
```

EDIT: Running a check on the database produced no errors. I have tried updating the physical name of the .mdf and .ldf; it does not work. Taking a backup from SSMS produces a valid backup file. It has something to do with the services I wrote.

EDIT 2: I am restoring through the SSMS GUI. I have also tried:

```
RESTORE DATABASE db_name FROM 'backup_path'
```

Commands:

```
RESTORE VERIFYONLY FROM DISK = 'backup_path'
```

On both computers, mine and the server PC, this yields "The backup set on file 1 is valid."

```
RESTORE FILELISTONLY FROM DISK = 'backup_path'
```

Only tested on my PC. It returns paths to the mdf and ldf, no errors. Both PCs have SQL Server 2012 (SP1).
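Since the backup verifies at the source but fails after transfer, a hedged check is to hash the file on both machines; a size or hash mismatch would point at the Drive upload/download path rather than at SQL Server (the path is a placeholder):

```
Get-FileHash -Algorithm SHA256 'C:\backups\db_CloudBackup.bak'
(Get-Item 'C:\backups\db_CloudBackup.bak').Length
```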
Austen Swanson (1 rep)
Jul 23, 2019, 08:49 PM • Last activity: Jul 20, 2025, 08:02 AM