Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
3726
views
Oracle standby databases monitoring
I have many databases configured with a Data Guard setup. To monitor the standbys, I use OEM to check the lag, and I have enabled mail alerts that fire when the gap exceeds 50 archives.
What I recently observed, though, is that on one of my standby databases recovery stopped for some reason, and since the lag never went above 50 archives, no alert was raised. How can I make sure I am alerted when the standby's recovery process (MRP) itself is not active?
Is this possible? Any solutions?
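For reference, a minimal check of the apply process itself (a hedged sketch: run it on each standby from a scheduled job and alert when no MRP row comes back):
SQL> SELECT process, status, thread#, sequence#
       FROM v$managed_standby
      WHERE process LIKE 'MRP%';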
tesla747
(1910 rep)
Jan 4, 2016, 01:16 PM
• Last activity: Aug 7, 2025, 03:03 AM
0
votes
1
answers
195
views
Enabling standby node WAL streaming without pg_basebackup
I'm currently planning a migration to RDS, where I take a snapshot of our master and run pg_upgrade on a new machine (the reasoning being that if the upgrade fails for whatever reason, we still have our old Postgres instance around).
For good practice, I'd like to have a hot standby as well. The problem is that pg_basebackup takes too long. I tried to be clever: I took a snapshot of our standby instance (the original setup has a master and a standby), did a pg_upgrade, and attempted to make the two communicate with each other, but failed.
Is there a way to do what I'm trying to do, or is it impossible?
Thank you
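One hedged note: pg_upgrade builds the new cluster with initdb, so it gets a new system identifier, which is why a snapshot of the old standby cannot stream from the upgraded primary. A minimal sketch, assuming PostgreSQL 12+ and that a snapshot of the *upgraded* primary's data directory has been restored into $PGDATA on the standby (the host and user below are placeholders):
touch $PGDATA/standby.signal
echo "primary_conninfo = 'host=primary.example.com user=replicator'" >> $PGDATA/postgresql.auto.conf
pg_ctl -D $PGDATA start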
Apprentice Programmer
(101 rep)
Jul 22, 2021, 08:47 AM
• Last activity: Jun 16, 2025, 08:03 AM
3
votes
0
answers
545
views
Cannot add standby database to the Data Guard broker configuration in Oracle 11g
I duplicated the primary database to a standby database and then added the primary database to the Data Guard broker configuration. But when I try to add the standby database, I get the following error:
DGMGRL> add database 'orcl_standby' as connect identifier is orcl_standby maintained as physical;
Error: ORA-16606: unable to find property ""
Error: ORA-16554: translation not valid
Failed.
Is this a problem with log_archive_dest_state_2? I have enabled it.
Or is there a problem with the DR2ORCL.DAT file?
Any help will be appreciated.
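For reference, a hedged sketch of the checks worth running first: the connect identifier must resolve identically on both hosts, and the archive destination status can be inspected on the primary:
$ tnsping orcl_standby
DGMGRL> show configuration;
SQL> SELECT dest_id, status, error FROM v$archive_dest_status WHERE dest_id = 2;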
Kenny
(377 rep)
Aug 26, 2017, 11:36 AM
• Last activity: Jun 11, 2025, 03:20 AM
1
votes
0
answers
22
views
Barman backup issue in EDB PostgreSQL 16
I am facing a Barman backup issue with an EDB PostgreSQL 16 database.
1> To take incremental backups, I changed backup_method in the barman.conf file from postgres to rsync, keeping every other parameter the same:
[root@innolx4057 barman.d]# cat my_postgres_server.conf
[my_postgres_server]
description = "My PostgreSQL Server"
ssh_command = ssh dnb-reporting@10.184.40.57
conninfo = host=10.184.40.57 user=barman dbname=dnb-reporting port=5444
#streaming_conninfo = host=10.184.40.57 user=barman
#backup_method = postgres
backup_method = rsync
streaming_archiver = on
retention_policy = RECOVERY WINDOW OF 7 DAYS
#archiver = on
slot_name = barman
create_slot = auto
and then tried a backup, but got the error below:
[barman@innolx4057 ~]$ barman cron
WARNING: No backup strategy set for server 'my_postgres_server' (using default 'concurrent_backup').
Starting WAL archiving for server my_postgres_server
[barman@innolx4057 ~]$ barman check all
WARNING: No backup strategy set for server 'my_postgres_server' (using default 'concurrent_backup').
Server my_postgres_server:
ssh: FAILED (Connection failed using 'ssh dnb-reporting@10.184.40.57 -o BatchMode=yes -o StrictHostKeyChecking=no' return code 255)
archiver errors: FAILED (duplicates: 1)
**Kindly suggest what I am doing wrong here. After reverting from rsync back to postgres, the backup runs successfully again.**
############################################################
2> After a switchover, I am facing an issue running the backup (with backup_method=postgres):
[barman@innolx4058 ~]$ barman backup --reuse-backup=off --name backup_20250528_full_1 my_postgres_server
ERROR: Impossible to start the backup. Check the log for more details, or run 'barman check my_postgres_server'
2025-05-28 13:54:17,143 barman.command_wrappers INFO: my_postgres_server: pg_receivewal: starting log streaming at 1/24000000 (timeline 1)
2025-05-28 13:54:17,144 barman.command_wrappers INFO: my_postgres_server: pg_receivewal: error: could not send replication command "START_REPLICATION": ERROR: requested starting point 1/24000000 on timeline 1 is not in this server's history
2025-05-28 13:54:17,144 barman.command_wrappers INFO: my_postgres_server: DETAIL: This server's history forked from timeline 1 at 0/410000A0.
2025-05-28 13:54:17,145 barman.command_wrappers INFO: my_postgres_server: pg_receivewal: error: disconnected
2025-05-28 13:54:17,147 barman.server ERROR: ArchiverFailure:pg_receivewal terminated with error code: 1
2025-05-28 13:54:24,389 barman.server ERROR: Check 'replication slot' failed for server 'my_postgres_server'
2025-05-28 13:54:24,392 barman.server ERROR: Check 'receive-wal running' failed for server 'my_postgres_server'
2025-05-28 13:54:30,722 barman.server ERROR: Check 'replication slot' failed for server 'my_postgres_server'
2025-05-28 13:54:30,724 barman.server ERROR: Check 'receive-wal running' failed for server 'my_postgres_server'
2025-05-28 13:54:30,725 barman.server ERROR: Impossible to start the backup. Check the log for more details, or run 'barman check my_postgres_server'
2025-05-28 13:55:02,140 barman.utils INFO: Cleaning up lockfiles directory.
2025-05-28 13:55:02,315 barman.wal_archiver INFO: No xlog segments found from streaming for my_postgres_server.
2025-05-28 13:55:02,401 barman.server INFO: Starting receive-wal for server my_postgres_server
2025-05-28 13:55:02,403 barman.server ERROR: ArchiverFailure:pg_receivewal not present in $PATH
I tried:
[barman@innolx4058 ~]$ which pg_receivewal
/usr/edb/as16/bin/pg_receivewal
[barman@innolx4058 ~]$ barman switch-wal --archive --archive-timeout 60 my_postgres_server
No switch required for server 'my_postgres_server'
Waiting for a WAL file from server 'my_postgres_server' to be archived (max: 60 seconds)
^CERROR: Process interrupted by user (KeyboardInterrupt)
[barman@innolx4058 ~]$ barman receive-wal my_postgres_server
Starting receive-wal for server my_postgres_server
my_postgres_server: pg_receivewal: starting log streaming at 1/24000000 (timeline 1)
my_postgres_server: pg_receivewal: error: could not send replication command "START_REPLICATION": ERROR: requested starting point 1/24000000 on timeline 1 is not in this server's history
my_postgres_server: DETAIL: This server's history forked from timeline 1 at 0/410000A0.
my_postgres_server: pg_receivewal: error: disconnected
ERROR: ArchiverFailure:pg_receivewal terminated with error code: 1
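For reference, backup_method = rsync additionally requires non-interactive SSH from the barman user to the ssh_command target (backup_method = postgres does not use SSH at all), so a hedged sketch of the missing piece, using the user/host from the config above:
[barman@innolx4057 ~]$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
[barman@innolx4057 ~]$ ssh-copy-id dnb-reporting@10.184.40.57
[barman@innolx4057 ~]$ ssh dnb-reporting@10.184.40.57 -o BatchMode=yes true   # must succeed with no prompt
For the post-switchover timeline error, the streaming position Barman remembers is stale; a hedged sketch (flag names per current Barman docs, worth checking against your version):
[barman@innolx4058 ~]$ barman receive-wal --stop my_postgres_server
[barman@innolx4058 ~]$ barman receive-wal --reset my_postgres_server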
Cassandra Thrift
(307 rep)
May 28, 2025, 09:27 AM
0
votes
1
answers
625
views
Archive log cleanup with pg_archivecleanup
I have Postgres installed on Windows 10, with master-slave replication (1 PC is the master and 2 other PCs are slaves). Archive logs are saved to a folder on the master PC, and the slaves are connected to that folder. The folder has become quite big, and I need to know how to remove old log files with pg_archivecleanup. I also have automatic backup jobs running every day. 😊
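For what it's worth, a hedged sketch on Windows, where the archive folder path and the oldest-still-needed segment are placeholders (take the segment from the slaves' current replay position or the newest .backup file), and -n only prints what would be removed:
pg_archivecleanup -n "C:\pg_archive" 000000010000003700000010
pg_archivecleanup "C:\pg_archive" 000000010000003700000010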
Anisa Ibro
(31 rep)
May 5, 2021, 04:44 PM
• Last activity: Mar 14, 2025, 10:05 AM
0
votes
1
answers
654
views
Data Guard: datafiles are created in another diskgroup
I have a recently built physical Data Guard configuration and everything has worked correctly so far, but I've noticed that datafiles created on the primary are created on the standby under a different path. The environment is as follows:
Primary - db_name: oradb, db_unique_name: oradb. Oracle 11gR2. ASM storage. Diskgroups: SYS, SYS2 and DATA
Standby - db_name: oradb, db_unique_name: oradb2. Oracle 11gR2. ASM storage. Diskgroups: SYS, SYS2 and DATA.
When creating a tablespace on the primary, for example:
CREATE TABLESPACE TBS_DATA DATAFILE
'+DATA' SIZE 50M AUTOEXTEND ON NEXT 1M MAXSIZE 100M ...
the corresponding datafile is created in the DATA diskgroup: +DATA/oradb/datafile/tbs_data...
All good. My doubt is that on the standby database the datafile is created in the SYS diskgroup instead: +SYS/oradb2/datafile/tbs_data...
What should I do so the datafiles are created in the correct path, that is, +DATA/oradb2/datafile/tbs_data...?
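A hedged sketch of the usual cause: if db_create_file_dest on the standby points at +SYS, OMF takes precedence over db_file_name_convert for newly created files. Checking and fixing it would look like:
SQL> SHOW PARAMETER db_create_file_dest
SQL> ALTER SYSTEM SET db_create_file_dest = '+DATA' SCOPE=BOTH;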
Thank you very much in advance
Arnaldo Raxach
(145 rep)
May 7, 2019, 05:17 PM
• Last activity: Nov 17, 2024, 12:03 PM
0
votes
1
answers
43
views
Bloat on replica (standby)
To detect bloat we run some heavy queries, but since they may affect the DB server, we run them before or after prime time.
Could we run these queries (basically, check bloat) on the physical streaming-replication standby server instead?
Since bloat replicates too, it should work, but I don't have the resources to test it and I couldn't find a word about it on the internet.
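A minimal sketch of the idea, assuming the pgstattuple extension ('my_table' is a placeholder): its functions only read, so they run on a hot standby, although CREATE EXTENSION itself must be executed on the primary and replicate down:
CREATE EXTENSION IF NOT EXISTS pgstattuple;  -- on the primary
SELECT * FROM pgstattuple('my_table');       -- on the standby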
Baby
(1 rep)
Sep 18, 2024, 07:35 AM
• Last activity: Sep 18, 2024, 07:43 AM
0
votes
0
answers
27
views
New Oracle incarnation created
I have an Oracle 19c environment, where the primary has a standby and the standby has its own standby (Cascading DR).
Somehow the incarnation of the standby's own DR changed to a new one. In the logs, I see this:
rfs (PID:1286351): A new recovery destination branch has been registered
rfs (PID:1286351): Standby in the future of new recovery destination branch(resetlogs_id) 1177883943
rfs (PID:1286351): Incomplete Recovery SCN:0x000000003b52f9a6
rfs (PID:1286351): Resetlogs SCN:0x0000000039031be8
rfs (PID:1286351): SBPS:0x0000000039031be5
rfs (PID:1286351): Flashback database to SCN:0x0000000039031be5 (956505061) to follow new branch
rfs (PID:1286351): New Archival REDO Branch(resetlogs_id): 1177883943 Prior: 1115909325
rfs (PID:1286351): Archival Activation ID: 0x9cc516c2 Current: 0x9c3156ad
rfs (PID:1286351): Effect of primary database OPEN RESETLOGS
rfs (PID:1286351): Managed Standby Recovery process is active
There has been no activity on the primary database that would explain this behavior.
Can someone explain it?
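For reference, a hedged sketch for comparing the branch history on each of the three databases; the RFS messages above name the new resetlogs branch (1177883943), so it should be visible as a new incarnation somewhere upstream:
SQL> SELECT incarnation#, resetlogs_id, resetlogs_change#, status
       FROM v$database_incarnation
      ORDER BY incarnation#;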
datascinalyst
(105 rep)
Aug 26, 2024, 12:26 PM
1
votes
1
answers
267
views
How to copy a “Standby / Read-Only” database from one VM to another VM?
I am trying to upgrade from SQL 2016 to SQL 2019 (as SQL 2016 will be out of support in July 2026) on a different VM (Virtual Machine - Windows Server 2022), so I want to copy one database to the other machine.
I have a database (in SQL 2016) that is in “Standby / Read-Only” mode because a nightly job runs to feed it 24 hourly log files from a third party's SFTP server.
Below is a screenshot of how it was set up when these three databases were created, with recovery state "RESTORE WITH STANDBY":
I am trying to find a way to transfer the data (mdf) and log (ldf) files from the original VM (SQL 2016) to a new VM (SQL 2019) and create a new database there in “Standby / Read-Only” mode.
I stopped SQL Server on the SQL 2016 machine and was able to move both the data and log files.
Now, when I tried to attach the data (mdf) and log (ldf) files in SQL 2019, I got an error saying:
"Cannot attach a database that was being restored. (Microsoft SQL Server, Error: 1824)"
So, I am curious whether I have to change the mode of the original database from “Standby / Read-Only” to regular mode before moving the mdf and ldf files.
Or is it even possible?
Sorry, I called the latter mode "Regular" mode (I am not sure that is correct, though).
If that is the only solution (changing from “Standby / Read-Only” mode to regular), can I create a database in “Standby / Read-Only” mode in SQL 2019 (on the other VM) without any damage to the data or interruption of the log file feed?
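For reference, error 1824 means a database left in a restoring or standby state cannot be attached at all; the usual path is to restore the same full backup chain on the new server WITH STANDBY. A hedged sketch with placeholder names and paths:
-- on the SQL 2019 VM
RESTORE DATABASE MyDb
    FROM DISK = N'D:\Backups\MyDb_full.bak'
    WITH STANDBY = N'D:\Backups\MyDb_undo.dat',
         MOVE N'MyDb' TO N'D:\Data\MyDb.mdf',
         MOVE N'MyDb_log' TO N'D:\Log\MyDb.ldf';
-- then keep feeding the hourly logs
RESTORE LOG MyDb
    FROM DISK = N'D:\Logs\MyDb_hour01.trn'
    WITH STANDBY = N'D:\Backups\MyDb_undo.dat';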



Java
(253 rep)
Aug 3, 2024, 12:44 AM
• Last activity: Aug 6, 2024, 08:51 PM
0
votes
1
answers
170
views
How to recover the standby database after opening
I have a standby whose database role is PHYSICAL STANDBY and which is MOUNTED, so if I open it read/write the standby will be destroyed. (I have Data Guard.)
So basically, do I just need to mount it again and start the recovery process to fix it, or does this involve more steps?
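A hedged sketch of the repair path, assuming Flashback Database (or a guaranteed restore point) was enabled before the database was opened; without that, the standby generally has to be rebuilt from the primary:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO BEFORE RESETLOGS;
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;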
paganini
(138 rep)
Feb 19, 2024, 03:10 PM
• Last activity: Feb 19, 2024, 03:22 PM
0
votes
2
answers
428
views
PostgreSQL 12 streaming replication standbys timing out due to autovacuum tasks
We have PostgreSQL 12.9 running on Ubuntu 20.04.3 LTS in a streaming replication setup with 1 primary and 3 standbys where the following options have been configured on all servers:
- hot_standby=on
- hot_standby_feedback=on
- max_standby_archive_delay=30s
- max_standby_streaming_delay=30s
- wal_receiver_status_interval=10s
And on the primary only, we have configured:
- vacuum_defer_cleanup_age=1000
Our largest table is about 1TB in size (half heap, half index) with 6.5B rows, and it is also the most active one, with INSERTs/DELETEs occurring frequently, so for that table we have set **autovacuum_vacuum_scale_factor=0.001**. Other server-level autovacuum settings are:
- autovacuum_analyze_scale_factor=0.01
- autovacuum_analyze_threshold=40
- autovacuum_freeze_max_age=500000000
- autovacuum_max_workers=20
- autovacuum_multixact_freeze_max_age=400000000
- autovacuum_naptime=1s
- autovacuum_vacuum_cost_delay=10ms
- autovacuum_vacuum_cost_limit=10000
- autovacuum_vacuum_scale_factor=0.005
- autovacuum_vacuum_threshold=50
- autovacuum_work_mem=-1
- log_autovacuum_min_duration=100ms
Normally this works well and autovacuum kicks in without issue. However, now that we've seen an increase in TPS in our standby traffic, we are getting many 57014 errors on the standbys ("**canceling statement due to user request**") after about half an hour, which is normally how long it takes when actual tuples are removed. This causes spikes in user connections, making the clients unstable.
We also see many of the SELECTs being blocked, **waiting for an AccessShareLock** held by the autovacuum process.
We have traced each of these spikes back to the primary, which also logs an associated 57014 error ("**canceling autovacuum task**"). When this occurs we can see that no autovacuum is done for quite some time (several hours).
Things we've tried:
- Rebuilding indexes to remove bloat. (There is no table bloat, only index bloat).
- Increasing vacuum_defer_cleanup_age. (Originally the value was 0, and we tried increasing it from 200->400->1000 with no noticeable impact).
In the end, we've had to suspend autovacuum on the table in order to stabilize the system. We still have a nightly job that manually runs VACUUM (ANALYZE) on the database, but we are hoping there's a better solution.
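A hedged diagnostic sketch: the per-database conflict counters on each standby distinguish snapshot conflicts (cleanup removed rows a query still needed) from lock conflicts (the AccessShareLock waits above), which helps decide between raising max_standby_streaming_delay and relying on hot_standby_feedback:
SELECT datname, confl_snapshot, confl_lock, confl_bufferpin, confl_deadlock
  FROM pg_stat_database_conflicts;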
Diane
(151 rep)
Feb 17, 2023, 07:48 PM
• Last activity: Sep 22, 2023, 07:20 PM
-1
votes
1
answers
2853
views
I want to change the location of redo files on a standby db
My database is generating redo files to a certain path, let's call it 'old/path', and I want to move them to a new path, let's call it 'new/path', in Oracle.
Can somebody help?
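A hedged sketch for a physical standby, where group 4 stands in for each of your log groups under 'old/path'; apply has to be stopped and standby_file_management set to MANUAL while standby logfiles are dropped and re-created under the new path:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER SYSTEM SET standby_file_management = 'MANUAL';
SQL> ALTER DATABASE DROP STANDBY LOGFILE GROUP 4;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/new/path/srl04.log') SIZE 200M;
SQL> ALTER SYSTEM SET standby_file_management = 'AUTO';
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;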
dbafix
(64 rep)
May 23, 2023, 12:34 PM
• Last activity: May 24, 2023, 12:31 PM
6
votes
0
answers
411
views
Right way to set up a secondary citus worker
I am experimenting with the Postgres add-on Citus.
So far I have set up a cluster with one coordinator and three worker nodes.
Now I want to set up a secondary for one of the worker nodes.
So far I have set up streaming replication from that worker node to the fourth (standby) node.
Do I have to add the standby node as a secondary now with add_node and noderole "secondary"?
Is **citus_add_secondary_node** better, since you have to specify the matching *primary* node with that command?
Or is it simply enough to set up a fresh database with the citus extension, and the *add_node/secondary* call will turn that node into a secondary with all the data from the worker?
I get the right group ID for the secondary with this command:
select * from pg_dist_node;
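A hedged sketch of the citus_add_secondary_node call, run on the coordinator with placeholder host names (older Citus releases name it master_add_secondary_node; check your version's docs); the secondary is registered in the same group as the named primary worker:
SELECT citus_add_secondary_node('worker1-standby', 5432, 'worker1', 5432);
SELECT * FROM pg_dist_node;  -- the new row should carry the worker's groupid and noderole 'secondary'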
Nils
(149 rep)
Nov 17, 2022, 02:14 PM
• Last activity: Dec 13, 2022, 09:59 PM
0
votes
1
answers
620
views
Oracle Data Guard - wrong path for newly created datafiles
I've created a standby database with the following duplicate command:
run {
duplicate target database for standby from active database
spfile
parameter_value_convert 'ORCL','ORCLSTBY'
set db_name='ORCL'
set db_unique_name='ORCLSTBY'
set db_create_file_dest='/u01/app/oracle/oradata/ORCLSTBY'
set db_recovery_file_dest='/u01/app/oracle/oradata/ORCLSTBY'
set db_file_name_convert='/ORCL/','/ORCLSTBY/'
set log_file_name_convert='/ORCL/','/ORCLSTBY/'
set standby_file_management='AUTO'
;
}
During the duplicate, the existing datafile locations are correctly converted from '/u01/app/oracle/oradata/ORCL' to '/u01/app/oracle/oradata/ORCLSTBY' on the standby instance.
Afterwards I activated the Data Guard broker:
DGMGRL> CREATE CONFIGURATION adgconfig AS PRIMARY DATABASE IS ORCL CONNECT IDENTIFIER IS ORCL;
DGMGRL> ADD DATABASE ORCLSTBY AS CONNECT IDENTIFIER IS ORCLSTBY MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;
The problem comes when I try to create a new datafile on the primary instance:
ALTER TABLESPACE TBS_TEST ADD DATAFILE '/u01/app/oracle/oradata/ORCL/TEST02.dbf' size 100m;
On the standby side the new file is created under the wrong directory with the wrong name (example: /u01/app/oracle/oradata/ORCLSTBY/datafile/qwt9yr_test2.dbf).
Does anyone know what's wrong with my configuration?
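A hedged sketch of what to check first: the duplicate set db_create_file_dest on the standby, and OMF takes precedence over db_file_name_convert for newly added files (hence the system-generated name under .../datafile/). Clearing it should let the convert parameters apply:
SQL> SHOW PARAMETER db_create_file_dest
SQL> ALTER SYSTEM RESET db_create_file_dest SCOPE=SPFILE;
-- restart the standby, then retry the ADD DATAFILE on the primary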
Thank you
msak85
(3 rep)
Oct 24, 2022, 09:35 AM
• Last activity: Oct 24, 2022, 09:52 AM
0
votes
1
answers
4995
views
PostgreSQL 12 dropped replication slot but standby is still trying to contact the primary
We recently decommissioned a standby that was experiencing hardware issues from our streaming replication cluster. We successfully stopped the PID on the primary and then dropped the replication slot using **SELECT pg_drop_replication_slot('');**, confirmed by running **SELECT * FROM pg_catalog.pg_replication_slots;**.
We deleted the **standby.signal** file and commented out **primary_conninfo** in the **postgresql.conf** file. I even checked the **conf.d** sub-directory to make sure there weren't any additional settings for primary_conninfo in there.
We removed the entries from the HBA settings on the primary and the remaining standbys, but now we are seeing HBA errors from the standby still trying to reach the primary. We would rather not remove the data directory at this point, and we want to troubleshoot the hardware issue on the standby so we can eventually add it back without having to reinstall Postgres.
Why is it still trying to contact the primary, and what are we missing here? I do notice that hot_standby is still set on, but should that matter? We also ran **SELECT pg_wal_replay_pause();** on the standby for good measure, but it didn't change anything.
NOTE: I can also tell from **SELECT pg_size_pretty(sum(size)) FROM pg_ls_waldir();** that the WAL logs are not backing up.
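Two hedged checks that may explain it: a value set with ALTER SYSTEM lives in postgresql.auto.conf and silently overrides the commented-out line in postgresql.conf, and pg_stat_wal_receiver shows whether a WAL receiver is even running:
SELECT pid, status, conninfo FROM pg_stat_wal_receiver;
SELECT name, setting, sourcefile FROM pg_settings WHERE name = 'primary_conninfo';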
Diane
(151 rep)
Sep 28, 2021, 05:03 PM
• Last activity: Oct 19, 2022, 11:03 AM
0
votes
2
answers
231
views
WAL position to WAL archive file name
I want to manage copying archives to a remote location.
To do that, I want to delete all the archives I have copied, unless they are still required by the standby.
So, to run
pg_archivecleanup $ARCLOC $CHECKPOINT
I need to find the most recent checkpoint segment that is safe to delete up to. I don't look at the most recent .backup files, as they are too old for my archive capacity. What I do want to look at is replay_location / replay_lsn (depending on the version) from pg_stat_replication.
However, I do not know how to convert a replay_location value to a segment name, i.e. from something like 7/2DBCED18 to 000000020000000700000029.
**Answer:** the function is pg_xlogfile_name.
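A hedged usage sketch of both spellings (the function was renamed in PostgreSQL 10, and it must be run on the primary since it is not available during recovery):
SELECT client_addr, pg_walfile_name(replay_lsn) FROM pg_stat_replication;        -- 10+
SELECT client_addr, pg_xlogfile_name(replay_location) FROM pg_stat_replication;  -- 9.x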
Nir
(529 rep)
Jul 1, 2018, 12:00 PM
• Last activity: Sep 26, 2022, 08:17 AM
0
votes
2
answers
290
views
In a PostgreSQL repmgr master-slave environment, is the same H/W config necessary for a hot standby setup?
I have a **PostgreSQL repmgr master-slave setup with hot standby enabled**. In such a repmgr configuration, is it possible to configure postgresql.conf differently on the primary and the secondary while hot standby is on? I mean the parameters below:
- *shared_buffers,*
- *max_parallel_workers,*
- *max_worker_process,*
- *max_connections*
etc. Must these be the same, or can they differ? When they are different, I have seen the database on the secondary throw errors. So it seems to require the same hardware (RAM, CPU, etc.) for both master and slave, which is costly.
Please help: I can't start my slave with a smaller configuration, and I need a way for a slave on lesser hardware to work as a hot standby.
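For what it's worth, a hedged sketch of the usual split on a smaller standby: only a few parameters are checked against the primary during WAL replay and must be greater than or equal to the primary's values, while the memory-sizing ones may differ freely (the numbers below are placeholders):
# postgresql.conf on the standby
max_connections = 200             # must be >= primary
max_worker_processes = 8          # must be >= primary
max_prepared_transactions = 0     # must be >= primary
max_locks_per_transaction = 64    # must be >= primary
max_wal_senders = 10              # must be >= primary
shared_buffers = 2GB              # free to size for the standby's RAM
work_mem = 16MB                   # free
max_parallel_workers = 4          # free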
fahad_dba
(1 rep)
Sep 9, 2022, 04:02 PM
• Last activity: Sep 11, 2022, 06:52 AM
0
votes
0
answers
171
views
SQL error messages 3013 & 4305 for a Standby / Read-Only database that we use heavily for importing data
Sorry, I am not a DBA but a SQL developer, so I am trying to get some general support/feedback on my case (when I Googled the issue, the remedies seemed so scattered that I decided to post here).
I got this error message in SQL Server Agent for one of our "Standby / Read-Only" databases:
**SQL Server errors 3013 and 4305**.
Below are some of the error messages:
6_Exec SQL Nightly_StandByReporting_Run0235AM_PST,Error,1,
BNSQL2\PCCDAILY,6_Exec SQL Nightly_StandByReporting_Run0235AM_PST,Exec,,Executed as user:
BNSQL2\JDoh. ...0000001
which is too recent to apply to the database.
An earlier log backup that includes LSN 201999000000891200001 can be restored.
[SQLSTATE 42000] (Error 4305) RESTORE LOG is terminating abnormally.
[SQLSTATE 42000] (Error 3013) The log in this backup set begins at LSN
203741000000458400001
which is too recent to apply to the database. An earlier log backup that includes LSN
201999000000891200001 can be restored.
[SQLSTATE 42000] (Error 4305) RESTORE LOG is terminating abnormally.
[SQLSTATE 42000] (Error 3013) The log in this backup set begins at LSN
203744000000476800001
which is too recent to apply to the database. An earlier log backup that includes LSN
201999000000891200001 can be restored.
[SQLSTATE 42000] (Error 4305) RESTORE LOG is terminating abnormally.
[SQLSTATE 42000] (Error 3013) The log in this backup set begins at LSN
203749000000381600001 which is too recent to apply to the database.
An earlier log backup that includes LSN 201999000000891200001 can be restored.
[SQLSTATE 42000] (Error 4305) RESTORE LOG is terminating abnormally.
[SQLSTATE 42000] (Error 3013) The log in this backup set begins at LSN
203788000000038400001 which is too recent to apply to the database.
When I Googled, the brief reason given is:
"BACKUP DATABASE is terminating abnormally"
I am not sure why this happened; I can only guess it was due to some type of memory or hardware issue, because we use this database **heavily to import data** (into another database).
When I searched further for remedies, the following were suggested (from this site):
- Check Security Permission for user
- Delete the Previous Backup
- Perform Full Backup Restoration
I am not sure how to apply these to a **"Standby / Read-Only" database, though**.
I would appreciate any help fixing the issue.
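For reference, errors 4305/3013 here mean the restore chain is broken: the standby last stopped at LSN 201999000000891200001 and the files being applied begin much later, so the intervening log backups are missing. A hedged sketch for seeing the mismatch, with placeholder names:
-- what the standby database expects next
SELECT name, redo_start_lsn
  FROM sys.master_files
 WHERE database_id = DB_ID(N'MyStandbyDb') AND type_desc = 'ROWS';
-- what a given hourly file actually contains (FirstLSN / LastLSN columns)
RESTORE HEADERONLY FROM DISK = N'D:\Logs\hour01.bak';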

Java
(253 rep)
Aug 9, 2021, 10:03 PM
• Last activity: Aug 9, 2021, 10:20 PM
0
votes
1
answers
1131
views
Data Guard 12.2 without Force Logging
I would like to know your opinion of, and experience with, a specific setup I want to apply to a production database configured in Oracle Data Guard with one standby. It is not an Active Data Guard environment. From the beginning we have used FORCE LOGGING, as recommended, but I have started to realize that there are many operations run in nologging mode, and force logging turns them into a lot of redo and archive files, with the associated performance penalty.
Given that this Data Guard setup has no automatic failover and the RTO is 4 hours (the database is not very big either), I came across a document on Oracle Support covering scenarios where it is possible to disable force logging on the primary (12.2 onwards):
How to Resolve ORA-1578 /ORA-26040 Nologging operation in 12.2 on Standby database (Doc ID 2378500.1)
My scenario is as follows:
- There is a process running every five minutes that refreshes a set of tables using CTAS in nologging.
- As a consequence, the process generates huge amounts of redo and archive logs.
- Those tables which are recreated every five minutes are stored in an independent tablespace where no other schema contains any segment.
My idea is to disable FORCE LOGGING on the primary and, in case of a failover to the standby (which is not automatic), then perform
RMAN>RECOVER DATABASE NONLOGGED BLOCK ;
Do you see anything wrong with this approach? Any other points I should be aware of?
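One hedged check to add to the plan: with force logging off, a physical standby records the affected block ranges as they arrive, so the exposure can be watched continuously rather than discovered at failover time:
SQL> SELECT file#, block#, blocks, nonlogged_start_change#
       FROM v$nonlogged_block;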
Thank you in advance for your suggestions and ideas.
Roberto Hernandez
(143 rep)
Jun 24, 2021, 11:30 AM
• Last activity: Jun 24, 2021, 11:48 AM
2
votes
1
answers
2416
views
Strategy for the standby restore option with hourly log shipping files
I am trying to come up with a strategy for keeping up with a daily database update process using the "standby" restore mode.
I receive 24 log shipping files (the previous day's hourly transaction log backups) from a third party's FTP site, and I apply these 24 files in a nightly run.
I originally get these files in SQB format, and I have a tool and script to convert the SQB files into BAK format.
Now, I am trying to come up with a continuous restore plan.
The database does not have to be updated or modified, only read.
**Do I restore each transaction log file as "standby" every time and just leave the database in "standby" mode?**
I am planning to create a separate database that retrieves only the necessary data from some tables in this "read-only" database.
I have one more question. **If I accidentally run a script that restores this database in NORECOVERY or RECOVERY mode, is there a way to change it back to "Standby" by running a script, or do I have to restore the full BAK file again as "Standby" (do the whole process again)?**
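A hedged sketch of the nightly loop with placeholder names: each hourly file is applied WITH STANDBY so the database stays readable between runs. On the second question, a file applied WITH NORECOVERY can still be followed by the next log WITH STANDBY, but once a restore runs WITH RECOVERY the chain is finished, and only a fresh full restore WITH STANDBY starts it over:
RESTORE LOG MyDb
    FROM DISK = N'D:\Logs\MyDb_hour01.bak'
    WITH STANDBY = N'D:\Logs\MyDb_undo.dat';
-- repeat for hour02 .. hour24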
Java
(253 rep)
May 14, 2021, 05:13 PM
• Last activity: May 14, 2021, 10:47 PM
Showing page 1 of 20 total questions