Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
0
answers
59
views
node-oracledb Connection Pool Not Releasing Idle Connections Back to poolMin
We're experiencing a steady increase in PGA memory in our Oracle database, and we're trying to identify the root cause. We suspect that the connection pool in our application is not releasing connections back to the database as expected. Here are the details of our configuration and the issue we're facing:
**Connection Pool Configuration:**
- poolMin: 5
- poolMax: 10
- poolIncrement: 1
- poolTimeout: 10 seconds
**Issue Description:** During periods of high traffic, the number of connections increases from 5 (poolMin) to 10 (poolMax). However, when traffic is low, the connections are not released back to 5 (poolMin), even after 10 seconds (poolTimeout) of inactivity.
**Reference:** According to the [node-oracledb documentation](https://node-oracledb.readthedocs.io/en/latest/api_manual/oracledb.html#oracledb.poolTimeout):
> If the application returns connections to the pool with connection.close(), and the connections are then unused for more than poolTimeout seconds, then any excess connections above poolMin will be closed. When using Oracle Client prior to version 21, this pool shrinkage is only initiated when the pool is accessed.
Any insights or suggestions would be greatly appreciated.
**Additional Info:**
- We use _thick_ mode
Let me know if you need me to add anything to provide better answers.
**What I Tried:**
1. **Monitoring Connections**: Used the pool.getStatistics() method to monitor the number of open, in-use, and idle connections in the pool.
2. **Traffic Simulation**: Simulated traffic using a k6 script to observe the behavior of the connection pool during periods of high and low traffic.
3. **Database Query**: Ran a query to monitor active connections and session PGA memory in the Oracle database (a sketch of such a query is shown below).
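For illustration, the kind of per-session PGA check referred to in step 3 could look like the sketch below; the filter on s.program is an assumption and needs to be adjusted to match the application's actual connection attributes.
-- Sketch: per-session PGA memory for user sessions from the Node.js pool, largest first.
SELECT s.sid,
       s.serial#,
       s.program,
       s.status,
       s.logon_time,
       ROUND(p.pga_used_mem  / 1024 / 1024, 1) AS pga_used_mb,
       ROUND(p.pga_alloc_mem / 1024 / 1024, 1) AS pga_alloc_mb
FROM   v$session s
       JOIN v$process p ON p.addr = s.paddr
WHERE  s.type = 'USER'
AND    s.program LIKE '%node%'   -- assumption: adjust to your client identifier
ORDER  BY p.pga_alloc_mem DESC;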
**What I Expected:**
1. During periods of high traffic, I expected the number of connections to increase from 5 (poolMin) to 10 (poolMax).
2. During periods of low traffic, I expected the number of connections to decrease back to 5 (poolMin) after 10 seconds (poolTimeout) of inactivity.
3. I expected the session PGA memory to decrease correspondingly as idle connections are terminated.
**What Actually Happened:**
1. During high traffic, the number of connections increased to 10 as expected.
2. During low traffic, the number of connections did not decrease back to 5, even after 10 seconds of inactivity. Sometimes they decreased one by one, but never all the way back to 5 (the poolMin value).
3. The session PGA memory did decrease to some extent, but a few idle connections were never terminated.
**Question**: What could be the possible reasons for the connection pool not releasing idle connections back to poolMin?
Srikanth Vudharapu
(1 rep)
Nov 2, 2024, 06:00 AM
• Last activity: Nov 8, 2024, 11:30 AM
0
votes
0
answers
182
views
What’s wrong with the provided APEX instance administrator password?
I have installed Oracle Database 21c Enterprise Edition on Windows 10, and I am now trying to install APEX 22.2.
I got to the chapter "5.4.2.3 Running apxchpwd.sql" to set the instance administrator password. I executed the @apxchpwd.sql command; it asks me for a user name, e-mail address and password, but then it gives the error "quoted string not properly ended". I also tried various other passwords; nothing seems to be acceptable.
What can be the problem?
Prabhali Pawar
(1 rep)
Dec 15, 2022, 12:17 PM
• Last activity: Dec 15, 2022, 12:27 PM
7
votes
1
answers
4146
views
Network design considerations for Oracle Database Appliance
With the introduction of Oracle Engineered Systems the DBA is moved somewhat closer to infrastructure design decisions, and expected to at least have some opinions of the network design requirements for the database. At least that is the situation I find myself in :)
After deploying an ODA for testing, I find myself with the current setup:
System Controller 0 has the public bonded interface (bond0) connected to a typical edge switch, a Catalyst 2960 series. A management interface (bond1) is connected to a second edge switch of the same type.
System Controller 1 similarly has its public interface connected to the second switch, while the management interface is connected to the first switch.
This way, if one of the switches goes down, an operator will be able to reach each system controller either via the public or the management interface to facilitate diagnostics.
On the Cisco end of things, EtherChannel groups are configured for the 4 bonded interfaces of the ODA. The two switches are individually wired to the rest of the network, with no direct links between the two.
At first glance this does look like a reasonable design, but the more I think about different fault scenarios the more questions I seem to come up with.
Taking into consideration that these edge-type switches are not in themselves redundant, it seems rather important that the cluster can deal with one switch becoming unavailable due to power supply failure, or one switch failing to forward packets.
The database clients (Zend Server application servers in this case) are each similarly connected with a bonded interface to only one of the two switches. This brings up some questions with regard to load balancing: The way I understand 11gR2 RAC, simply connecting to the SCAN address will quite possibly let the client go the long way to the main network and back through the other switch, which can hardly be considered efficient.
What happens if a switch fails or stops forwarding packets? Will connections find the accessible VIP listener through SCAN? Will RAC somehow detect the network fault and move the SCAN and VIP to the System Controller with a working and accessible public interface? I honestly can't see how it would.
And while clients taking the long way through the core network and back is acceptable during a failover scenario, it sure would be nice to avoid it in normal production.
I'm sure Oracle has a very clear idea of how this should all work together, but I'm afraid I just don't see it all that clearly.
Is it possible to achieve full redundancy with edge-class/non-redundant switches? Can we somehow add some control on where client connections are routed in production and failover situations? Perhaps there is a good way to interconnect the two switches to allow traffic directly between clients on one switch and database listener on the other?
At this point I'm looking for any best practices and fundamental network design considerations that should be applied to a typical high availability ODA implementation.
Hopefully this will then be of use to any DBA that is faced with making network design decisions for their ODA :)
**Update:**
The ODA is configured with bonds in active-backup configuration. I think this may allow for a setup where each interface on the bond is connected to a different switch, without any switch side configuration.
Anyone know if this is the case?
[root@oma1 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth2
Roy
(1060 rep)
Oct 17, 2012, 03:03 PM
• Last activity: Jul 3, 2021, 10:31 PM
8
votes
2
answers
275868
views
How to monitor space usage on ASM diskgroups
Last night the recovery area on one of our Oracle Database Appliances went full. This was reported in one of the database alert logs, and we were able to clear out some space before the next log switch, at which point the production would have come to a halt.
It certainly would have been nice to have a little more warning, like when the disk group was 70% full.
What options do we have for monitoring disk usage inside ASM?
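For a quick manual check (leaving aside the alerting mechanism itself), a query along the lines of this sketch against V$ASM_DISKGROUP shows how full each disk group is:
-- Sketch: fill level per ASM disk group; run against the ASM instance or a database instance.
SELECT name,
       total_mb,
       free_mb,
       usable_file_mb,
       ROUND((1 - free_mb / total_mb) * 100, 1) AS pct_used
FROM   v$asm_diskgroup
WHERE  total_mb > 0
ORDER  BY pct_used DESC;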
Roy
(1060 rep)
Mar 7, 2013, 10:15 AM
• Last activity: Oct 25, 2019, 01:38 PM
5
votes
3
answers
4459
views
How many control files should I have?
On the Oracle Database Appliance, the default deployment only gives you a single control file.
I find this a little puzzling. The single control file causes a policy violation in the automatically configured Enterprise Manager DB Console and Oracle's recommendation is still, as far as I can tell, that you should always have at least two control files on separate drives and file systems. Personally, I've always had three copies, just to be on the safe side.
The ODA is configured with ASM and does have a good bit of storage redundancy using triple mirrored drives. Is it OK to run with a single control file in this configuration?
Adding a second control file to the same disk group might not make much sense; would multiplexing the control files to the SSD disk group, or perhaps the OS drives of each node, make more sense?
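For context, multiplexing to a second location would presumably look something like the sketch below; the disk group and file names are placeholders rather than the actual ODA layout, and the new copy has to be created (e.g. restored with RMAN) before restarting on the changed spfile.
-- Sketch: list the current control file, then register a second copy in the spfile.
SELECT name FROM v$controlfile;
ALTER SYSTEM SET control_files =
  '+DATA/mydb/controlfile/current.256.1',   -- existing copy (placeholder name)
  '+REDO/mydb/controlfile/control02.ctl'    -- new copy (placeholder name)
  SCOPE = SPFILE SID = '*';
-- Shut down, copy/restore the control file to the new location, then restart the instances.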
Roy
(1060 rep)
Dec 10, 2012, 12:55 PM
• Last activity: Aug 31, 2016, 09:00 PM
4
votes
1
answers
1634
views
Altering the location of Oracle-Suggested Backup
On one database, the Oracle-Suggested Backup scheduled from Enterprise Manager always ends up in the recovery area, despite RMAN configuration showing that device type disk format points elsewhere.
As far as I can see, the scheduled backup job is simply:
run {
allocate channel oem_disk_backup device type disk;
recover copy of database with tag 'ORA_OEM_LEVEL_0';
backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA_OEM_LEVEL_0' database;
}
Asking RMAN to show all reveals that device type disk is indeed configured to store elsewhere:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/s01/backup/PROD11/PROD11_%U';
If I run the script manually, the backupset is placed at the above location; when the script is run from the job scheduler, the backupset goes to the RECO disk group on ASM.
Why might Oracle still choose to dump the backupset to db_recovery_file_dest?
Ultimately, how can I change the backup destination?
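To confirm where the scheduled run actually wrote its pieces, as opposed to the manual run, something like this sketch against V$BACKUP_PIECE can help (the 7-day window is arbitrary):
-- Sketch: recent backup pieces with their handles (FRA path vs. the configured FORMAT path).
SELECT bp.handle,
       bp.tag,
       bp.completion_time,
       bs.backup_type,
       bs.incremental_level
FROM   v$backup_piece bp
       JOIN v$backup_set bs ON bs.set_stamp = bp.set_stamp
                           AND bs.set_count = bp.set_count
WHERE  bp.completion_time > SYSDATE - 7
ORDER  BY bp.completion_time DESC;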
Roy
(1060 rep)
Jul 13, 2013, 03:29 AM
• Last activity: Jul 3, 2016, 07:17 AM
2
votes
0
answers
2484
views
What is the nature of "LGWR: Setting 'active' archival for destination LOG_ARCHIVE_DEST_2"?
I'm not sure when this started, but at the moment the primary database instances in a physical standby configuration log this message on every redo log switch.
******************************************************************
LGWR: Setting 'active' archival for destination LOG_ARCHIVE_DEST_2
******************************************************************
LGWR: Standby redo logfile selected to archive thread 2 sequence 306
LGWR: Standby redo logfile selected for thread 2 sequence 306 for destination LOG_ARCHIVE_DEST_2
I'm not sure what this indicates, but I do associate this message with a standby database becoming available after downtime or network outage. E.g. the (optional) archive destination was deferred, but has now become active.
Similarly, on the standby instance, I'm seeing the following on every log switch:
Primary database is in MAXIMUM PERFORMANCE mode
Re-archiving standby log 6 thread 2 sequence 308
RFS: Assigned to RFS process 8304
RFS: Selected log 8 for thread 2 sequence 309 dbid 18783000 branch 804771032
Again, this is a log message I associate with a disruption in the delivery of archived redo logs or a change of delivery mode (changing the log destination parameters or the protection mode).
Any idea what may be causing these to be logged repeatedly on every log switch?
Update:
I now have a working theory: I think this will happen if real time apply has been used at all, and thus indicates that LGWR is waking up to do some archival work. Previously, I believe, ARCH would take care of log transport and archival, but activating real time apply may have forced a configuration change.
So far, I have found no evidence of this change being visible in data dictionary or parameter set, just the changes in alert log messages (which happened to coincide with a test involving real time apply on the standby).
*If this is the case, any idea how to reverse the change now that real time log apply is disabled and no longer required?*
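For what it's worth, which process is servicing each destination should be visible on the primary; a sketch:
-- Sketch: archiver process and transmit mode per enabled archive destination.
SELECT dest_id,
       dest_name,
       status,
       archiver,        -- ARCH, LGWR or FOREGROUND
       transmit_mode,   -- SYNCHRONOUS, PARALLELSYNC or ASYNCHRONOUS
       destination
FROM   v$archive_dest
WHERE  status <> 'INACTIVE';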
Roy
(1060 rep)
Jan 18, 2013, 02:02 PM
• Last activity: Jan 18, 2013, 02:56 PM
2
votes
2
answers
19645
views
How high can I set undo retention on Oracle 11gR2
Is it viable to set a very high undo retention, so as to allow flashback queries several weeks back in time?
Naturally, enough space must be available in the undo tablespaces to contain the amount of undo data required. Are there other limitations I should be aware of?
What happens if there is not enough undo space available? Will production be affected in any way, or is it just a matter of flashback and rollback being limited (snapshot too old, etc.)?
Update:
With a typical undo generation of a little less than 1 GB per day per instance, and up to 64 GB worth of undo space per instance it sounds viable to run with a 30-day undo retention target. No?
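As a sanity check on that estimate, the recent undo generation rate can be read from V$UNDOSTAT (which only covers the last few days of 10-minute samples); a sketch, assuming an 8 KB block size:
-- Sketch: average undo generated per day over the retained V$UNDOSTAT history.
SELECT ROUND(SUM(undoblks) * 8192 / 1024 / 1024
             / (MAX(end_time) - MIN(begin_time)), 1) AS undo_mb_per_day
FROM   v$undostat;
-- 30 days expressed in seconds; only meaningful if the undo tablespaces can hold that much.
ALTER SYSTEM SET undo_retention = 2592000 SCOPE = BOTH;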
Roy
(1060 rep)
Nov 13, 2012, 08:36 AM
• Last activity: Nov 13, 2012, 06:17 PM
2
votes
1
answers
22706
views
Oracle 11gR2 archive log destinations
I want to add a second archive log destination to an Oracle 11gR2 RAC database with ASM; the idea is that I will then have some redundancy if the primary storage should fail.
Archive log is already enabled, and logs are currently archived in the Fast Recovery Area. However, as far as I can see, none of the LOG_ARCHIVE_DEST_n init parameters have been configured.
Is there now an unset default that specifies the Fast Recovery Area as a log destination?
If that is the case, I assume I must now configure two destinations. One entry to continue writing archive to that default destination, and one entry for the additional backup destination. If so, how do I specify the existing default location in the fast recovery area?
Will this work, is there another preferred way?
alter system set log_archive_dest_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST';
alter system set log_archive_dest_2 = 'LOCATION=/s01/archive/TESTDB';
alter system set log_archive_dest_state_1 = enable;
alter system set log_archive_dest_state_2 = enable;
alter system set log_archive_min_succeed_dest = 1;
Naturally, the /s01 filesystem is available on all (both) cluster nodes.
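If the parameters are accepted, whether both destinations actually receive copies could then be verified with something like this sketch:
-- Sketch: recent archived logs per destination; each sequence should appear for dest_id 1 and 2.
SELECT dest_id,
       thread#,
       sequence#,
       name,
       completion_time
FROM   v$archived_log
WHERE  completion_time > SYSDATE - 1
ORDER  BY thread#, sequence#, dest_id;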
Roy
(1060 rep)
Oct 29, 2012, 10:18 AM
• Last activity: Oct 29, 2012, 10:27 PM
3
votes
1
answers
13244
views
Add an additional listener to Oracle Database Appliance
After deployment of Oracle Database Appliance with OAK 2.3.0 the standard listeners are configured for the public network on bond0. That includes SCAN listeners and one VIP listener per node. This last one seems to listen on both the public network interface and the VIP interface.
However, I would also like something to listen on the management network that I have configured at bond1.
What would be the preferred way of accomplishing this? Can I make the VIP listener also listen to bond1 or must I add a new listener for this?
Can I just add a listener with srvctl add listener -p TCP:1521 -o /u01/app/11.2.0.3/grid?
I'm at a loss to see where the IP address or interface goes into the configuration.
**Update:**
Support note 1063571.1 "How to Configure A Second Listener on a Separate Network in 11.2 Grid Infrastructure" covers the addition of a second production listener with associated VIP interfaces (but no second SCAN listener) to a generic 11gR2 RAC.
In this case, as this is meant to be a back door for operators and DBAs, I think it might be OK to have a basic listener directly on the physical interface on each node, e.g. one that does not fail over, and only connects to the instances on that specific node.
I suppose there could also be ODA specific considerations that need to be taken into account, although none are too clear to me at this point.
Roy
(1060 rep)
Oct 17, 2012, 01:51 PM
• Last activity: Oct 24, 2012, 10:13 AM