
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
167 views
PostgreSQL: Database-Layer Pooling vs. Application-Layer Pooling
I would like some recommendations about connection pooling in PostgreSQL. We already have connection pooling using PgBouncer behind an Azure LB. PgBouncer itself is working quite well, and we are planning to switch the pooling method to transaction later. However, we recently found that the Node.js client library has its own connection pooling via pg.Pool, routed to the LB. So now we have two layers of pooling: one at the application layer and one at the database layer. Is it good practice to have multiple layers of pooling, or is PgBouncer alone enough to handle this, considering the transaction pooling method? What should we consider when deciding which layer of pooling is best for our system?
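A quick way to reason about stacked pools is that the effective concurrency toward Postgres is bounded by the smallest layer. A minimal sketch with made-up numbers (the function name and all figures are illustrative, not taken from any real setup):

```python
# Hypothetical sketch: with pooling at both the application layer (pg.Pool in
# each Node.js instance) and the database layer (PgBouncer), the number of
# Postgres server connections actually in use is capped by whichever layer is
# smaller.

def effective_concurrency(app_instances: int, pool_per_instance: int,
                          pgbouncer_pool_size: int) -> int:
    """Upper bound on simultaneous Postgres server connections in use."""
    app_side = app_instances * pool_per_instance  # sockets from apps to the LB
    return min(app_side, pgbouncer_pool_size)     # PgBouncer caps the rest

# 8 app instances, each with pg.Pool max=10, behind a PgBouncer
# transaction pool of 40 server connections:
print(effective_concurrency(8, 10, 40))  # -> 40: PgBouncer is the real limit
```

If the application-side total greatly exceeds the PgBouncer pool, extra app-side connections just queue inside PgBouncer, so keeping both layers is workable as long as you size them deliberately.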
goodfella (595 rep)
Mar 15, 2024, 01:57 AM • Last activity: Jul 13, 2025, 02:05 PM
0 votes
1 answer
228 views
Estimating a max pool size in Azure Postgres
In this article, I found a formula to estimate the max pool size value in terms of connections and avg_parallelism, where avg_parallelism is the average number of backend processes working on a single query. I'd really like some help understanding what these parameters really are and how to find them; some more explanation of system_busy_ratio would also be helpful.
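For comparison, a different but widely cited starting point (from the HikariCP "About Pool Sizing" guide, not the Azure article's formula, which did not survive the copy above) sizes the pool from hardware rather than from avg_parallelism. A sketch with assumed numbers:

```python
# Rule of thumb from the HikariCP "About Pool Sizing" wiki (an assumption
# here, not the formula from the Azure article): a pool much larger than
# this tends to add contention rather than throughput.

def starting_pool_size(core_count: int, effective_spindle_count: int) -> int:
    return core_count * 2 + effective_spindle_count

print(starting_pool_size(4, 1))  # -> 9 for a 4-core box with a single disk
```

Either way, the estimate is only a starting point: measure throughput under realistic load and adjust from there.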
troubleddev (13 rep)
Feb 27, 2023, 04:06 AM • Last activity: Jun 11, 2025, 04:07 PM
1 vote
0 answers
65 views
Acquiring a connection from pgxpool takes longer with PgBouncer
My web server is deployed on Kubernetes with horizontal pod scaling and a separate, non-auto-scaling PostgreSQL service that runs both a master and a read-only replica node, with high availability on failover. The web server is written in Go and uses PGX v5 for all interactions with PostgreSQL. This includes creating a connection pool (pgxpool) on pod startup with N connections. N is chosen so that, multiplied by the maximum number of pods, it stays below the maximum connections allowed by the PostgreSQL database. Usually N is between 10 and 20.

While trying to optimize performance under load, I noticed some contention over connection acquisition on pods; namely, pool.Acquire inside pgx takes longer under high load. My assumption is that this is due to the limit of N concurrent open transactions from the pod to the database. Each concurrent transaction (each request to the web server uses one transaction while it is handled) 'hogs' one connection, so whenever a new request arrives, it must wait for one of the open transactions to finish if all N connections are in use. This scenario makes sense, especially when the HPA is scaled down: there are fewer pods but still the same maximum of N database connections per pod.

I'm experimenting with PgBouncer to address this, but I see some unexpected results. My hypothesis was that enabling PgBouncer in 'transaction' mode would let me increase N, since the actual management of connections is abstracted away by PgBouncer. I configured PgBouncer with about 80% of the max connections the DB can take (680 out of 800) and increased N fivefold (from 10 to 50). Looking at database metrics, I do observe fewer open connections to the database, which is expected since PgBouncer uses connections in a smarter way. However, I also observe two unexpected behaviors:

1. Increased CPU on the database. I imagine this is due to the PgBouncer process using resources, but in some scenarios it is not something I can accept. Perhaps reducing the share of connections dedicated to PgBouncer from 80% to a lower figure would help.
2. The average duration of pool.Acquire on the web server increased rather than decreasing. This is despite increasing N to be sufficiently high, that is, higher than the number of incoming concurrent transactions from each pod to the database.

I have a strong feeling I'm doing something wrong. Should I stop using pgxpool in tandem with PgBouncer? Where can I look to pinpoint the reason for the increased duration of connection acquisition from pgxpool?
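One way to sanity-check a setup like this is to ask which layer saturates first: the per-pod pgxpool (acquires queue in the pod) or the PgBouncer pool (acquires queue in PgBouncer). A rough sketch with assumed numbers; the function and thresholds are illustrative only:

```python
# Illustrative check (all figures assumed): an acquire can queue either inside
# each pod's pgxpool (when concurrent transactions exceed N) or inside
# PgBouncer (when the sum of all pods' pools exceeds PgBouncer's pool).

def saturated_layer(pods: int, n_per_pod: int,
                    concurrent_tx_per_pod: int, pgbouncer_pool: int) -> str:
    if concurrent_tx_per_pod > n_per_pod:
        return "pgxpool"     # requests wait for a pod-local connection
    if pods * n_per_pod > pgbouncer_pool:
        return "pgbouncer"   # pod connections wait for a server connection
    return "none"

print(saturated_layer(pods=4, n_per_pod=50,
                      concurrent_tx_per_pod=60, pgbouncer_pool=680))
# -> pgxpool
```

Note that with PgBouncer in the path, an "acquire" that used to fail fast at the pod can instead succeed and then stall waiting for a PgBouncer server connection, which can make pod-side acquire latency look worse even as the database sees fewer connections.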
Alechko (229 rep)
Apr 15, 2025, 06:09 PM
0 votes
1 answer
783 views
pooler error: query_wait_timeout in pgbouncer logs
I am getting `pooler error: query_wait_timeout` for different databases in the pgbouncer log, 6 to 7 times a day. When the error occurs for the masterdata database, it gives me a timeout in the application. The configuration is below:

```
auth_type: "md5"
default_pool_size: 10
max_client_conn: 5000
max_db_connections: 400
min_pool_size: 0
pool_mode: "transaction"
reserve_pool_size: 20
reserve_pool_timeout: 1
query_timeout: 0
ignore_startup_parameters: "search_path, extra_float_digits"
server_reset_query: "DISCARD ALL"
server_reset_query_always: 1
```
Since query_timeout is disabled (query_timeout: 0) and pool_mode is set to transaction, why am I still getting this error? Do I need to set some min_pool_size for my masterdata database to eliminate it?
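As a rough framing of why query_wait_timeout fires at all: clients queue in PgBouncer when every server connection in the pool is busy, and the pool's sustainable throughput is roughly pool_size divided by the average transaction time (Little's law). A back-of-envelope sketch with assumed numbers:

```python
# Back-of-envelope (all numbers assumed): query_wait_timeout fires when a
# client sits in PgBouncer's wait queue longer than the limit because all
# server connections in the pool are busy. query_timeout is unrelated: it
# cancels queries already running on a server connection.

def pool_capacity_tps(pool_size: int, avg_txn_seconds: float) -> float:
    """Approximate sustainable transactions/second for one pool."""
    return pool_size / avg_txn_seconds

# default_pool_size = 10 with transactions averaging 50 ms:
cap = pool_capacity_tps(10, 0.05)
print(cap)  # -> 200.0 transactions/second before a backlog starts to grow
```

If arrival bursts exceed that capacity for longer than query_wait_timeout, waiting clients are dropped with exactly this error, regardless of query_timeout being disabled.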
kumar ashish (1 rep)
Jan 8, 2024, 07:00 AM • Last activity: Apr 8, 2025, 12:11 PM
0 votes
1 answer
11685 views
Tomcat7 JDBC connection pool -> Connection has been abandoned
I have the following configuration for the JDBC connection pool:

```
initialSize="2" minIdle="10" maxIdle="50" maxActive="100" maxWait="15000"
jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer"
removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true"
testOnBorrow="true" testOnReturn="true" testWhileIdle="true"
useEquals="false" fairQueue="false"
timeBetweenEvictionRunsMillis="300000" minEvictableIdleTimeMillis="300000"
validationInterval="10000" validationQuery="SELECT 1"
```

The application is a standard Java+Spring+Hibernate environment, and the database is PostgreSQL. I have been using this configuration for a while, and everything was fine. Recently the following exception started to appear:

```
Sep 10, 2015 5:25:57 PM org.apache.tomcat.jdbc.pool.ConnectionPool abandon
WARNING: Connection has been abandoned PooledConnection[org.postgresql.jdbc4.Jdbc4Connection@3eb948ed]:java.lang.Exception
    at org.apache.tomcat.jdbc.pool.ConnectionPool.getThreadDump(ConnectionPool.java:1063)
    at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:780)
    at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:619)
    at org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:188)
    at org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:128)
    at org.hibernate.ejb.connection.InjectedDataSourceConnectionProvider.getConnection(InjectedDataSourceConnectionProvider.java:70)
    at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(AbstractSessionImpl.java:301)
    at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:214)
    at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.getConnection(LogicalConnectionImpl.java:157)
    at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doBegin(JdbcTransaction.java:67)
    at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.begin(AbstractTransactionImpl.java:160)
    at org.hibernate.internal.SessionImpl.beginTransaction(SessionImpl.java:1351)
    at org.hibernate.ejb.TransactionImpl.begin(TransactionImpl.java:59)
    at org.springframework.orm.jpa.DefaultJpaDialect.beginTransaction(DefaultJpaDialect.java:67)
    at org.springframework.orm.jpa.vendor.HibernateJpaDialect.beginTransaction(HibernateJpaDialect.java:110)
    at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:380)
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:373)
    at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:420)
    at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:257)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:95)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
    at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:644)

10 Sep 2015 17:25:57,591 WARN  org.hibernate.engine.jdbc.spi.SqlExceptionHelper - SQL Error: 0, SQLState: 08006
10 Sep 2015 17:25:57,592 ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper - An I/O error occurred while sending to the backend.

javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: could not extract ResultSet
    at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1387)
    at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1310)
    at org.hibernate.ejb.QueryImpl.getSingleResult(QueryImpl.java:316)
    at com.mysema.query.jpa.impl.AbstractJPAQuery.getSingleResult(AbstractJPAQuery.java:219)
    at com.mysema.query.jpa.impl.AbstractJPAQuery.uniqueResult(AbstractJPAQuery.java:295)
    at com.mysema.query.jpa.impl.AbstractJPAQuery.uniqueResult(AbstractJPAQuery.java:281)
    at com.mysema.query.support.ProjectableQuery.singleResult(ProjectableQuery.java:75)
```

I don't have any new query running too long, but I guess this happens because the connection is not returned to the pool properly. I tried to play around with the configuration, but the same thing happens, though in rarer cases. I'd just like to know if this configuration is fine for an application with an average load, and whether there is any chance to improve the connection pool's behavior.

Note: When I decrease
and
it starts to happen more often. Any help is appreciated.
vtor (109 rep)
Sep 11, 2015, 09:14 AM • Last activity: Feb 13, 2025, 12:06 PM
0 votes
1 answer
74 views
Does Analysis Services provide connection pooling
We know that when creating a SQL connection, for example:

```lang-cs
using (SqlConnection sqlConn = new SqlConnection(conString));
```

we are actually not always creating a new connection to the server, but rather taking an already open connection from the connection pool.

However, when we open a connection to Analysis Services, do we also use some kind of connection pooling?

```lang-cs
using (AdomdConnection adomdConn = new AdomdConnection(conString));
```

Does this always lead to a new connection, or does it take one from the pool if possible? I am not able to infer this from the official documentation, but I have found mostly older unofficial articles that clearly state there is no connection pooling for AdomdConnection:

- SQL Server 2005 Analysis Services's ADOMD.NET Connection Pooling, or Lack Thereof
- ADOMD.NET Connection Pooling
PajLe (133 rep)
Apr 25, 2024, 08:36 PM • Last activity: Dec 31, 2024, 04:30 PM
0 votes
0 answers
59 views
node-oracledb Connection Pool Not Releasing Idle Connections Back to poolMin
We're experiencing a steady increase in PGA memory in our Oracle database, and we're trying to identify the root cause. We suspect that the connection pool in our application is not releasing connections back to the database as expected. Here are the details of our configuration and the issue we're facing:

**Connection Pool Configuration:**

- poolMin: 5
- poolMax: 10
- poolIncrement: 1
- poolTimeout: 10 seconds

**Issue Description:** During periods of traffic, the number of connections increases from 5 (poolMin) to 10 (poolMax). However, when traffic is low, the connections are not released back down to 5 (poolMin), even after 10 seconds (poolTimeout) of inactivity.

**Reference:** According to the [node-oracledb documentation](https://node-oracledb.readthedocs.io/en/latest/api_manual/oracledb.html#oracledb.poolTimeout):

> If the application returns connections to the pool with connection.close(), and the connections are then unused for more than poolTimeout seconds, then any excess connections above poolMin will be closed. When using Oracle Client prior to version 21, this pool shrinkage is only initiated when the pool is accessed.

**Additional Info:** We use _thick_ mode. Let me know if you need me to add anything to provide better answers.

**What I Tried:**

1. **Monitoring Connections:** Used the pool.getStatistics() method to monitor the number of open, in-use, and idle connections in the pool.
2. **Traffic Simulation:** Simulated traffic using a k6 script to observe the behavior of the connection pool during periods of high and low traffic.
3. **Database Query:** Ran a query to monitor active connections and session PGA memory in the Oracle database.

**What I Expected:**

1. During periods of high traffic, the number of connections to increase from 5 (poolMin) to 10 (poolMax).
2. During periods of low traffic, the number of connections to decrease back to 5 (poolMin) after 10 seconds (poolTimeout) of inactivity.
3. The session PGA memory to decrease correspondingly as idle connections are terminated.

**What Actually Happened:**

1. During high traffic, the number of connections increased to 10 as expected.
2. During low traffic, the number of connections did not decrease back to 5, even after 10 seconds of inactivity. Sometimes it decreased one by one, but not all the way back to 5 (the poolMin value).
3. The session PGA memory did decrease to some extent, but a few idle connections were not being terminated.

**Question:** What could be the possible reasons for the connection pool not releasing idle connections back to poolMin? Any insights or suggestions would be greatly appreciated.
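A toy model of the documented shrink behavior (this is an illustration, not node-oracledb's actual code): excess idle connections are closed only once they have been idle longer than poolTimeout, and with Oracle Client before version 21 the shrink runs only when the pool is accessed, so a completely quiet pool never shrinks:

```python
# Illustrative model of the documented behavior, not node-oracledb's code.

class LazyShrinkPool:
    def __init__(self, pool_min: int, pool_timeout: float):
        self.pool_min = pool_min
        self.pool_timeout = pool_timeout
        self.idle = []  # timestamps at which each connection went idle

    def release(self, now: float) -> None:
        self.idle.append(now)

    def on_access(self, now: float) -> None:
        # Shrinkage is initiated only when the pool is accessed (pre-21 client):
        expired = sum(1 for t in self.idle if now - t > self.pool_timeout)
        keep = max(self.pool_min, len(self.idle) - expired)
        self.idle = sorted(self.idle)[-keep:]  # keep the most recently idle

pool = LazyShrinkPool(pool_min=5, pool_timeout=10)
for t in range(10):
    pool.release(now=float(t))   # 10 connections go idle at t = 0..9
print(len(pool.idle))            # -> 10: nothing shrinks while the pool is untouched
pool.on_access(now=30.0)         # first access after every idle time > poolTimeout
print(len(pool.idle))            # -> 5: excess above poolMin closed only now
```

Under this model, low-traffic periods with occasional accesses shrink the pool one step at a time rather than all at once, which matches the "decreased one by one" observation.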
Srikanth Vudharapu (1 rep)
Nov 2, 2024, 06:00 AM • Last activity: Nov 8, 2024, 11:30 AM
0 votes
1 answer
185 views
Why doesn't DISCARD PLANS/ALL work for queries inside PLPGSQL functions?
I'm experiencing unexpected behavior with query plan caching in PostgreSQL when using different function types. I have a dashboard query that exhibits significantly different performance characteristics depending on how it's implemented:

- Raw SQL query: ~200ms execution time
- SQL function: ~2s execution time
- PL/pgSQL function: first 5 calls ~200ms, subsequent calls ~2s

The most puzzling part is that DISCARD PLANS/ALL doesn't help with the PL/pgSQL function's performance degradation, even though it should clear the plan cache. Each implementation uses the same SQL query, but in the functions the query is parametrized. For example, the functions contain a condition like (_types is null or type = any (_types)), which in the plain SQL tests translates to (null is null or type = any (null::int[])).

Environment:

- Database functions are used as a database interface for a Node REST API
- The API uses connection pooling in session mode (each query has its own session)
- After each query, the connection is returned to the pool and DISCARD ALL is executed
- PostgreSQL v13.13 running on CentOS 8

Questions:

- Why doesn't DISCARD PLANS/ALL help with the PL/pgSQL function's performance? It seems that it does not clear session state completely.
- Is there a way to maintain the original performance while keeping the query inside a PL/pgSQL function?
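One behavior consistent with the "first 5 calls fast, then slow" pattern is PostgreSQL's plan-cache heuristic for prepared statements, which PL/pgSQL uses under the hood: the first five executions get custom plans, after which the generic plan is used whenever its estimated cost is no higher than the average custom-plan cost, and a generic plan's real runtime can be far worse than its estimate suggests. A simplified sketch of the decision (numbers are illustrative; the real logic lives in plancache.c and includes extra fudge factors):

```python
# Simplified model of PostgreSQL's choose-custom-plan heuristic. Not the
# actual plancache.c logic, just its shape.

def plan_choice(num_custom_plans: int, avg_custom_cost: float,
                generic_cost: float) -> str:
    # First five executions always get a custom plan; afterwards the generic
    # plan wins if its estimated cost is not above the average custom cost.
    if num_custom_plans < 5:
        return "custom"
    return "generic" if generic_cost <= avg_custom_cost else "custom"

# The generic plan *looks* slightly cheaper to the planner, so it wins from
# the sixth call on, even if its actual runtime is 10x worse:
for call in range(1, 8):
    print(call, plan_choice(call - 1, avg_custom_cost=1000.0, generic_cost=900.0))
# calls 1-5 print "custom", calls 6-7 print "generic"
```

Note also that the cache is per session; a DISCARD issued on one pooled connection does not touch plans cached on another, so with a pool the slowdown can appear to survive DISCARD. On PostgreSQL 12+ one can also test `SET plan_cache_mode = force_custom_plan` to confirm the diagnosis.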
DavidP (203 rep)
Nov 6, 2024, 12:57 PM • Last activity: Nov 6, 2024, 01:19 PM
0 votes
0 answers
39 views
Oracle Pro*C/C++: can I reuse Oracle connections between multiple processes?
I have an old toolkit of Oracle Pro*C applications, where each tool connects to the same Oracle DB (via a TNS alias), performs some fairly quick SQL query, and exits. In daily use these apps are organized by an external orchestrator into a bulky file-processing workflow, and each app is executed thousands of times per hour. Obviously, due to the latency of creating a connection each time, it is all working slowly, even though the CPU/IO load is fairly low, about 20-40 requests/runs per minute. I cannot refactor the workflow much, nor the applications' code. Can I set up a so-called **connection pool** on the Oracle Client 12c client/driver, so that each time a new process opens a new Oracle session, it does not contact the server but instead reuses some existing local session? The Pro*C precompiler seems to support a CPOOL=YES option, but the documentation says only this about it:

> *Based on this option, the precompiler generates the appropriate code that directs SQLLIB to enable or disable the connection pool feature.*

If I recompile all my "mini-programs" with this option (and enable the connection pool on the server side, of course), will the separate Linux processes use this pool, or is this feature only for heavy applications with a multithreaded architecture? I hope that, if it is possible, avoiding lots of session reinitialisations will improve the overall performance of my system without major refactoring.
Fedor (1 rep)
Oct 16, 2024, 05:07 PM
0 votes
0 answers
37 views
What exactly is a connection pool at a low level?
The idea of a connection pool is simple enough to understand: caching connections (say, to a database) to avoid the overhead of opening and closing connections. But what does opening (and closing) connections and caching them *look* like? I think of connection pooling as conceptually analogous to thread pooling. Thread initialization includes stack allocation, construction of the TCB, and then a context switch. (I may perhaps be glossing over some details here.) I'm clueless as to what connection initialization entails. Is there a connection control block? How can different clients reuse connections from the same shared pool? Naively, I would think that connections are specified by their end points.
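At its simplest, a pool inside one client process is just a thread-safe container of already-established connection objects: "opening" a connection pays the TCP connect, protocol handshake, and authentication cost, and "caching" it means keeping that socket open and handing the same object to the next borrower. That is also why reuse across different clients normally happens only among threads of the same process, or through an intermediary proxy such as PgBouncer that owns the server-side sockets itself. A toy sketch with stubbed connection objects (not a production pool):

```python
import queue

# Toy pool: a real one would store connected sockets plus their per-connection
# session state, and would validate or replace dead connections.

class TinyPool:
    def __init__(self, size, connect):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(connect())   # pay the handshake cost once, up front

    def acquire(self):
        return self._q.get()         # blocks if every connection is borrowed

    def release(self, conn):
        self._q.put(conn)            # the socket stays open; no new handshake

# Stub "connections" so the sketch runs without a server:
pool = TinyPool(size=1, connect=lambda: object())
c1 = pool.acquire()
pool.release(c1)
c2 = pool.acquire()
print(c1 is c2)  # -> True: the same established connection is reused
```

So there is no special "connection control block" beyond whatever handle the driver already keeps per connection; the pool is bookkeeping around those handles.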
wanderingsoul (1 rep)
Aug 25, 2024, 02:49 AM
0 votes
1 answer
3044 views
Load Balancing PG Bouncer With HA Proxy
I have a server with two instances of PgBouncer running on different ports, both using session mode to pool. I set up HAProxy to load-balance incoming connections between these two processes to distribute the load. I get "server closed the connection unexpectedly" intermittently when connecting through HAProxy; if I connect directly to one of the PgBouncer instances, I do not receive these errors. Has anyone set up something similar to this?
user3443757 (101 rep)
Nov 14, 2017, 04:20 PM • Last activity: May 21, 2024, 04:07 PM
0 votes
1 answer
102 views
Pgbouncer: Can pgbouncer take reserve_pool_timeout in milliseconds?
Is it possible to specify pgbouncer reserve_pool_timeout in milliseconds? I tried specifying it as '500 ms', but that throws an error. Then I tried specifying 0.5; this time there was no error, but I am unable to find out whether it is really working or whether it just disabled the timeout, although reserve_pool connections are being utilised.
goodfella (595 rep)
May 17, 2024, 04:53 AM • Last activity: May 17, 2024, 06:32 AM
4 votes
2 answers
3582 views
Shared connection pool in SQL Server and prevention of cross-database queries
Microsoft recommends that a shared connection pool be used when running e.g. SaaS software with one database per customer but a shared codebase, to prevent pool fragmentation. When taking a connection from the pool, the database to access should first be selected with the command USE Database_A. However, as I see it, all the databases would then use the same login with, let's say, the db_datareader role, right? Is there any easy way to prevent the user from making cross-database queries, or from directly querying other databases using a syntax like SELECT * FROM Database_B.dbo.Person?
brunnsbe (61 rep)
May 20, 2015, 01:30 PM • Last activity: Mar 14, 2024, 11:42 AM
0 votes
1 answer
188 views
Select query with 90+ parameters intermittently 20x slower on RDS PostgreSQL instance when using connection pool
I'm running a select statement on a PostgreSQL table (on an RDS instance) to sum up values across 2 million rows. The select statement has about 100 parameters (mostly different countries we want to sum across). When I run this query once from my application, it executes in roughly 500ms. However, when I execute it several times in a row, the query latency suddenly increases by _10-20 times_, to roughly 5 to 10 seconds, after 8 or 9 runs. (These measurements are server-side, from the RDS logs, where I've enabled log_min_duration_statement.) **What could possibly cause this jump in execution time?** I'd expect the query to be slower on the initial run, but I'm surprised to see it increase in latency after several consecutive runs. A few key observations:

* **Connection pool**: The exact connection pool doesn't seem to matter. The problem persists whether I use [HikariCP](https://github.com/brettwooldridge/HikariCP) or [C3P0](https://www.mchange.com/projects/c3p0/).
* **Single connection**: The issue _doesn't_ happen if I configure my connection pool to have only a single connection (so the application logic is not at fault).
* **psql**: The issue also doesn't happen if I benchmark the query consecutively using psql.
* **Query parameters**: When I reduce the number of query parameters to ~60, I don't see any issues. It's only when I query for 90+ territories that I run into this behaviour.
* **Pool reset**: When I restart my application (and hence reset the connection pool), the queries go back to being fast (0.5s) for another 8-9 queries before being slow again (5s).

I've watched out for the usual suspects like RDS IOPS burst throttling, database load, memory consumption, and available connections, but nothing looks suspicious on this front. I've also made sure that the index the query uses and the rows returned from the table are cached. The query I'm running has the following shape. I've truncated some of the 90 country IDs:
```
SELECT d.primary_title_no,
       SUM(d.cume) AS lifetime
  FROM schema.my_data AS d
 WHERE d.currency = 'USD'
   AND d.date >= '2020-01-01'::date + (7 * (d.week - 1))
   AND d.date ...
```

The EXPLAIN ANALYZE output for a fast run:

```
Sort  (cost=1218692.98..1218697.34 rows=1742 width=12) (actual time=369.145..369.154 rows=100 loops=1)
  Sort Key: (sum(cume)) DESC
  Sort Method: top-N heapsort  Memory: 33kB
  ->  GroupAggregate  (cost=1218595.34..1218626.41 rows=1742 width=12) (actual time=366.250..368.653 rows=1959 loops=1)
        Group Key: primary_title_no
        ->  Sort  (cost=1218595.34..1218599.89 rows=1820 width=12) (actual time=366.240..366.844 rows=5739 loops=1)
              Sort Key: primary_title_no
              Sort Method: quicksort  Memory: 462kB
              ->  Index Scan using idx_dailies_currency_ter_id_primary_title_no_lifetime on dailies d  (cost=0.43..1218496.79 rows=1820 width=12) (actual time=1.093..364.406 rows=5739 loops=1)
                    Index Cond: (((currency)::text = 'USD'::text) AND ((ter_id)::text = ANY ('{AE,AM,AR,AT,AU,AZ,BA,BE,BG,BH,BO,BR,BY,CH,CL,CN,CO,CR,CU,CZ,DE,DK,DO,EC,EE,EG,ES,FI,FR,GE,GR,GT,HK,HN,HR,HU,IL,IN,IQ,IS,IT,JP,KG,KR,KW,KZ,LB,LT,LU,LV,MD,MX,MY,MZ,NI,NL,NO,NZ,OM,PA,PE,PH,PL,PT,PY,QA,RO,RS,RU,SA,SE,SG,SI,SK,SV,TH,TJ,TM,TR,TT,TW,UK,UP,UY,UZ,WA,ZA}'::text[])))
                    Filter: (((date - (7 * (week - 1))) >= '2020-01-01'::date) AND ((date - (7 * (week - 1))) < '2020-06-01'::date))
                    Rows Removed by Filter: 314494
Planning Time: 0.613 ms
Execution Time: 369.213 ms
```
For a case where the query executes slowly, the EXPLAIN ANALYZE output is the same, but 20 times the compute is spent in the innermost Index Scan step. If anybody has any pointers on how to troubleshoot this issue, I'd be all ears.
Arnfred (101 rep)
Feb 15, 2024, 06:01 PM • Last activity: Feb 19, 2024, 09:31 AM
3 votes
2 answers
3266 views
SSMS How do I open a new Query Window using Same Connection Instance of current Window
Coming from Oracle I have discovered that SSMS connects slightly different than Oracle's SQL Developer. Oracle you click ONE connection and then all tabs you open then use that Connection. SSMS creates a completely new Connect/Instance for each window/tab I open. **I don't want this**. TWO pet peeve...
Coming from Oracle I have discovered that SSMS connects slightly differently than Oracle's SQL Developer. In Oracle you click ONE connection and then all tabs you open use that connection. SSMS creates a completely new connection/instance for each window/tab I open. **I don't want this**. TWO pet peeves come to mind with this behavior I am trying to get around. >1. View effect of script before COMMIT in another window I want to be able to run a script in one window... and in another window have the ability to query the non-committed data to simply view the results... Currently I have to use the SAME window I ran the script in, in order to simply VIEW the effect. (Commit/Rollback buttons come to mind - Oracle I miss you sometimes). >2. View data stored in #TempTable from another window I am creating a #TempTable using a script... but want to query its data from another window. Currently when I open another window (unlike Oracle's SQL Developer) it opens the window using a new connection/instance. How do I query the data in #TempTable from another window rather than the window I ran the script in? The above is simply for organizational reasons... it's SUCH a pain to run a script... and then from that same tab have to write in other code when I don't want to edit/add any other code to the page that contains the script I ran. If I'm doing something wrong or missing some point please let me know.
Code Novice (275 rep)
Jan 6, 2020, 05:53 PM • Last activity: Dec 7, 2023, 08:01 PM
0 votes
2 answers
2954 views
PostgreSQL: Pgbouncer authentication
We are using authentication file mode in pgbouncer. The authentication part of Pgbouncer is so confusing. Would be helpful if anybody can throw some light on my below queries. - Are Pgbouncer connections **authenticated twice** at bouncer side and DB layer? - What's the use of authentication at Pgbo...
We are using authentication file mode in pgbouncer. The authentication part of Pgbouncer is confusing. It would be helpful if anybody could shed some light on the questions below. - Are Pgbouncer connections **authenticated twice**, once at the bouncer and again at the DB layer? - What's the use of authentication at Pgbouncer if it uses the **same credentials as the database**? - Is it possible to use a **different set of user credentials** for Pgbouncer authentication, other than the DB credentials themselves?
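For reference, a minimal pgbouncer.ini sketch of how the two authentication steps relate (all host names, user names, passwords, and paths below are placeholders, not taken from the question): with `auth_file`, PgBouncer first authenticates the client itself against `userlist.txt`, and the server-side connection is then authenticated again by PostgreSQL; a forced `user=`/`password=` in the `[databases]` section lets the server connection use credentials different from those the client presented.

```ini
; pgbouncer.ini (sketch; all names, hosts, and paths are placeholders)
[databases]
; Clients authenticate to PgBouncer with credentials from auth_file;
; the server connection can be forced to use different credentials here:
appdb = host=10.0.0.5 port=5432 dbname=appdb user=pool_user password=pool_secret

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt   ; client-side credentials only
```

PgBouncer also supports `auth_user`/`auth_query`, which looks passwords up in the database instead of a file, so `userlist.txt` does not have to duplicate database credentials.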
goodfella (595 rep)
Nov 28, 2023, 03:14 AM • Last activity: Nov 28, 2023, 07:51 PM
0 votes
1 answers
464 views
PostgreSQL driven by SQLAlchemy - Release Savepoint Idle in Transaction
While running PostgreSQL 13.12 (occurs in several versions of PG11/PG13) using SQLAlchemy 1.3, we are occasionally hitting issues where increased concurrency leaves certain transactions (and their nested transactions) in the "Idle in Transaction" state with the `RELEASE SAVEPOINT ...` query. Looking...
While running PostgreSQL 13.12 (the issue occurs on several versions of PG11/PG13) using SQLAlchemy 1.3, we occasionally hit issues where increased concurrency leaves certain transactions (and their nested transactions) in the "Idle in Transaction" state with the RELEASE SAVEPOINT ... query. Looking at the currently running queries, it is not clear why transactions have stopped moving forward. I have also observed this behavior without any hanging locks:

| pid | datname | duration | query | state | application_name | wait_event_type |
| --- | ------- | -------- | ----- | ----- | ---------------- | --------------- |
| 27662 | app | 22:22:37.429569 | select pg_advisory_xact_lock(resource_id) from resource where uuid = '018afcac-a5ab-7a5c-9eb4-2d8b0be4b556' | active | http.api.1 | Lock |
| 25830 | app | 22:22:29.236398 | RELEASE SAVEPOINT sa_savepoint_5 | idle in transaction | http.api.0 | Client |
| 21490 | app | 22:22:29.015862 | select pg_advisory_xact_lock(resource_id) from resource where uuid = '018afcac-a5ab-7a5c-9eb4-2d8b0be4b556' | active | http.api.0 | Lock |
| 27674 | app | 22:22:27.780581 | RELEASE SAVEPOINT sa_savepoint_3 | idle in transaction | http.api.2 | Client |
| 29120 | app | 22:22:26.053851 | select pg_advisory_xact_lock(resource_id) from resource where uuid = '018afcac-a5ab-7a5c-9eb4-2d8b0be4b556' | active | http.api.2 | Lock |

Any way to debug this? This API call generally works, with the session being cleaned up, and it does not fail consistently. We are using SQLAlchemy's connection pooling to manage the connections - closed sessions return the connection to the pool and issue a rollback to clean up the connection (when commit should have committed all other statements).
Justin Lowen (68 rep)
Oct 8, 2023, 05:16 PM • Last activity: Oct 9, 2023, 09:40 AM
2 votes
1 answers
67 views
Is calling a database class on app creation correct for pooling?
Start doadmin.Database().__init__() Class class Database(object): connection = None cursor = None def get_c(self): return self.cursor def __init__(self): if Database.connection is None: try: Database.connection = mysql.connector.connect(host="stuff", user="stuff", password="stuff", database="stuff",...
Start

    doadmin.Database().__init__()

Class

    class Database(object):
        connection = None
        cursor = None

        def get_c(self):
            return self.cursor

        def __init__(self):
            if Database.connection is None:
                try:
                    Database.connection = mysql.connector.connect(host="stuff", user="stuff", password="stuff", database="stuff", port=stuff)
                    Database.cursor = Database.connection.cursor()
                except Exception as error:
                    print("Error: Connection not established {}".format(error))
                else:
                    print("Connection established")
            self.connection = Database.connection
            self.cursor = Database.cursor

Request

    cursor1 = Database().get_c()
    cursor1.execute("SELECT * FROM posts")
    posts = list(cursor1)
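For what it's worth, two things about the pattern above are worth noting: `Database()` already runs `__init__`, so `doadmin.Database().__init__()` initializes twice, and a single class-level connection is shared state rather than a pool. A cleaned-up sketch of the shared-connection idea (stdlib sqlite3 stands in for mysql.connector so the example is self-contained and runnable):

```python
import sqlite3

class Database:
    """One shared connection for the whole process (a sketch; sqlite3
    stands in for mysql.connector). Note that a single shared
    connection is NOT a connection pool."""
    _connection = None

    def __init__(self):
        if Database._connection is None:
            try:
                Database._connection = sqlite3.connect(":memory:", check_same_thread=False)
            except sqlite3.Error as error:
                print(f"Error: connection not established: {error}")
        self.connection = Database._connection

    def get_c(self):
        # Hand out a fresh cursor per request instead of sharing one,
        # so two callers can't clobber each other's result sets.
        return self.connection.cursor()

db = Database()                # Database() already runs __init__; no explicit call needed
cursor1 = Database().get_c()   # any later instance reuses the same connection
cursor1.execute("CREATE TABLE posts (id INTEGER, body TEXT)")
cursor1.execute("INSERT INTO posts VALUES (1, 'hello')")
posts = list(cursor1.execute("SELECT * FROM posts"))
print(posts)  # [(1, 'hello')]
```

Creating the shared connection once at app startup is fine for a single-threaded app; for concurrent request handling, a real pool (or the driver's own pooling, e.g. mysql.connector's pooling module) is the safer choice.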
Corie LeClair (25 rep)
Jul 28, 2023, 06:30 AM • Last activity: Jul 28, 2023, 12:32 PM
0 votes
2 answers
1562 views
Should I re-use a database connection or create new collections while closing old ones? social media
this is for a social media with constant requests for new posts, checking is someone is following, getting comments, ect. i wont specify which language as it doesnt really matter but, currently in my code i - make connection - get/do what i want - close and then repeat instead should I use a system...
This is for a social media site with constant requests for new posts, checking if someone is following, getting comments, etc. I won't specify which language as it doesn't really matter, but currently in my code I: - make a connection - get/do what I want - close and then repeat. Instead, should I use a system where I make a connection and then do things with it, while sometimes running an update? Thanks!
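Reusing connections is generally preferable for constant small requests, since the connect/authenticate handshake on every query is pure overhead; the usual middle ground is a pool that hands connections out and takes them back. A minimal sketch of that idea (stdlib only; sqlite3 stands in for a real network database driver, and the class name is made up for illustration):

```python
import sqlite3
from queue import Queue

class SimplePool:
    """A tiny fixed-size connection pool sketch (not production code)."""

    def __init__(self, size: int = 5):
        self._idle = Queue(maxsize=size)
        for _ in range(size):
            # In a real app this would be an expensive network connect.
            self._idle.put(sqlite3.connect(":memory:", check_same_thread=False))

    def acquire(self):
        return self._idle.get()     # blocks until a connection is free

    def release(self, conn):
        self._idle.put(conn)        # return it to the pool instead of closing

pool = SimplePool(size=2)
conn = pool.acquire()
row = conn.execute("SELECT 1").fetchone()
pool.release(conn)                  # the next request reuses this connection
print(row)  # (1,)
```

Most drivers and frameworks ship a pool like this built in, so in practice you would configure that rather than write your own; the sketch just shows why reuse beats connect/close per request.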
Corie LeClair (25 rep)
Jul 13, 2023, 06:06 PM • Last activity: Jul 13, 2023, 07:58 PM
7 votes
2 answers
2284 views
Verifying connection pooling server side with CONTEXT_INFO on
I'm working with a 3 tiered application, Microsoft Dynamics AX, where the middle tier maintains connections to a SQL Server. Several clients connect to this middle tier server. The middle tier server typically has several connections open to the SQL Server, so I'm pretty sure they are being pooled,...
I'm working with a 3-tiered application, Microsoft Dynamics AX, where the middle tier maintains connections to a SQL Server. Several clients connect to this middle tier server. The middle tier server typically has several connections open to the SQL Server, so I'm pretty sure they are being pooled, however there is no documentation available as to how this is implemented. Normally we cannot relate SPIDs to users or client applications, but there is an option where we can set a registry key (specific to Microsoft Dynamics AX) which makes this information available in the context_info field of sys.dm_exec_sessions. Again, there is no documentation about how this is implemented. The only information we have on this is a vague blog entry on MSDN. The post mentions > Adding this information has a small performance overhead. Since we don't know any of the implementation details, such as: 1. Is the information somehow included in the connection string, or is this done by SET CONTEXT_INFO? 1. When are connections reused? 1. What exact impact can be expected? Is there any way to determine server side how the connection pooling is working and what the impact of the context_info is? **update:** Using this query from here

    SELECT des.program_name,
           des.login_name,
           des.host_name,
           -- der.database_id,
           COUNT(des.session_id) AS [Connections]
    FROM sys.dm_exec_sessions des
    INNER JOIN sys.dm_exec_connections DEC
        ON des.session_id = DEC.session_id
    WHERE des.is_user_process = 1
    --AND des.status <> 'running'
    GROUP BY des.program_name, des.login_name, des.host_name -- ,der.database_id
    HAVING COUNT(des.session_id) > 2
    ORDER BY COUNT(des.session_id) DESC

I can see connection pooling is used.
Tom V (15752 rep)
May 12, 2015, 01:28 PM • Last activity: Mar 18, 2023, 12:00 PM
Showing page 1 of 20 total questions