
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
167 views
PostgreSQL: Database Layer Pooling v/s Application Layer pooling
I would like some recommendations about connection pooling in PostgreSQL. We already have connection pooling using PgBouncer behind an Azure LB. PgBouncer itself is working well, and later we plan to change the pooling method to transaction. However, we recently found that the Node.js client library does connection pooling of its own, using pg.Pool routed to the LB. So we now have two layers of pooling: one at the application layer and one at the db layer. Is it good practice to have multiple layers of pooling, or is PgBouncer alone enough, considering the transaction pooling method? What should we consider when deciding which layer of pooling is best for our system?
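For what it's worth, the two layers do different jobs: an application-side pool (pg.Pool) keeps a handful of warm client connections per Node.js process, while PgBouncer multiplexes all of those onto far fewer server connections. A minimal sketch of the transaction-pooling side, with host names and sizes invented purely for illustration:

```ini
; pgbouncer.ini -- illustrative fragment only; names and sizes are invented
[databases]
appdb = host=10.0.0.10 port=5432 dbname=appdb

[pgbouncer]
pool_mode = transaction    ; a server connection is leased per transaction
default_pool_size = 20     ; server connections per (user, database) pair
max_client_conn = 500      ; client connections PgBouncer will accept
```

One caveat worth checking before the planned switch: under transaction pooling, session state such as prepared statements, SET parameters, and advisory locks does not carry over between transactions.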
goodfella (595 rep)
Mar 15, 2024, 01:57 AM • Last activity: Jul 13, 2025, 02:05 PM
1 vote
1 answer
592 views
pgpool 4.2.1 and postgres 11 streaming replication problem
I've set up a 2+2 pgpool and PostgreSQL cluster: two pgpool nodes in active/standby configuration and streaming replication between the databases. All seemed to work fine until some failover and online recovery tests took place. Suddenly I am no longer able to see the "streaming" status from either `show pool_nodes` or the `pcp_node_info` command. All users have the pg_monitor role granted. I don't understand why I was able to see the "streaming" (and async) replication state before, but not any more. Could someone help me understand why?

    postgres=# show pool_nodes;
     node_id | hostname | port | status | lb_weight | role    | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
    ---------+----------+------+--------+-----------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
     0       | xxxxxxx  | 5432 | up     | 0.500000  | primary | 0          | false             | 0                 |                   |                        | 2021-03-16 01:37:19
     1       | yyyyyy   | 5432 | up     | 0.500000  | standby | 0          | true              | 0                 |                   |                        | 2021-03-16 01:37:19
    (2 rows)

Or from the pcp_node_info command:

    [postgres@xxxxxxx ~]$ pcp_node_info --verbose -h localhost 0
    Password:
    Hostname               : xxxxxxx
    Port                   : 5432
    Status                 : 3
    Weight                 : 0.500000
    Status Name            : down
    Role                   : standby
    Replication Delay      : 0
    Replication State      :        <<<=== it is empty
    Replication Sync State :        <<<=== it is empty
    Last Status Change     : 2021-03-16 00:02:59

    [postgres@yyyyyyyy ~]$ pcp_node_info --verbose -h localhost 1
    Password:
    Hostname               : yyyyyyyy
    Port                   : 5432
    Status                 : 1
    Weight                 : 0.500000
    Status Name            : waiting
    Role                   : primary
    Replication Delay      : 0
    Replication State      :
    Replication Sync State :
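As background, pgpool's streaming replication check fills replication_state and replication_sync_state from pg_stat_replication on the primary, connecting as sr_check_user. A quick diagnostic sketch (run manually on the primary as that user) to confirm the rows are actually visible:

```sql
-- If sr_check_user lacks pg_monitor (or superuser), these columns come back
-- NULL/empty for other users' walsenders, and pgpool shows blank state fields.
SELECT application_name, state, sync_state, replay_lag
FROM pg_stat_replication;
```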
ilputto (11 rep)
Mar 16, 2021, 01:40 AM • Last activity: Jul 6, 2025, 07:03 AM
0 votes
1 answer
208 views
Password authentication with Pgpool
I'm trying to figure out if I'm misunderstanding Pgpool or configuring it incorrectly. I have a database that various users connect to. I want to set up Pgpool to act as a load balancer, and perhaps also to perform failover. I want the database to handle authentication, but logins need to be encrypted running over the network. I've tried various configurations and I can get authentication to work by setting the password in the pool_passwd file, but if I then log into the database and change my password, even if I connect via Pgpool, the pool_passwd file does not update. Can Pgpool not be set up to encrypt data, but act as a passthrough to the DB for authentication?
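For reference, pool_passwd is a static file that pgpool only reads; a password change inside PostgreSQL is never written back to it. A sketch of the manual refresh that is typically needed afterwards (paths and the user name are assumptions, adjust to your install):

```shell
# Regenerate the pool_passwd entry after changing the password in PostgreSQL;
# pgpool has no mechanism to sync this file automatically.
pg_enc -m -k /etc/pgpool2/.pgpoolkey -u myuser -p   # prompts for the new password
pgpool reload                                       # make pgpool re-read pool_passwd
```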
Adam Blomeke (1 rep)
Jan 14, 2025, 08:31 PM • Last activity: Jun 27, 2025, 10:04 PM
2 votes
1 answer
3540 views
Automatic recovery of the failed postgresql master node is not working with pgpool II
I am new to PostgreSQL and Pgpool-II setup. I have configured PostgreSQL HA/load balancing using Pgpool-II and repmgr, following a tutorial link. The setup consists of 3 nodes; the application and OS versions are as follows:

**OS version** => CentOS 6.8 (on all 3 nodes)
**Pgpool node** => 192.168.0.4
**Postgresql Nodes**:
**node1** (master in read-write mode) => 192.168.0.6
**node2** (standby node in read-only mode) => 192.168.0.7
**Pgpool II version** => pgpool-II version 3.5.0 (ekieboshi)
**Postgresql Version** => PostgreSQL 9.4.8
**Repmgr Version** => repmgr 3.1.3 (PostgreSQL 9.4.8)

**I have configured Pgpool in master-slave mode using streaming replication.** The setup is as shown in the image below.

When I bring down the master node (192.168.0.6), automatic failover happens successfully and the slave node (192.168.0.7) becomes the new master. After failover, I have to recover the failed node (192.168.0.6) manually, sync it with the new master, and then register it as a new standby node. I want to automate the recovery of the failed node and add it back to the cluster. The **pgpool.conf** file on the Pgpool node (192.168.0.4) contains the parameter **recovery_1st_stage_command**, which I have set to **recovery_1st_stage_command = 'basebackup.sh'**. I have placed the 'basebackup.sh' script on **both** PostgreSQL nodes (192.168.0.6, 192.168.0.7) in the data directory **'/var/lib/pgsql/9.4/data'**, along with the **'pgpool_remote_start'** script. I have also created the pgpool extensions **"pgpool_recovery" and "pgpool_adm"** on both database nodes. After the failover completes, 'basebackup.sh' is not executed automatically.
I have to run **'pcp_recovery_node'** manually on the **Pgpool node (192.168.0.4)** to recover the failed node (192.168.0.6). How can I automate the execution of the **pcp_recovery_node** command on the Pgpool node without any manual intervention? The scripts I use are as follows:

basebackup.sh script:

    #!/bin/bash
    # first stage recovery
    # $1 datadir
    # $2 desthost
    # $3 destdir

    # as I'm using repmgr it's not necessary for me to know datadir(master) $1
    RECOVERY_NODE=$2
    CLUSTER_PATH=$3
    # repmgr needs to know the master's ip
    MASTERNODE=`/sbin/ifconfig eth0 | grep inet | awk '{print $2}' | sed 's/addr://'`
    cmd1=`ssh postgres@$RECOVERY_NODE "repmgr -D $CLUSTER_PATH --force standby clone $MASTERNODE"`
    echo $cmd1

pgpool_remote_start script:

    #!/bin/sh
    if [ $# -ne 2 ]
    then
        echo "pgpool_remote_start remote_host remote_datadir"
        exit 1
    fi

    DEST=$1
    DESTDIR=$2
    PGCTL=/usr/pgsql-9.4/bin/pg_ctl

    ssh -T $DEST $PGCTL -w -D $DESTDIR start 2>/dev/null 1>/dev/null < /dev/null &

Help me with the procedure to automate the recovery of the failed node. Also let me know whether repmgr is compulsory for failover, or whether it can be done without repmgr. Please also describe any other failover method that does not use repmgr, with its advantages and disadvantages compared to repmgr.
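One pattern for the automation (a sketch, not taken from the poster's environment) is to append a pcp_recovery_node call to the failover script on the pgpool node, once the promotion has finished; a ~/.pcppass file lets it run without a password prompt. Check that the pcp tools on this pgpool version support the -w flag:

```shell
# Hypothetical tail of failover.sh; the failed node id would arrive via the
# %d placeholder of failover_command. Requires ~/.pcppass
# (host:port:user:password, mode 0600) so that -w can skip the prompt.
FAILED_NODE_ID=$1
pcp_recovery_node -h localhost -p 9898 -U postgres -w -n "$FAILED_NODE_ID" \
    >> /tmp/pgpool_failover.log 2>&1
```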
yravi104 (21 rep)
Sep 8, 2016, 04:51 PM • Last activity: Jun 4, 2025, 03:08 AM
5 votes
1 answer
1757 views
Poor performance in my pgpool cluster
I've configured a Postgres cluster with pgpool2 using a tutorial link. The replication seems to be OK, but I ran some benchmarks (pgbench) and the performance is lower than a single node, for example:

> pgbench -c 16 -j 16 -T 600 -S bench2 -h "ONE SINGLE POSTGRES NODE" -p 5432
> number of transactions actually processed: 7752147
> tps = 12920.095988 (including connections establishing)
> tps = 12921.661448 (excluding connections establishing)
>
> pgbench -c 16 -j 16 -T 600 -S bench2 -h "PGPOOLNODE" -p 5432
> number of transactions actually processed: 389800
> tps = 648.857810 (including connections establishing)
> tps = 648.886713 (excluding connections establishing)
>
> pgbench -c 16 -j 16 -T 600 -S bench2 -h "MASTERNODE" -p 5432
> number of transactions actually processed: 7093473
> tps = 11822.379159 (including connections establishing)
> tps = 11823.337051 (excluding connections establishing)

Each node has the default configuration. All nodes are identical virtual machines on the same network.
This is my pgpool.conf:

    listen_addresses = '*'
    port = 5432
    socket_dir = '/var/run/postgresql'
    pcp_port = 9898
    pcp_socket_dir = '/var/run/postgresql'
    backend_hostname0 = '192.168.1.177'
    backend_port0 = 5432
    backend_weight0 = 0
    backend_data_directory0 = '/var/lib/postgresql/9.1/main'
    backend_flag0 = 'ALLOW_TO_FAILOVER'
    backend_hostname1 = '192.168.1.175'
    backend_port1 = 5432
    backend_weight1 = 1
    backend_data_directory1 = '/var/lib/postgresql/9.1/main'
    backend_flag1 = 'ALLOW_TO_FAILOVER'
    backend_hostname2 = '192.168.1.176'
    backend_port2 = 5432
    backend_weight2 = 1
    backend_data_directory2 = '/var/lib/postgresql/9.1/main'
    backend_flag2 = 'ALLOW_TO_FAILOVER'
    enable_pool_hba = off
    authentication_timeout = 60
    ssl = off
    num_init_children = 32
    max_pool = 4
    child_life_time = 300
    child_max_connections = 0
    connection_life_time = 0
    client_idle_limit = 0
    log_destination = 'stderr'
    print_timestamp = on
    log_connections = on
    log_hostname = off
    log_statement = on
    log_per_node_statement = on
    log_standby_delay = 'none'
    syslog_facility = 'LOCAL0'
    syslog_ident = 'pgpool'
    debug_level = 0
    pid_file_name = '/var/run/postgresql/pgpool.pid'
    logdir = '/var/log/postgresql'
    connection_cache = on
    reset_query_list = 'ABORT; DISCARD ALL'
    replication_mode = off
    replicate_select = off
    insert_lock = on
    lobj_lock_table = ''
    replication_stop_on_mismatch = off
    failover_if_affected_tuples_mismatch = off
    load_balance_mode = on
    ignore_leading_white_space = on
    white_function_list = ''
    black_function_list = 'nextval,setval'
    master_slave_mode = on
    master_slave_sub_mode = 'stream'
    sr_check_period = 0
    sr_check_user = 'ZZZ'
    sr_check_password = 'YYY'
    delay_threshold = 0
    follow_master_command = ''
    parallel_mode = off
    enable_query_cache = off
    pgpool2_hostname = ''
    system_db_hostname = 'localhost'
    system_db_port = 5432
    system_db_dbname = 'pgpool'
    system_db_schema = 'pgpool_catalog'
    system_db_user = 'pgpool'
    system_db_password = 'zzz'
    health_check_period = 0
    health_check_timeout = 20
    health_check_user = 'pgpool'
    health_check_password = 'zzz'
    failover_command = '/var/lib/postgresql/bin/failover.sh %d %M %m'
    failback_command = ''
    fail_over_on_backend_error = on
    recovery_user = 'nobody'
    recovery_password = ''
    recovery_1st_stage_command = ''
    recovery_2nd_stage_command = ''
    recovery_timeout = 90
    client_idle_limit_in_recovery = 0
    relcache_expire = 0

Any suggestion? Thanks
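A couple of settings in that configuration plausibly explain part of the gap (suggestions only, each worth benchmarking in isolation): per-statement logging is expensive at 12k tps, and num_init_children caps concurrent clients.

```ini
# Illustrative changes only -- re-run pgbench after each one.
log_statement = off            # logging every statement costs real throughput
log_per_node_statement = off
log_connections = off
num_init_children = 64         # allow more concurrent client slots (was 32)
```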
Sergio Belmar Argudo (51 rep)
Feb 27, 2014, 12:42 PM • Last activity: Jun 1, 2025, 03:05 PM
2 votes
2 answers
131 views
Can pgpool handle read queries while the primary is down?
I am trying to configure pgpool to distribute the traffic over two Postgres clusters. One cluster has the primary node, and the other is a replica cluster with a designated primary node that follows the primary (streaming replication). The two clusters are managed by CloudNativePG. In my setup, when the primary cluster goes down, promoting the replica cluster is possible only through manual intervention (by modifying some Kubernetes resources). And accordingly, I do not want or expect pgpool to attempt promoting a replica. What I am trying to get pgpool to do is to keep handling the read-only traffic while the primary is down, and I am not sure if that is supported. This is my pgpool configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgpool
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgpool
  template:
    metadata:
      labels:
        app: pgpool
    spec:
      containers:
      - name: pgpool
        image: pgpool/pgpool
        env:
        - name: PGPOOL_PARAMS_FAILOVER_COMMAND
          value: ""

        - name: PGPOOL_PARAMS_REPLICATE_SELECT
          value: "on"

        - name: PGPOOL_PARAMS_BACKEND_CLUSTERING_MODE
          value: streaming_replication

        - name: PGPOOL_PARAMS_LOAD_BALANCE_MODE
          value: "on"

        - name: PGPOOL_PARAMS_FAILOVER_ON_BACKEND_ERROR
          value: "off"

        - name: PGPOOL_PARAMS_FAILOVER_ON_BACKEND_SHUTDOWN
          value: "off"

        - name: PGPOOL_PARAMS_BACKEND_HOSTNAME0
          value: "postgres-site1-rw"
        - name: PGPOOL_PARAMS_BACKEND_PORT0
          value: "5432"
        - name: PGPOOL_PARAMS_BACKEND_WEIGHT0
          value: "0.5"
        - name: PGPOOL_PARAMS_BACKEND_FLAG0
          value: "DISALLOW_TO_FAILOVER|ALWAYS_PRIMARY"

        - name: PGPOOL_PARAMS_BACKEND_HOSTNAME1
          value: "postgres-site2-ro"
        - name: PGPOOL_PARAMS_BACKEND_PORT1
          value: "5432"
        - name: PGPOOL_PARAMS_BACKEND_WEIGHT1
          value: "0.5"
        - name: PGPOOL_PARAMS_BACKEND_FLAG1
          value: "DISALLOW_TO_FAILOVER"

        - name: PGPOOL_PARAMS_LOG_CLIENT_MESSAGES
          value: "on"

        - name: PGPOOL_PARAMS_CLIENT_MIN_MESSAGES
          value: "DEBUG5"

        - name: PGPOOL_PARAMS_LOG_MIN_MESSAGES
          value: "DEBUG5"

        - name: PGPOOL_PARAMS_LOG_PER_NODE_STATEMENT
          value: "on"

        - name: PGPOOL_PARAMS_LOG_CONNECTIONS
          value: "on"

        - name: PGPOOL_PARAMS_LOG_DISCONNECTIONS
          value: "on"

        - name: PGPOOL_PARAMS_LOG_ERROR_VERBOSITY
          value: "VERBOSE"

        - name: PGPOOL_PARAMS_ENABLE_POOL_HBA
          value: "on"

        - name: PGPOOL_PARAMS_SSL
          value: "off"

        - name: POSTGRES_USERNAME
          valueFrom:
            secretKeyRef:
              name: postgres-superuser-secret
              key: username

        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-superuser-secret
              key: password

        - name: PGPOOL_PASSWORD_ENCRYPTION_METHOD
          value: "scram-sha-256"

        - name: PGPOOL_ENABLE_POOL_PASSWD
          value: "true"

        - name: PGPOOL_SKIP_PASSWORD_ENCRYPTION
          value: "false"
Now when I bring the primary down and try to send a simple read request, I get this error.
$ kubectl exec -it  -- psql -p 9999 -h localhost -U postgres -c "SELECT username FROM test;"
Password for user postgres:
psql: error: connection to server at "localhost" (::1), port 9999 failed: FATAL:  failed to create a backend 0 connection
DETAIL:  not executing failover because failover_on_backend_error is off
command terminated with exit code 2
This makes me wonder, **can pgpool handle read queries while no primary is available?**
helmy (41 rep)
Mar 19, 2025, 02:36 PM • Last activity: Mar 20, 2025, 05:05 PM
1 vote
2 answers
5947 views
How to fix authentication through pgpool?
I have a cluster that consists of: 1) pgpool, 2) master, 3) slave. Using pgpool-II version 4.3.2 and Postgres 14.4. Several users have been created; all are authenticated through pgpool. There was a need to create new users. In pool_passwd I create entries this way:

    pg_enc -m -f /etc/pgpool2/pgpool.conf -i users.txt

(users.txt contains the user logins and passwords.) When connecting with DBeaver:

> ERROR: backend authentication failed
> Details: backend response with kind 'E' when expecting 'R'
pg_md5 -m --config-file="/etc/pgpool2/pgpool.conf" -u "user" "pass"
> ERROR: failed to authenticate with backend using SCRAM
> Details: valid password not found

pool_hba.conf:
host    all         all         10.99.0.0/32          scram-sha-256
host    all         postgres    0.0.0.0/0             scram-sha-256

local   all         all                               md5
local   all         postgres                          scram-sha-256

host    all         all         127.0.0.1/32          md5
host    all         all         ::1/128               md5
host    all         all         0.0.0.0/0             md5

local all all    trust
host  all all    10.99.0.0/32 trust
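Note that pool_hba.conf, like pg_hba.conf, is read top-down and the first matching line wins, so the trust lines at the bottom are never reached; also, 10.99.0.0/32 matches only the single address 10.99.0.0. A reordered sketch, assuming a /16 subnet was actually intended:

```ini
# first match wins: specific rules first, catch-all last
host    all    all    127.0.0.1/32    md5
host    all    all    ::1/128         md5
host    all    all    10.99.0.0/16    scram-sha-256   # /32 matched only 10.99.0.0 itself
host    all    all    0.0.0.0/0       md5
```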
What could be the problem?
Magi (141 rep)
Jul 12, 2022, 11:10 AM • Last activity: Feb 17, 2025, 09:06 AM
0 votes
1 answer
1143 views
pcp_attach_node not working in pgpool 2
I've been working on a pgpool2 setup with 2 DB nodes and I need to attach one node, which is currently down. When I run `pcp_attach_node -p 9898 -n 1 -U postgres` as the postgres user, I get the following error:

> pcp_attach_node: error while loading shared libraries: libpcp.so.1: cannot open shared object file: No such file or directory

I am using:

- pgpool-II-4.0.1
- Ubuntu 18.04 LTS
- PostgreSQL 10

**I have installed:** postgresql-contrib, gcc, make, libpq-dev

**pgpool installation:**

    ./configure
    make
    make install

**Contents of /usr/local/lib:**

    -rwxr-xr-x 1 root root    959 Mar 18 07:55 libpcp.la*
    lrwxrwxrwx 1 root root     15 Mar 18 07:55 libpcp.so -> libpcp.so.1.0.0*
    lrwxrwxrwx 1 root root     15 Mar 18 07:55 libpcp.so.1 -> libpcp.so.1.0.0*
    -rwxr-xr-x 1 root root 244296 Mar 18 07:55 libpcp.so.1.0.0*

Your help would be great.
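A source build installs libpcp.so.1 into /usr/local/lib, and the error means the runtime linker is not searching there (or its cache is stale). One common fix, sketched below, requires root; if /usr/local/lib is already listed under /etc/ld.so.conf.d, refreshing the cache with a plain `sudo ldconfig` may be enough:

```shell
# Register /usr/local/lib with the dynamic linker and rebuild its cache
echo '/usr/local/lib' | sudo tee /etc/ld.so.conf.d/pgpool.conf
sudo ldconfig
# Alternative for the current shell session only:
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
```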
Umanda (111 rep)
Apr 7, 2020, 07:00 PM • Last activity: Feb 14, 2025, 06:04 PM
0 votes
1 answer
708 views
support of multiple databases on different hosts by pgpool2
Let's assume we have 2 databases:

- Database1 is located on host1 and host2
- Database2 is located on host3 and host4

If I use pgpool2 to connect to these 2 databases with a single pgpool backend configuration:

    backend_hostname0 = 'host1'
    backend_port0 = 5432
    backend_weight0 = 1
    backend_hostname1 = 'host2'
    backend_port1 = 5433
    backend_weight1 = 1
    backend_hostname2 = 'host3'
    backend_port2 = 5434
    backend_weight2 = 1
    backend_hostname3 = 'host4'
    backend_port3 = 5435
    backend_weight3 = 1

With this kind of configuration, will pgpool2 throw an error when a client tries to connect to Database2 (which is absent on host1 and host2), or will it determine which hosts contain the appropriate database and direct the client to them? I ask because I don't see that pgpool supports something like a conf.d directory where I could define hosts with different databases separately; I see support only for a single config file, **pgpool.conf**, for backend definitions.
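pgpool treats all configured backends as replicas of one and the same cluster and does not route by database name, so the usual workaround is one pgpool instance per cluster, each with its own config file and listen port. A sketch (ports and paths are invented for illustration):

```ini
# /etc/pgpool-db1/pgpool.conf -- fronts Database1 on host1/host2
port = 9999
backend_hostname0 = 'host1'
backend_port0 = 5432
backend_hostname1 = 'host2'
backend_port1 = 5433

# /etc/pgpool-db2/pgpool.conf -- fronts Database2 on host3/host4
port = 9998
backend_hostname0 = 'host3'
backend_port0 = 5434
backend_hostname1 = 'host4'
backend_port1 = 5435
```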
user54 (135 rep)
Dec 9, 2020, 08:49 PM • Last activity: Feb 12, 2025, 10:02 PM
0 votes
0 answers
54 views
pcp_attach_node: username and/or password does not match for user "postgres"
I am setting up pgpool for the first time, using a two-node cluster. The primary node is on 10.6.1.129 and the secondary node is on .131.
backend_hostname0 = '10.6.1.129'
backend_port0 = 5433
backend_weight0 = 1
backend_data_directory0 = '/opt/data/data15_new'
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = '10.6.1.131'
backend_port1 = 5433
backend_weight1 = 1
backend_data_directory1 = '/opt/data/data15_new'
backend_flag1 = 'ALLOW_TO_FAILOVER'
I have load balancing working effectively. When I bring the secondary node down it fails over successfully to the primary. When I bring the secondary back up, the failback doesn't work. The logs give this:
main pid 411463: LOG:  reaper handler
main pid 411463: LOG:  reaper handler: exiting normally
child pid 411463: LOG:  failover or failback event detected
child pid 411463: DETAIL:  restarting myself
main pid 411463: LOG:  reaper handler
main pid 411463: LOG:  reaper handler: exiting normally
child pid 411463: LOG:  failover or failback event detected
child pid 411463: DETAIL:  restarting myself
main pid 411463: LOG:  reaper handler
main pid 411463: LOG:  reaper handler: exiting normally
child pid 411463: LOG:  failover or failback event detected
child pid 411463: DETAIL:  restarting myself
main pid 411463: LOG:  reaper handler
main pid 411463: LOG:  reaper handler: exiting normally
child pid 411463: LOG:  failover or failback event detected
child pid 411463: DETAIL:  restarting myself
This group appears multiple times, but if I look at the pgpool_status file, node 2 is still down:
up
down
Doing some further investigation, I tried to manually add the node back (which should be fine in theory, since the DB was only down for a few seconds), and I believe I've found the issue:
$pcp_attach_node -h 127.0.0.1 -n 1 -v
FATAL: authentication failed for user "postgres"
DETAIL: username and/or password does not match
I reset the password for postgres and put the password in my .pgpass file, but the error persists. Is there another place that I should put the postgres password to get pcp to recognize it?
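Worth noting: the pcp_* tools do not authenticate against PostgreSQL at all, so neither the database password nor .pgpass is consulted. They check the user against pcp.conf (username:md5hash) and, for non-interactive use, read ~/.pcppass. A sketch with an invented password and an install path that varies by distribution:

```shell
# 1) Register the pcp user (separate from any PostgreSQL role) in pcp.conf
echo "postgres:$(pg_md5 mysecret)" >> /etc/pgpool-II/pcp.conf

# 2) Optional: ~/.pcppass (format host:port:user:password) lets -w skip the prompt
echo "127.0.0.1:9898:postgres:mysecret" > ~/.pcppass
chmod 0600 ~/.pcppass

pcp_attach_node -h 127.0.0.1 -p 9898 -U postgres -w -n 1
```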
Adam Blomeke (1 rep)
Jan 24, 2025, 03:32 PM • Last activity: Jan 24, 2025, 03:33 PM
0 votes
0 answers
47 views
Issues with High Write Transactions in PostgreSQL Cluster with Pgpool Load Balancer
I am currently managing a 3-node PostgreSQL asynchronous streaming replication cluster setup, which includes one master node and two standby nodes, all sitting behind a pgpool load balancer. During high write transactions (approximately 90% write and 10% read), I'm encountering issues that cause my application to get stuck and generate errors. ### Issue: - **Scenario 1:** When I direct both read and write traffic solely to the primary node, everything functions correctly, and the application runs smoothly. - **Scenario 2:** When I distribute the load across the standby nodes (writes to the primary, reads to the standbys), I start seeing a failure rate of about 3% to 5%, causing the application logs to get stuck with the following error:
java.lang.RuntimeException: org.postgresql.util.PSQLException: The connection attempt failed.
Caused by: org.postgresql.util.PSQLException: The connection attempt failed
20230402 184393.543 Master-slave integrity test hit SQL problem: Database NOT ok! org.postgresql.util.PSQLException I/O error occurred while sending to the backend.
WARNING: Failed to check connection: java.net.ConnectException: Connection refused(Connection refused)
- Despite these issues, the CPU, RAM, and load average metrics on all database nodes appear normal. However, the load balancer logs consistently show the following error in both scenarios (primary-only and primary+standby):
SQLUtilsMDS/SAS pid 6354333: ERROR: unable to read data from frontend.
### Question: What could be causing these connection issues when distributing the read load across the standby nodes? Could this be an application-side issue, or is there something I need to tweak in my pgpool configuration or PostgreSQL settings to handle the high write transaction load better? Any insights or suggestions would be greatly appreciated.
Sheikh Wasiu Al Hasib (283 rep)
Aug 23, 2024, 07:11 PM
0 votes
1 answer
1959 views
PgPool-II can't find the servers where the databases live (even though the PgPool is on a db server)
PgPool-II 4.3.3, PostgreSQL 13.9, RHEL 8.6.

PostgreSQL is installed on FOOBARa and FOOBARb, and data is successfully being stream-replicated from a to b. Then I installed PgPool-II on a, and configured the two backend_hostname configuration variables to point to the two db servers. However, pgpool throws getaddrinfo() failed with error "Name or service not known" for **both** its own node and the remote node, even though the two hostname commands (and ssh) work perfectly. What am I misunderstanding?

    $ grep backend_hostname pgpool.conf
    backend_hostname0 = 'FOOBARa'
    backend_hostname1 = 'FOOBARb'

    $ hostname
    FOOBARa

    $ ssh -q FOOBARb hostname
    FOOBARb

    2023-01-25 12:07:45.156: child pid 786204: DEBUG: initializing backend status
    2023-01-25 12:07:45.157: main pid 786138: WARNING: failed to connect to PostgreSQL server, getaddrinfo() failed with error "Name or service not known"
    2023-01-25 12:07:45.157: main pid 786138: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 0
    2023-01-25 12:07:45.165: main pid 786138: DEBUG: authenticate kind = 10
    2023-01-25 12:07:45.172: main pid 786138: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 1
    2023-01-25 12:07:45.172: main pid 786138: DEBUG: pool_acquire_follow_primary_lock: lock was not held by anyone
    2023-01-25 12:07:45.172: main pid 786138: DEBUG: pool_acquire_follow_primary_lock: succeeded in acquiring lock
    2023-01-25 12:07:45.172: main pid 786138: DEBUG: verify_backend_node_status: there's no primary node
    2023-01-25 12:07:45.172: main pid 786138: DEBUG: pool_release_follow_primary_lock called
RonJohn (694 rep)
Jan 26, 2023, 01:48 AM • Last activity: Jun 8, 2024, 11:32 AM
1 vote
1 answer
4225 views
Is it good practice to use both PGBouncer and PGPool together?
We have set up streaming replication on Postgres 13 and it all works fine. The current setup is as follows. We have one primary Postgres and one secondary Postgres connected via streaming replication; the replication and failover management of the cluster is handled by Patroni. We have PgBouncer for connection pooling, currently connected to the PG master. This works fine without issues (PgBouncer-to-PG).

We wanted to use the secondary Postgres for reads, to reduce the read load on the primary, so we wanted to introduce PgPool into the ecosystem to get the benefit of read/write splitting. A couple of articles suggested that we could use PgBouncer and PgPool together to get better results from both tools (https://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/). So our setup now adds PgPool-II between PgBouncer and PG, with connection pooling disabled. This setup also works fine. However, I am not able to understand the connection pooling: I can see many connections in pgpool when I run SHOW POOL_POOLS;

- Who is holding on to the connections for connection pooling? PgBouncer or PgPool?
- Why do I see connections from SHOW POOL_POOLS in pgpool when connection pooling is disabled?
- Is this setup right and scalable? What configuration do I need to change to get the best of both tools?
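For reference, PgPool-side pooling is governed by connection_cache; a plausible division of labor, sketched below with invented values, is to let PgBouncer own the pooling and PgPool own the routing. (Whether SHOW POOL_POOLS still lists slots with caching off is worth verifying; each pgpool child process reports its backend sockets, pooled or not.)

```ini
# pgpool.conf -- illustrative fragment only, values are guesses
connection_cache = off      # do not cache backend connections inside pgpool
load_balance_mode = on      # split reads to the standby, writes to the primary
num_init_children = 100     # at least as many as PgBouncer may open toward pgpool
```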
Pramod (111 rep)
Oct 26, 2021, 10:19 AM • Last activity: Apr 30, 2024, 09:16 AM
1 vote
0 answers
97 views
Why is the replication delay in pgpool showing a very large number?
To calculate replication delay, pgpool obtains the value from the view **pg_stat_replication** (column **replay_lag**). However, sometimes executing **show pool_nodes** displays a significantly larger number. In the **pg_stat_replication** view, **replay_lag** is represented as a value like **00:00:00.778134**. How is **replication_delay** calculated in **pgpool**? I suspect it is converted into microseconds. I am looking for an expert opinion.
node_id  |   hostname   | port | status | lb_weight |  role   | select_cnt  | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+--------------+------+--------+-----------+---------+-------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0       | 		db1		| 5678 | up     | 0.200000  | primary | 12401123776 | false             | 0                 |                   |                        | 2023-05-28 00:45:31
 1       | 		db2		| 5678 | up     | 0.200000  | standby | 11909003735 | false             | 0                 | streaming         | async                  | 2023-05-28 00:45:31
 2       | 		db3 	| 5678 | up     | 0.200000  | standby | 11930293446 | true              | 61768             | streaming         | async                  | 2023-05-28 00:45:31
 3       | 		db4 	| 5678 | up     | 0.200000  | standby | 11829890748 | false             | 776136            | streaming         | async                  | 2023-05-28 00:45:31
 4       | 		db5 	| 5678 | up     | 0.200000  | standby | 11898267765 | false             | 124128            | streaming         | async                  | 2023-05-28 00:45:31
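If pgpool is deriving the figure from pg_stat_replication.replay_lag, then converting the interval to microseconds reproduces numbers of exactly this magnitude (00:00:00.776136 would become 776136). This reading of pgpool's internals is an assumption, not confirmed here; the conversion itself can be checked directly on the primary:

```sql
-- interval -> microseconds, matching the scale of replication_delay above
SELECT application_name,
       (EXTRACT(EPOCH FROM replay_lag) * 1000000)::bigint AS replay_lag_us
FROM pg_stat_replication;
```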
Sheikh Wasiu Al Hasib (283 rep)
Apr 27, 2024, 07:06 AM • Last activity: Apr 27, 2024, 05:58 PM
-1 votes
1 answer
121 views
pgpool-II connection pooling (only) but for long running transactions, can it even be done?
I have a job queue management system where workers fetch a job from a queue and maintain that job record locked, they use save-points to make other database changes and, on error, roll back to that save-point still holding on to the locked job record. Then finally update the job result/status and co...
I have a job queue management system where workers fetch a job from a queue and keep that job record locked; they use savepoints to make other database changes and, on error, roll back to that savepoint while still holding on to the locked job record. Finally they update the job result/status and commit. Since I sometimes need hundreds of workers, I am running into PostgreSQL's connection limits (I have 100 configured right now, but that already scratches the limits of memory, etc.). So I thought of using pgpool-II to get more connections, until I considered the fact that every worker holds onto a locked record, i.e. an open transaction. So the question is: would pgpool-II even help here, or would it have to keep the connection pinned while a transaction is running? In that case I would have to change my logic to update the job records rather than lock them. Right? Any other ideas? And why does PostgreSQL require so much memory for each possible connection anyway, if my connections actually don't do very much most of the time?
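A hypothetical sketch of the worker flow described above (the `job` table, its columns, and the `SKIP LOCKED` claim step are illustrative, not the asker's actual schema); the point is that the row lock is transaction-scoped:

```
BEGIN;
-- claim one queued job without blocking on jobs other workers hold
SELECT id FROM job
 WHERE status = 'queued'
 ORDER BY id
 LIMIT 1
 FOR UPDATE SKIP LOCKED;

SAVEPOINT work;
-- ... make the job's database changes ...
-- on error: ROLLBACK TO SAVEPOINT work;  -- the job row stays locked

UPDATE job SET status = 'done' WHERE id = ;  -- id from the SELECT above
COMMIT;
```

Because that lock only releases at COMMIT/ROLLBACK, a pooler in transaction mode still has to pin one server connection per in-flight job, so an extra pooling layer by itself would not raise the number of jobs that can be held concurrently.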
Gunther Schadow (523 rep)
Mar 24, 2024, 11:47 PM • Last activity: Mar 25, 2024, 03:40 PM
0 votes
1 answers
102 views
PGPOOL Load Balancing: Create Sequence is going to replica
I created a pgpool instance which can connect to one master database(primary for write) and one read replica. I have enabled below parameter: ``` disable_load_balance_on_write='dml_adaptive'; ``` Now my below command fails when connect to pgpool : ``` create sequence if not exists public.xyz; ERROR:...
I created a pgpool instance which can connect to one master database (the primary, for writes) and one read replica. I have enabled the parameter below:
disable_load_balance_on_write='dml_adaptive';
Now the command below fails when connecting through pgpool:
create sequence if not exists public.xyz;
ERROR: cannot execute CREATE SEQUENCE in a read-only transaction
I am surprised that a "CREATE" statement is getting load-balanced to the replica. Can anybody shed some light on this, please?
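For reference, a hedged pgpool.conf sketch (parameter names as in recent pgpool-II releases; the regex itself is only an illustration): with `dml_adaptive`, statements matching `primary_routing_query_pattern_list` are always sent to the primary, which is one way to make sure DDL such as CREATE SEQUENCE never reaches a standby.

```
# pgpool.conf -- parameter names from recent pgpool-II; pattern is illustrative
disable_load_balance_on_write = 'dml_adaptive'
# semicolon-separated regexes; matching statements are routed to the primary
primary_routing_query_pattern_list = '.*CREATE\s+SEQUENCE.*;.*nextval\(.*'
```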
Sajith P Shetty (312 rep)
Aug 4, 2023, 11:37 AM • Last activity: Aug 4, 2023, 03:58 PM
2 votes
2 answers
11136 views
Pgpool install - libpq is not installed or libpq is old
Based on the documentation [here][1], I want to create a pgpool II with PostgreSQL databases. When I try to install, I encounter this error: configure: error: libpq is not installed or libpq is old Searching the pgpool website, they mention to run `./configure ` with this command: --with-pgsql OR --...
Based on the documentation here , I want to set up Pgpool-II with PostgreSQL databases. When I try to install it, I encounter this error: configure: error: libpq is not installed or libpq is old Searching the pgpool website, they suggest running ./configure with one of these options: --with-pgsql OR --with-pgsql-includedir OR --with-pgsql-libdir But I'm still facing the same problem. Can anyone guide me from here? I am running PostgreSQL 9.2 installed from a yum package on CentOS 5.
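A hedged sketch of the usual fix (package and path names assume the PGDG 9.2 packages; adjust for your installation): install the libpq development files, then let `pg_config` tell `configure` where the headers and libraries actually live.

```
# install the libpq headers/libraries (PGDG package name; adjust as needed)
yum install postgresql92-devel

# point configure at the paths pg_config reports
./configure \
  --with-pgsql-includedir=$(/usr/pgsql-9.2/bin/pg_config --includedir) \
  --with-pgsql-libdir=$(/usr/pgsql-9.2/bin/pg_config --libdir)
```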
user119720 (105 rep)
Oct 16, 2012, 07:57 AM • Last activity: Jan 21, 2023, 10:46 AM
1 votes
0 answers
888 views
Pgpool for HA of Postgres with repmgr switchover. How to make Pgpool follow the primary
I'm trying to setup a highly available postgres (13) cluster(3 servers) with streaming replication. I first tried native replication and then decided to use repmgr to manage this so I could use the switchover option. So far so good, managed to get this up and running (happy). I then needed to set up...
I'm trying to set up a highly available Postgres (13) cluster (3 servers) with streaming replication. I first tried native replication and then decided to use repmgr to manage this so I could use the switchover option. So far so good, I managed to get this up and running (happy). I then needed to set up the high availability part and considered haproxy, pgbouncer and pgpool. I chose pgpool (4.3) but am beginning to regret this choice! After some pain I have a working cluster of 3 dedicated pgpool machines. I wanted to be able to manually switch over the primary (using repmgr) so that I can do server maintenance without impacting the clients using the DBs. I don't want to use the pgpool automatic failover as repmgr has this covered and works. The problem is that I can't get pgpool to follow the primary when it moves. I have tried a restart of pgpool with -D but that is not working as expected and is not a very good solution. How can I force pgpool to automatically re-evaluate the pg cluster and use the new primary? Checking the replicating cluster structure (primary, standby, standby) should be an option. Just following the primary should also be an option if the rest is to be managed outside of pgpool. The health check should be able to check for this too. (Another question, as it's not clear: when I have a cluster of pgpool, is the status info shared between members of the cluster?)
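Not an answer to the repmgr integration itself, but worth noting: pgpool does not re-scan the backends on its own, so after an external switchover you can detach and re-attach the affected node with the pcp tools and pgpool will re-detect which backend is the primary. A command sketch (host, pcp port and user are placeholders):

```
# after 'repmgr standby switchover' has moved the primary:
pcp_detach_node -h localhost -p 9898 -U pgpool 0   # old primary's node id
pcp_attach_node -h localhost -p 9898 -U pgpool 0   # re-attach; role is re-detected
pcp_node_info --verbose -h localhost -p 9898 -U pgpool 0
```

If you ever do let pgpool drive the failover itself, `follow_primary_command` (called `follow_master_command` before 4.1) is the built-in hook for re-pointing the remaining standbys.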
Pete x (11 rep)
Jun 17, 2022, 06:56 AM • Last activity: Jan 19, 2023, 08:54 PM
1 votes
2 answers
5928 views
Pgpool2 doesn't use the slave for load balancing
I've enabled streaming replication in my postgres 9.5.10 installed from ubuntu xenial repo. I want to enable load balance for pgpool-2 (3.7.0 amefuriboshi) installed from sources. So I have this pgpool.conf: https://pastebin.com/qWWgejQN As you can see, I set both nodes, turned off replication, turn...
I've enabled streaming replication in my postgres 9.5.10 installed from the ubuntu xenial repo. I want to enable load balancing for pgpool-2 (3.7.0 amefuriboshi) installed from sources. So I have this pgpool.conf: https://pastebin.com/qWWgejQN As you can see, I set both nodes, turned off replication, turned on master/slave mode and set it to stream. I also enabled memcached caching. Now I have a big problem: pgpool doesn't want to use the slave for balancing: postgres=# show pool_nodes; node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay ---------+----------------+------+--------+-----------+---------+------------+-------------------+------------------- 0 | localhost | 5433 | up | 0.500000 | primary | 0 | true | 0 1 | host2 | 5433 | unused | 0.500000 | standby | 0 | false | 0 (2 rows) What am I doing wrong?
abr_stackoverflow (204 rep)
Nov 25, 2017, 11:59 AM • Last activity: Dec 23, 2021, 01:14 PM
1 votes
0 answers
1988 views
Postgres Replication - WAL sender throws connection reset by peer
Good morning everyone, I hope containment is going well for you. For my part, I use this time to set up a Postgres 11 cluster on my servers, but I have some problems. Can you help me? I've got this error coming in a loop on the postres master. PostgreSQL Database directory appears to contain a datab...
Good morning everyone, I hope containment is going well for you. For my part, I am using this time to set up a Postgres 11 cluster on my servers, but I have some problems. Can you help me? I've got this error coming in a loop on the postgres master:

PostgreSQL Database directory appears to contain a database; Skipping initialization
2020-05-06 08:40:21.379 UTC LOG: listening on IPv4 address "0.0.0.0", port 5432
2020-05-06 08:40:21.379 UTC LOG: listening on IPv6 address "::", port 5432
2020-05-06 08:40:21.380 UTC LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-05-06 08:40:21.390 UTC LOG: database system was shut down at 2020-05-06 08:40:19 UTC
2020-05-06 08:40:21.390 UTC LOG: entering standby mode
2020-05-06 08:40:21.391 UTC LOG: consistent recovery state reached at 0/165A760
2020-05-06 08:40:21.391 UTC LOG: invalid record length at 0/165A760: wanted 24, got 0
2020-05-06 08:40:21.391 UTC LOG: trigger file found: /var/lib/postgresql/data/promote_to_master.tmp
2020-05-06 08:40:21.391 UTC LOG: redo is not required
2020-05-06 08:40:21.392 UTC LOG: database system is ready to accept read only connections
2020-05-06 08:40:21.394 UTC LOG: selected new timeline ID: 2
2020-05-06 08:40:21.416 UTC LOG: archive recovery complete
2020-05-06 08:40:21.421 UTC LOG: database system is ready to accept connections
2020-05-06 08:40:29.133 UTC ERROR: cannot execute SQL commands in WAL sender for physical replication
2020-05-06 08:40:29.134 UTC LOG: could not receive data from client: Connection reset by peer
2020-05-06 08:40:34.275 UTC ERROR: cannot execute SQL commands in WAL sender for physical replication
2020-05-06 08:40:34.275 UTC LOG: could not receive data from client: Connection reset by peer
2020-05-06 08:40:53.105 UTC ERROR: cannot execute SQL commands in WAL sender for physical replication
2020-05-06 08:40:53.106 UTC LOG: could not receive data from client: Connection reset by peer
2020-05-06 08:41:24.186 UTC ERROR: cannot execute SQL commands in WAL sender for physical replication
2020-05-06 08:41:24.186 UTC LOG: could not receive data from client: Connection reset by peer
2020-05-06 08:42:19.287 UTC ERROR: cannot execute SQL commands in WAL sender for physical replication
2020-05-06 08:42:19.287 UTC LOG: could not receive data from client: Connection reset by peer
2020-05-06 08:43:49.475 UTC ERROR: cannot execute SQL commands in WAL sender for physical replication
2020-05-06 08:43:49.475 UTC LOG: could not receive data from client: Connection reset by peer

The postgres client displays this during pg_basebackup:

pg_basebackup: could not send replication command "SHOW data_directory_mode": FATAL: Backend throw an error message
DETAIL: Exiting current session because of an error from backend
HINT: BACKEND Error: "cannot execute SQL commands in WAL sender for physical replication"
server closed the connection unexpectedly
This probably means the server terminated abnormally before or while processing the request.

My cluster is set up like this:

postgres-0 -> 192.168.9.227
postgres-1 -> 192.168.187.162
Virtual IP for the Master: db -> 10.105.49.122

I use a pgpool instance on each postgres host that listens on port 5433 and is connected to a backend on port 5432.
My configuration is done with these commands:

su postgres -c initdb
host_ip=$(getent ahostsv4 $HOSTNAME | cut -d ' ' -f1 | head -n 1)
echo "host replication ${REPLICA_USER} ${host_ip}/32 md5" >> /var/lib/postgresql/data/pg_hba.conf
REPLICATE_FROM="$(getent ahostsv4 "${MASTER_HOSTNAME}" | cut -d ' ' -f1 | head -n 1)"
echo "${postgres_conf}" | envsubst >> /var/lib/postgresql/data/postgresql.conf
echo "${recovery_conf}" | envsubst > /var/lib/postgresql/data/recovery.conf
su postgres -c postgres &
echo "CREATE USER ${REPLICA_USER} WITH REPLICATION LOGIN ENCRYPTED PASSWORD '${REPLICA_PASSWORD}';" | psql -U postgres

The variable $postgres_conf contains:

listen_addresses = '*'
wal_level = hot_standby
max_wal_senders = 8
hot_standby = on
archive_mode = on
archive_command = 'cp /var/lib/postgresql/data/%p /var/lib/postgresql/data/archive/%f'

The variable $recovery_conf contains:

standby_mode = on
recovery_target_timeline = 'latest'
primary_conninfo = 'host=${REPLICATE_FROM} port=5432 user=${REPLICA_USER} password=${REPLICA_PASSWORD}'
trigger_file = '/var/lib/postgresql/data/promote_to_master.tmp'

On the master I run:

touch /var/lib/postgresql/data/promote_to_master.tmp

And on the slave I run:

PGPASSWORD=${REPLICA_PASSWORD} su postgres -c "pg_basebackup -h ${REPLICATE_FROM} -D /var/lib/postgresql/data -U ${REPLICA_USER} -v -P"

Does anyone have a solution?
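One thing worth checking, given that the HINT shows a backend error relayed through pgpool: pg_basebackup opens a physical replication connection, which pgpool (listening on 5433) cannot proxy, so if ${REPLICATE_FROM} resolves to a pgpool endpoint the replication command fails exactly like this. A sketch of taking the base backup straight from the Postgres backend port (variables as in the question):

```
# connect to Postgres itself (5432), not to pgpool (5433)
PGPASSWORD=${REPLICA_PASSWORD} su postgres -c \
  "pg_basebackup -h ${REPLICATE_FROM} -p 5432 -D /var/lib/postgresql/data -U ${REPLICA_USER} -v -P"
```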
Shiishii (11 rep)
May 6, 2020, 09:20 AM • Last activity: Sep 24, 2021, 12:05 PM
Showing page 1 of 20 total questions