Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
191
views
PostgreSQL doesn't use grant for a view
I have a view called `viw_insideworktime` that performs a `SELECT` on the `worktime` table. However, when I try to select from the view, the error below appears:
SELECT * FROM viw_insideworktime;
ERROR: permission denied for table worktime
I already tried to apply grants to the user, but the error persists.
The grant applied was as follows:
GRANT SELECT ON TABLE viw_insideworktime TO jobwork;
GRANT SELECT ON TABLE worktime TO jobwork;
However, if I try to access the table directly, the data is displayed.
**Solved:**
I solved the problem by deleting the view and applying the GRANT below:
GRANT SELECT ON viw_insideworktime TO jobwork;
GRANT SELECT ON worktime TO jobwork;
Was there any other solution without having to delete the view?
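A detail worth knowing here: PostgreSQL executes a view with the privileges of the view's *owner*, not of the caller, so granting to `jobwork` alone cannot help if the owner lacks rights on the underlying table. A minimal sketch of what could be checked without dropping the view (the `view_owner` role name is a placeholder, not from the question):

```sql
-- Find out who owns the view; that role needs SELECT on worktime.
SELECT viewowner FROM pg_views WHERE viewname = 'viw_insideworktime';

-- Either grant the underlying privilege to the owner reported above...
GRANT SELECT ON worktime TO view_owner;  -- hypothetical role name

-- ...or change the view's owner to a role that already has it.
ALTER VIEW viw_insideworktime OWNER TO jobwork;
```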
Tom
(438 rep)
Mar 31, 2023, 07:36 PM
• Last activity: Jun 30, 2025, 09:00 PM
1
votes
0
answers
24
views
Suggestion regarding Migrating and Sharding 1TB standalone PostgreSQL Cluster with Partitions and Multiple Databases, using Citus
We're currently in the **evaluation/planning phase**, evaluating Citus (open-source) as a multi-tenant sharding solution for our existing PostgreSQL database cluster. We anticipate significant data growth with a new application feature and need to scale beyond our current single-node setup.
Current Setup:
- Standalone single-node PostgreSQL (open source), approximately 1 TB of data.
- pgpool-II is used for load balancing, with a primary and one standby for basic high availability.
- The cluster contains multiple databases; each database contains multiple schemas.
- Within schemas, we have numerous tables, functions, and other SQL objects.
- Many tables currently use standard PostgreSQL declarative partitioning (both range and hash methods) for performance management.
Goal: We aim to migrate to a Citus cluster and implement database sharding to handle the expected large volume of new data, while also effectively incorporating our existing 1 TB of historical data.
We have reviewed some of the basic Citus documentation but have several key questions regarding the migration and ongoing management of our specific setup:
1. Handling Multiple Databases: Our current structure has data spread across multiple logical databases within the single PostgreSQL instance. How should this multi-database structure be handled when migrating to Citus? Is there a recommended approach for managing multiple logical databases within a Citus cluster from a migration and ongoing data distribution perspective?
2. Existing and Future Partitioning: We heavily use standard PostgreSQL range and hash partitioning on our tables. How are these existing partitioned tables and their data migrated into a Citus distributed table setup? Does Citus automatically handle the data from partitions? How is partitioning typically handled for new and ongoing data within a Citus distributed table? Can we still use time-based partitioning, for example, effectively within Citus shards?
3. Load Balancing and High Availability (pgpool/Replication): Can our existing pgpool-II setup be repurposed or used in conjunction with a Citus cluster for load balancing client connections? What are the recommended strategies for high availability and replication within a Citus cluster itself (for both coordinator and worker nodes), and load balancing?
4. Schema Distribution to Workers: When we distribute a table (e.g., using create_distributed_table), how does Citus handle the schema definition on the worker nodes? Does the full schema structure (including other non-distributed objects or schemas) get replicated to workers, or only the necessary table definitions for the shards?
5. Monitoring in a Distributed Environment: On our single node, we rely on standard PostgreSQL system views and functions like pg_stat_activity, pg_stat_statements, pg_control_checkpoint(), pg_buffercache, pg_stat_user_tables, pg_stat_bgwriter, etc., for monitoring and performance tuning. How do these tools work in a distributed Citus cluster, where data and activity are spread across multiple worker nodes? How do we get a comprehensive, cluster-wide view of performance and activity?
We would appreciate guidance, insights, or experiences from the community on these points as we plan our migration to Citus.
**Also, please advise if there is another sharding solution that can be tried for our current setup.**
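On point 2, the usual Citus pattern is to distribute the (partitioned) parent table and keep using time-based partitioning on top of the distribution. A minimal sketch, assuming a table named `events` with a tenant column `tenant_id` (both names are assumptions, not from the question):

```sql
-- Distribute an existing declaratively partitioned parent table by the
-- tenant key; Citus shards each partition along with the parent.
SELECT create_distributed_table('events', 'tenant_id');

-- New time-based partitions can still be attached to the distributed
-- parent afterwards, so range partitioning keeps working per shard.
CREATE TABLE events_2025_05 PARTITION OF events
    FOR VALUES FROM ('2025-05-01') TO ('2025-06-01');
```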
Ramzy
(11 rep)
May 16, 2025, 07:48 AM
2
votes
1
answers
289
views
random_page_cost and memoize plan relation
Postgres 15.2. I have the following simple query:
SELECT mark.last_modified FROM mark
INNER JOIN element ON mark.element_id = element.id
INNER JOIN model ON element.model_id = model.id
WHERE model.brand_id = '9cb22c38-af8a-4347-988e-1b2287122d39'
ORDER BY mark.last_modified DESC LIMIT 1;
and my DB is configured with random_page_cost = 3.66 (it should be closer to 1, I know).
I analyzed the query and noticed it uses Memoize, with the following plan:
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=20338.19..30531.32 rows=1 width=8) (actual time=10588.047..10588.137 rows=0 loops=1)
Buffers: shared hit=1209004 read=8403
I/O Timings: shared/local read=8475.963
-> Nested Loop (cost=20338.19..1335251.72 rows=129 width=8) (actual time=10588.046..10588.135 rows=0 loops=1)
Buffers: shared hit=1209004 read=8403
I/O Timings: shared/local read=8475.963
-> Nested Loop (cost=20337.90..1326623.17 rows=278958 width=24) (actual time=101.060..10457.145 rows=277857 loops=1)
Buffers: shared hit=1205893 read=8403
I/O Timings: shared/local read=8475.963
-> Gather Merge (cost=20337.32..52826.58 rows=278958 width=24) (actual time=101.016..183.845 rows=277857 loops=1)
Workers Planned: 2
Workers Launched: 0
Buffers: shared hit=8396
-> Sort (cost=19337.30..19627.88 rows=116232 width=24) (actual time=100.708..147.960 rows=277857 loops=1)
Sort Key: mark.last_modified DESC
Sort Method: quicksort Memory: 29655kB
Buffers: shared hit=8396
-> Parallel Seq Scan on mark (cost=0.00..9558.33 rows=116232 width=24) (actual time=0.008..43.455 rows=277857 loops=1)
Buffers: shared hit=8396
-> Memoize (cost=0.57..5.24 rows=1 width=32) (actual time=0.037..0.037 rows=1 loops=277857)
Cache Key: mark.element_id
Cache Mode: logical
Hits: 36677 Misses: 241180 Evictions: 0 Overflows: 0 Memory Usage: 33916kB
Buffers: shared hit=1197497 read=8403
I/O Timings: shared/local read=8475.963
-> Index Scan using activity_pkey on element (cost=0.56..5.23 rows=1 width=32) (actual time=0.041..0.041 rows=1 loops=241180)
Index Cond: (id = mark.element_id)
Buffers: shared hit=1197497 read=8403
I/O Timings: shared/local read=8475.963
-> Memoize (cost=0.29..0.31 rows=1 width=16) (actual time=0.000..0.000 rows=0 loops=277857)
Cache Key: element.model_id
Cache Mode: logical
Hits: 276820 Misses: 1037 Evictions: 0 Overflows: 0 Memory Usage: 82kB
Buffers: shared hit=3111
-> Index Scan using model_pkey on model (cost=0.28..0.30 rows=1 width=16) (actual time=0.005..0.005 rows=0 loops=1037)
Index Cond: (id = element.model_id)
Filter: (brand_id = '9cb22c38-af8a-4347-988e-1b2287122d39'::uuid)
Rows Removed by Filter: 1
Buffers: shared hit=3111
Planning:
Buffers: shared hit=1045
Planning Time: 1.985 ms
Execution Time: 10598.922 ms
(43 rows)
However, when I change random_page_cost to a lower value (=1.1), the plan changes and no longer uses Memoize:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=16808.48..16808.49 rows=1 width=8) (actual time=425.764..425.765 rows=0 loops=1)
Buffers: shared hit=348291 read=212
I/O Timings: shared/local read=173.481
-> Sort (cost=16808.48..16808.81 rows=129 width=8) (actual time=425.763..425.764 rows=0 loops=1)
Sort Key: mark.last_modified DESC
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=348291 read=212
I/O Timings: shared/local read=173.481 QUERY PLAN
-> Nested Loop (cost=1.27..16807.84 rows=129 width=8) (actual time=425.738..425.739 rows=0 loops=1)
Buffers: shared hit=348288 read=212
I/O Timings: shared/local read=173.481
-> Nested Loop (cost=0.84..12253.80 rows=10110 width=16) (actual time=0.041..55.468 rows=111456 loops=1)
Buffers: shared hit=14132
-> Index Scan using model_brand_id_95f0c5ac on model (cost=0.28..4.33 rows=3 width=16) (actual time=0.028..0.033 rows=3 loops=1)
Index Cond: (brand_id = '9cb22c38-af8a-4347-988e-1b2287122d39'::uuid)
Buffers: shared hit=4
-> Index Scan using element_model_id_c798104e on element (cost=0.56..4043.31 rows=3984 width=32) (actual time=0.009..14.143 rows=37152 loops=3)
Index Cond: (model_id = model.id)
Buffers: shared hit=14128
-> Index Scan using mark_element_id_4d370815 on mark (cost=0.42..0.44 rows=1 width=24) (actual time=0.003..0.003 rows=0 loops=111456)
Index Cond: (element_id = element.id)
Buffers: shared hit=334156 read=212
I/O Timings: shared/local read=173.481
Planning:
Buffers: shared hit=514
Planning Time: 0.649 ms
Execution Time: 425.799 ms
(27 rows)
Besides the fact that the query performance is now better, I would like to understand the relation between random_page_cost and the lack of Memoize in the plan.
Cowabunga
(145 rep)
Jun 16, 2023, 04:31 PM
• Last activity: May 13, 2025, 05:03 PM
0
votes
1
answers
39
views
Why does PostgreSQL not check permissions of the table referencing (with ON DELETE CASCADE) a table being deleted from?
On PostgreSQL 15, I have the following objects: the `appuser_locked` table references the `appuser` table with `ON DELETE CASCADE`. `application_role` is granted `DELETE` on the `appuser` table but not on the `appuser_locked` table.
Why can `application_role` delete rows in `appuser_locked` by leveraging the `ON DELETE CASCADE` when deleting from the `appuser` table, despite not having `DELETE` granted on `appuser_locked`?
Aren't the permissions on the `appuser_locked` table supposed to apply and prevent deletion?
Reproducible case:
------------------
CREATE ROLE owner_role WITH INHERIT LOGIN;
CREATE ROLE application_role WITH INHERIT LOGIN;
CREATE DATABASE mydb
TEMPLATE = template0
ENCODING = 'UTF8'
OWNER = owner_role;
\connect mydb
SET ROLE owner_role;
CREATE TABLE appuser(
appuser_id integer NOT NULL,
CONSTRAINT appuser_pkey PRIMARY KEY (appuser_id)
);
GRANT SELECT, INSERT, DELETE ON appuser TO application_role;
CREATE TABLE appuser_locked(
appuser_id integer NOT NULL,
CONSTRAINT appuser_locked_pkey PRIMARY KEY (appuser_id),
CONSTRAINT appuser_locked_fkey FOREIGN KEY (appuser_id)
REFERENCES appuser (appuser_id)
ON DELETE CASCADE
ON UPDATE CASCADE
);
COMMENT ON TABLE appuser_locked IS 'Indication that a given user is locked for modification by owner_role.';
GRANT SELECT ON appuser_locked TO application_role;
\dp+ appuser
\dp+ appuser_locked
INSERT INTO appuser (appuser_id)
VALUES
(1),
(2);
INSERT INTO appuser_locked (appuser_id)
VALUES
(1);
SET ROLE application_role;
TABLE appuser;
TABLE appuser_locked;
DELETE FROM appuser_locked;
TABLE appuser;
TABLE appuser_locked;
DELETE FROM appuser;
TABLE appuser;
TABLE appuser_locked;
Case output:
------------------
CREATE ROLE owner_role WITH INHERIT LOGIN;
CREATE ROLE
CREATE ROLE application_role WITH INHERIT LOGIN;
CREATE ROLE
CREATE DATABASE mydb
TEMPLATE = template0
ENCODING = 'UTF8'
OWNER = owner_role;
CREATE DATABASE
\connect mydb
You are now connected to database "mydb" as user "postgres".
SET ROLE owner_role;
SET
CREATE TABLE appuser(
appuser_id integer NOT NULL,
CONSTRAINT appuser_pkey PRIMARY KEY (appuser_id)
);
CREATE TABLE
GRANT SELECT, INSERT, DELETE ON appuser TO application_role;
GRANT
CREATE TABLE appuser_locked(
appuser_id integer NOT NULL,
CONSTRAINT appuser_locked_pkey PRIMARY KEY (appuser_id),
CONSTRAINT appuser_locked_fkey FOREIGN KEY (appuser_id)
REFERENCES appuser (appuser_id)
ON DELETE CASCADE
ON UPDATE CASCADE
);
CREATE TABLE
COMMENT ON TABLE appuser_locked IS 'Indication that a given user is locked for modification by owner_role.';
COMMENT
GRANT SELECT ON appuser_locked TO application_role;
GRANT
\dp+ appuser
Access privileges
Schema | Name | Type | Access privileges | Column privileges | Policies
--------+---------+-------+---------------------------------+-------------------+----------
public | appuser | table | owner_role=arwdDxt/owner_role +| |
| | | application_role=ard/owner_role | |
(1 row)
\dp+ appuser_locked
Access privileges
Schema | Name | Type | Access privileges | Column privileges | Policies
--------+----------------+-------+-------------------------------+-------------------+----------
public | appuser_locked | table | owner_role=arwdDxt/owner_role+| |
| | | application_role=r/owner_role | |
(1 row)
INSERT INTO appuser (appuser_id)
VALUES
(1),
(2);
INSERT 0 2
INSERT INTO appuser_locked (appuser_id)
VALUES
(1);
INSERT 0 1
SET ROLE application_role;
SET
TABLE appuser;
appuser_id
------------
1
2
(2 rows)
TABLE appuser_locked;
appuser_id
------------
1
(1 row)
DELETE FROM appuser_locked;
ERROR: permission denied for table appuser_locked
TABLE appuser;
appuser_id
------------
1
2
(2 rows)
TABLE appuser_locked;
appuser_id
------------
1
(1 row)
DELETE FROM appuser;
DELETE 2
TABLE appuser;
appuser_id
------------
(0 rows)
TABLE appuser_locked;
appuser_id
------------
(0 rows)
soliz
(293 rep)
May 9, 2025, 10:21 AM
• Last activity: May 9, 2025, 11:06 AM
3
votes
2
answers
652
views
Workaround logical replication & FOR EACH STATEMENT triggers
We are using logical replication to sync data into a table. Perfect solution so far!
The problem now is that we need triggers on this subscriber table. Those triggers perform heavy calculations (think of calculating balances and aggregated data from the "raw" data in the subscriber table). That's why a FOR EACH STATEMENT trigger would be preferable: we definitely want to avoid those calculations running on every single row insert.
Documentation (https://www.postgresql.org/docs/current/logical-replication-architecture.html) says:
> The logical replication apply process currently only fires row triggers, not statement triggers.
Any ideas how we can achieve something similar without FOR EACH STATEMENT trigger? Keeping those two tables in sync - the subscriber table with the raw data & the table with calculated/aggregated data?
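One common workaround, sketched below with hypothetical table and function names, is to keep the per-row work trivially cheap: a row trigger only records which key changed, and a separate periodic job recomputes aggregates for the recorded keys in batches. Note that triggers on a subscriber only fire for the apply worker when enabled with `ENABLE ALWAYS` (or `ENABLE REPLICA`):

```sql
-- Queue table of keys touched by replication.
CREATE TABLE dirty_keys (key bigint PRIMARY KEY);

-- Cheap FOR EACH ROW trigger: just enqueue the changed key.
CREATE FUNCTION mark_dirty() RETURNS trigger AS $$
BEGIN
    INSERT INTO dirty_keys VALUES (NEW.key)
    ON CONFLICT (key) DO NOTHING;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER raw_mark_dirty
    AFTER INSERT OR UPDATE ON raw_table
    FOR EACH ROW EXECUTE FUNCTION mark_dirty();

-- Required so the logical replication apply worker fires the trigger.
ALTER TABLE raw_table ENABLE ALWAYS TRIGGER raw_mark_dirty;
```

A scheduled job (cron, pg_cron, or similar) can then aggregate all rows referenced by `dirty_keys` and truncate the queue, amortizing the heavy calculation across many replicated rows.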
Tom
(131 rep)
Nov 30, 2022, 09:46 AM
• Last activity: Apr 18, 2025, 08:46 AM
2
votes
0
answers
33
views
Deadlock seems to occasionally and regularly lead to replication failure
We have two PostgreSQL databases running on RHEL virtual servers (`PostgreSQL 15.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-20), 64-bit`). Our production server/database, which we'll call **Simon**, runs Airflow to do ETL and then stores the data; our analytics server/database is **Garfunkel** and receives the data. Both virtual servers are theoretically in the same centre, so I don't think networking should be an issue. Nor do we think this issue could be caused by them running out of resources.
The publication was set up on select tables and select columns, e.g.:
CREATE PUBLICATION prod_publication
FOR TABLE data_tables
WITH (publish = 'insert, update, delete', publish_via_partition_root = true);
And then the subscription on Garfunkel was created with
CREATE SUBSCRIPTION prod_data
CONNECTION 'host=simon port=5432 user=subscriber_bot dbname=data_warehouse'
PUBLICATION prod_raw_publication
WITH (connect = true, enabled = true, create_slot = false, slot_name = prod_data, synchronous_commit = 'off', binary = false, streaming = 'False', two_phase = false, disable_on_error = false);
Airflow is running on Simon, but its application database is a separate "database" on the server, which is not being published. Occasionally when backfilling data, Airflow results in a deadlock, which seems to lead to a failure of replication.
For example per this log on Simon:
2025-03-19 15:44:38.792 EDT ERROR: deadlock detected
2025-03-19 15:44:38.792 EDT DETAIL: Process 2354905 waits for ShareLock on transaction 157900530; blocked by process 2355302.
Process 2355302 waits for ShareLock on transaction 157900529; blocked by process 2354905.
Process 2354905: UPDATE dag_run SET last_scheduling_decision=NULL, updated_at='2025-03-19T19:44:34.528971+00:00'::timestamptz WHERE dag_run.id = 17606
Process 2355302: UPDATE task_instance SET state='scheduled', try_number=CASE WHEN (task_instance.state IS NULL OR task_instance.state != 'up_for_reschedule') THEN task_instance.try_number + 1 ELSE task_instance.try_number END, updated_at='2025-03-19T19:44:37.805821+00:00'::timestamptz WHERE task_instance.dag_id = 'unit_validation' AND task_instance.run_id = 'backfill__2025-02-28T10:00:00-05:00' AND (task_instance.task_id, task_instance.map_index) IN (('BasicValidation.t_staging_area', -1))
2025-03-19 15:44:38.792 EDT HINT: See server log for query details.
2025-03-19 15:44:38.792 EDT CONTEXT: while updating tuple (412,11) in relation "dag_run"
2025-03-19 15:44:38.792 EDT STATEMENT: UPDATE dag_run SET last_scheduling_decision=NULL, updated_at='2025-03-19T19:44:34.528971+00:00'::timestamptz WHERE dag_run.id = 17606
2025-03-19 15:44:48.804 EDT LOG: checkpoint starting: time
2025-03-19 15:48:53.836 EDT LOG: checkpoint complete: wrote 2104 buffers (12.8%); 0 WAL file(s) added, 0 removed, 0 recycled; write=245.029 s, sync=0.002 s, total=245.033 s; sync files=116, longest=0.001 s, average=0.001 s; distance=128470 kB, estimate=128470 kB
2025-03-19 15:49:48.895 EDT LOG: checkpoint starting: time
2025-03-19 15:50:52.679 EDT LOG: checkpoint complete: wrote 640 buffers (3.9%); 0 WAL file(s) added, 0 removed, 0 recycled; write=63.783 s, sync=0.001 s, total=63.785 s; sync files=38, longest=0.001 s, average=0.001 s; distance=9520 kB, estimate=116575 kB
2025-03-19 15:51:24.374 EDT LOG: could not receive data from client: Connection reset by peer
And a more recent example that also occurred after backfilling in Airflow.
2025-04-03 13:37:24.177 EDT LOG: terminating walsender process due to replication timeout
In both cases, dropping and recreating the subscription solves the issue, but this seems unideal.
What other information can we collect to help diagnose this issue? How can we be guarding against it?
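As a starting point for the diagnostics question, the built-in statistics views on both sides expose slot state and apply errors; nothing below is specific to the setup above:

```sql
-- On Simon (publisher): how far behind each walsender is.
SELECT application_name, state, sent_lsn, replay_lsn,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_bytes
FROM pg_stat_replication;

-- Slot retention: an inactive slot here means the subscriber stopped
-- consuming and WAL is piling up behind restart_lsn.
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;

-- On Garfunkel (subscriber): apply worker status and error counts.
SELECT * FROM pg_stat_subscription;
SELECT subname, apply_error_count FROM pg_stat_subscription_stats;
```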
raphael
(599 rep)
Apr 4, 2025, 03:36 PM
• Last activity: Apr 4, 2025, 03:41 PM
0
votes
1
answers
906
views
Could not Access File $libdir/vector No Such File or Directory
I am not a DBA. I am trying to test and experiment.
I am on a Windows 11 machine and using PgAdmin. I have PostgreSQL 15 installed.
When I run `CREATE EXTENSION vector`, I get the following error message:
Could not access file "$libdir/vector": No such file or directory
How can I find out where "$libdir" is pointing to?
**Edit #1:**
Here is what I used to install PGVector:
https://github.com/pgvector/pgvector#windows
**Edit #2:**
Based on comments below, I thought `$libdir` was being set by PGRoot, but when I added a `vector` directory to the
`C:\Program Files\PostgreSQL\15`
directory, the same error message came up.
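To answer the literal question of where `$libdir` points: the server reports its compiled-in directories through the `pg_config` system view (reading it may require elevated privileges), or via the `pg_config --pkglibdir` command-line tool. `vector.dll` must land in the library directory and `vector.control` under the extension directory:

```sql
-- $libdir resolves to PKGLIBDIR; vector.dll belongs here.
SELECT setting FROM pg_config WHERE name = 'PKGLIBDIR';

-- vector.control and vector--*.sql belong under SHAREDIR\extension.
SELECT setting FROM pg_config WHERE name = 'SHAREDIR';
```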
Chicken Sandwich No Pickles
(117 rep)
Nov 21, 2024, 05:42 PM
• Last activity: Mar 29, 2025, 06:29 AM
2
votes
2
answers
226
views
(non-breaking space) is not considered whitespace by Postgres?
String-functions in Postgres do not recognize non-breaking-space as whitespace, neither when trimming nor when using regular expressions:
~~~pgsql
select 'x' || test || 'x'
, 'x' || trim(test) || 'x'
, 'x' || regexp_replace(test, '\s+', '') || 'x'
from (values
(''),
(' '),
(' y '),
('s s s s')
) as foo(test)
~~~
(Not sure if the non-breaking space survives in the above code, but the last two rows contain nbsp.)
Onecompiler SQL demonstration
Is it a Postgres thing not to handle this, or is it a bug? I know of `chr(160)` for nbsp, but would prefer a general-purpose way to strip all whitespace.
The collation in use is en_US.utf8.
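Postgres regular expressions follow POSIX semantics, and `\s` covers only the ASCII whitespace set, so U+00A0 (non-breaking space) has to be named explicitly. A sketch of one way to strip it together with ordinary whitespace, using the ARE `\uXXXX` escape:

```sql
-- \u00a0 is the ARE escape for the non-breaking space;
-- the 'g' flag replaces every occurrence, not just the first.
SELECT regexp_replace(test, '[\s\u00a0]+', '', 'g')
FROM (VALUES (' y' || chr(160) || 'z ')) AS foo(test);
```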
Rune
(123 rep)
Feb 27, 2025, 11:57 AM
• Last activity: Mar 5, 2025, 02:00 PM
1
votes
1
answers
207
views
PostgreSQL 15 Restore Failing Due to Incorrect UUID Handling in Dump
I'm facing an issue while restoring a PostgreSQL 15 database dump. During the restore process, I encounter the following errors:
ERROR: trailing junk after numeric literal at or near "..."
ERROR: invalid command \N
ERROR: invalid command \N
.
.
.
ERROR: out of memory
The error occurs after processing the UUIDs in the dump file. It seems that the UUIDs are being interpreted incorrectly, possibly as numeric literals, which causes subsequent issues with invalid commands like \N, leading to an out of memory error.
Here is the command I'm using to create the dump:
pg_dump -h -p 5432 -U > new.dump
And the restore command:
psql postgresql://:@/ < new.dump
Could anyone advise on the best way to create a dump file that correctly handles UUIDs during the restore process to avoid these issues? Specifically, I want to ensure UUIDs are treated as UUIDs and not interpreted as numeric literals, which leads to the invalid commands and out of memory error.
Any help or best practices would be greatly appreciated!
Thanks!
**Additional info:**
I am restoring to a PostgreSQL 15 instance.
OS and version: Debian 12
pg_dump -V returns :
pg_dump (Postgresql) 16.2
Luna
(11 rep)
Feb 27, 2025, 07:30 PM
• Last activity: Feb 28, 2025, 08:00 PM
0
votes
1
answers
608
views
How to keep Image data in file system with Postgres database?
I am wondering whether there is any way to store image-type (binary) data with PostgreSQL so that it resides in the file system rather than directly in table columns. I don't want to store the path link as text in the database; the column should still behave as binary-type data, via any kind of 3rd-party open-source technology or anything else.
I am using postgresql version 15.
Saadman Karim
(21 rep)
Mar 2, 2023, 06:19 AM
• Last activity: Feb 22, 2025, 10:09 AM
2
votes
2
answers
831
views
Column used in the publication WHERE expression is not part of the replica identity
I'm trying to create a logical replication setup where I have a single large database with smaller read replicas branching off, filtered on a sub id.
For example, if I wanted a subset of items pushed to read replicas for a specific customer, I might have a table on the replication node defined by:
CREATE TABLE public.item (
id text PRIMARY KEY,
data text,
customer_id integer
);
I may or may not have `sub_id` on the subscription node (not a deal-breaker detail). On the replication node, I can add data to the table as I please **until** I create the PUBLICATION like so:
CREATE PUBLICATION customer_items FOR TABLE public.item (
id,
data
) WHERE (customer_id = 3);
At this point, I try something like
UPDATE public.item SET
customer_id = 3 WHERE
id = 'GLAMDRING';
I get an `SQL state: 42P10` error:
ERROR: cannot update table "item"
DETAIL: Column used in the publication WHERE expression is not part of the replica identity.
STATEMENT: UPDATE public.item SET
customer_id = $1::integer WHERE
id = 'GLAMDRING';
The issue here looks similar. I've tried recreating the table to remove any `ALTER TABLE` statements on `id`, as suggested there, but I have the same result.
Most examples I can find that use this logical replication function never make use of the WHERE clause. I'm thinking I've either misunderstood the meaning behind this feature, or I've found a bug/limitation.
**Please note:** I am aware that my example seems trivial and should be solved some other way. This is a simplified version of my problem for the sake of brevity.
-----
**Update**
Trying to use a unique index, as suggested, almost got me there.
CREATE UNIQUE INDEX id_customerid_idx ON public.item ((customer_id || id));
But when I try creating a useful PUBLICATION..
CREATE PUBLICATION customer_items FOR TABLE public.item (id, data) WHERE (customer_id || id LIKE '3%');
New error:
ERROR: invalid publication WHERE expression
SQL state: 0A000
Detail: User-defined or built-in mutable functions are not allowed.
Character: 377
Maybe my use-case is a little wild or I need to rethink my application logic to work with these constraints. Am I missing an obvious work around here?
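The error message itself names the constraint: every column used in the publication's WHERE clause must be part of the table's replica identity (for published UPDATE/DELETE). A sketch of two ways to satisfy that for the original single-column filter; `REPLICA IDENTITY FULL` is the simpler but heavier option, since it logs the entire old row:

```sql
-- Option 1: log all old-row columns, so customer_id is always available.
ALTER TABLE public.item REPLICA IDENTITY FULL;

-- Option 2: a unique index covering the key and the filter column,
-- then use it as the replica identity (its columns must be NOT NULL).
ALTER TABLE public.item ALTER COLUMN customer_id SET NOT NULL;
CREATE UNIQUE INDEX item_id_customer_idx ON public.item (id, customer_id);
ALTER TABLE public.item REPLICA IDENTITY USING INDEX item_id_customer_idx;
```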
goldfishalpha
(123 rep)
Jan 15, 2024, 03:23 PM
• Last activity: Feb 6, 2025, 05:26 PM
1
votes
1
answers
96
views
Replay lag on replica even after setting max_standby_streaming_delay
I have a 1-master, 5-replica PostgreSQL cluster configured as hot standby using streaming replication in synchronous mode.
I've started to see 10-minute replay delays in one of the sync replicas, while normally the delay is on the order of milliseconds.
The only difference I see from `pg_stat_replication` is `replay_lag`, which I would expect to be caused by a buffer pin. But I've set `max_standby_streaming_delay` and `max_standby_archive_delay` to 60 sec. Also, `hot_standby_feedback` is on. So I'd expect any buffer pin to kill the read queries on the standby within the minute, and the replay lag to stay below a minute.
The only log lines I could see when the event happens are related to a query being cancelled:
/var/log/postgresql/postgresql-2025-01-27_170445.log:2025-01-27 17:49:42 UTC LOG: recovery restart point at 4ABC/1A7DEF98
/var/log/postgresql/postgresql-2025-01-27_174958.log:2025-01-27 17:54:41 UTC LOG: recovery restart point at 4ABC/34D14738
/var/log/postgresql/postgresql-2025-01-27_174958.log:2025-01-27 17:58:49 UTC ERROR: canceling statement due to conflict with recovery
/var/log/postgresql/postgresql-2025-01-27_174958.log:2025-01-27 17:58:49 UTC ERROR: canceling statement due to conflict with recovery
/var/log/postgresql/postgresql-2025-01-27_174958.log:2025-01-27 17:58:49 UTC ERROR: canceling statement due to conflict with recovery
/var/log/postgresql/postgresql-2025-01-27_174958.log:2025-01-27 17:58:49 UTC ERROR: canceling statement due to conflict with recovery
/var/log/postgresql/postgresql-2025-01-27_174958.log:2025-01-27 17:58:49 UTC ERROR: canceling statement due to conflict with recovery
/var/log/postgresql/postgresql-2025-01-27_174958.log:2025-01-27 17:58:49 UTC FATAL: terminating connection due to conflict with recovery
/var/log/postgresql/postgresql-2025-01-27_174958.log:2025-01-27 17:59:41 UTC LOG: recovery restart point at 4ABC/51921288
Also, I'm not sure whether this is related to the fact that recovery restart points are logged every ~5 minutes.
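For reference, the per-standby lag columns that separate network, disk, and apply delay can be read on the primary; a large `replay_lag` while `write_lag` and `flush_lag` stay small would confirm the standby receives WAL but is slow applying it:

```sql
SELECT application_name, sync_state,
       write_lag,   -- time until WAL reaches the standby's OS
       flush_lag,   -- time until WAL reaches the standby's disk
       replay_lag   -- time until changes are visible to standby queries
FROM pg_stat_replication;
```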
Nico Gatti
Jan 27, 2025, 06:03 PM
• Last activity: Jan 28, 2025, 05:04 AM
0
votes
1
answers
134
views
My postgres database startup has been running for almost 2 days+
I messed up my pgsql database by mistakenly deleting files in `pg_wal`; it resulted in a non-recoverable state (with no backups to restore from), so I decided to run `pg_resetwal.exe -f` (accepting the data loss).
After `resetwal`, I ran `postgres -D /u01/pgsql/15/data`, and from Wednesday (12/18/2024) up to this date (12/20/2024) the startup is still running. Is this normal? How long does it usually take to finish starting up? Or should I terminate the running PID for the startup and rebuild the database from scratch?
Sample logs as of date:
2024-12-20 08:06:46.367 PST LOCATION: LogCheckpointStart, xlog.c:6099
2024-12-20 08:06:46.477 PST LOG: 00000: checkpoint complete: wrote 2 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.102 s, sync=0.001 s, total=0.110 s; sync files=2, longest=0.001 s, average=0.001 s; distance=16380 kB, estimate=16383 kB
2024-12-20 08:06:46.477 PST LOCATION: LogCheckpointEnd, xlog.c:6188
Dave B
(1 rep)
Dec 20, 2024, 12:11 AM
• Last activity: Dec 20, 2024, 02:52 PM
0
votes
0
answers
110
views
pgAudit Install Not Working Because of Makefile (Postgres 15)
Hi, my Makefile supposedly doesn't have an install target. How do I fix this?
I don't understand how it even worked for others. Also, I can't use an updated version of Postgres due to restrictions at my job, for anyone wondering.
Also, here's the instructions I followed:
Clone the pgAudit extension:
git clone https://github.com/pgaudit/pgaudit.git
Change to pgAudit directory:
cd pgaudit
Check out the `REL_15_STABLE` branch (note that the stable branch may not exist for unreleased versions of PostgreSQL):
git checkout REL_15_STABLE
Build and install pgAudit:
make install USE_PGXS=1 PG_CONFIG=/usr/pgsql-15/bin/pg_config
-----
This is the error:
process_begin: CreateProcess(NULL, sh.exe -c "/usr/pgsql-15/bin/pg_config --pgxs", ...) failed.
Makefile:15: pipe: No error
make: *** No rule to make target 'install'. Stop.
This is my Makefile, which I didn't change from what I found at https://github.com/pgaudit/pgaudit/releases/tag/1.7.0
# contrib/pg_audit/Makefile
MODULE_big = pgaudit
OBJS = pgaudit.o $(WIN32RES)
EXTENSION = pgaudit
DATA = pgaudit--1.7.sql
PGFILEDESC = "pgAudit - An audit logging extension for PostgreSQL"
REGRESS = pgaudit
REGRESS_OPTS = --temp-config=$(top_srcdir)/contrib/pgaudit/pgaudit.conf
ifdef USE_PGXS
PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
else
subdir = contrib/pgaudit
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif
EXTRA_INSTALL += contrib/pg_stat_statements
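The mechanics behind the error, for what it's worth: `PGXS := $(shell $(PG_CONFIG) --pgxs)` tries to execute `pg_config` through `sh.exe`, and the `CreateProcess(...) failed` line shows that this execution failed. As a result `PGXS` expands to nothing, `include $(PGXS)` pulls in no rules, and make ends up with no `install` target. A quick check (a sketch, using the path from the question):

```shell
# If this fails or prints nothing, the Makefile's PGXS include is empty
# and the install target is never defined. On Windows, make also needs a
# POSIX sh (e.g. from MSYS2) on PATH for $(shell ...) to work.
/usr/pgsql-15/bin/pg_config --pgxs
```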
MDBA95
(1 rep)
Dec 6, 2024, 11:04 PM
• Last activity: Dec 7, 2024, 01:16 AM
0
votes
1
answers
122
views
How to reset stats in "pg_stats_subscription_stats"?
I have some errors produced while making changes to some logically replicated tables. Now I want to reset the errors logged in the `pg_stat_subscription_stats` table in the `apply_error_count` column, but I can't find *anything* anywhere telling me how to do it. I've tried: - `SELECT pg_stat_stateme...
I have some errors produced while making changes to some logically replicated tables. Now I want to reset the errors logged in the `pg_stat_subscription_stats` view, in the `apply_error_count` column, but I can't find *anything* anywhere telling me how to do it.
I've tried:
- `SELECT pg_stat_statements_reset()`
- `SELECT pg_stat_reset()`
But these didn't reset this field ...
How do I reset this field?
For reference, the output below combines `pg_stat_subscription` and `pg_stat_subscription_stats` and shows the `apply_error_count` values:

Applez00800
(145 rep)
Oct 4, 2024, 07:44 AM
• Last activity: Dec 1, 2024, 09:41 PM
39
votes
3
answers
72443
views
Collation version mismatch
I replicated a database, it's working fine but when I enter the replication database it displays the following message: > The database was created using collation version 2.31, but the operating system provides version 2.35. > > Rebuild all objects in this database that use the default collation and...
I replicated a database, it's working fine but when I enter the replication database it displays the following message:
> The database was created using collation version 2.31, but the operating system provides version 2.35.
>
> Rebuild all objects in this database that use the default collation and run ALTER DATABASE template1 REFRESH COLLATION VERSION, or build PostgreSQL with the right library version.
PgPool is also not working with this replication database; it only works when I remove it.
Pgpool gives the following error when trying to login:
child pid 5286: ERROR: unable to read message kind
child pid 5286: DETAIL: kind does not match between main(53) slot (4e)
The same message also appears in the postgresql logs.
How do I rebuild my database?
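A sketch of what the hint is asking for, for reference (not from the original post; `mydb` is a hypothetical database name, and `REINDEX` takes locks, so schedule it accordingly):

```sql
-- Connected to the affected database: rebuild objects that depend on the
-- default collation (indexes on text columns are the usual culprits),
-- then record the new collation version:
REINDEX DATABASE mydb;
ALTER DATABASE mydb REFRESH COLLATION VERSION;

-- template1 carries its own version stamp, as the warning mentions:
ALTER DATABASE template1 REFRESH COLLATION VERSION;
```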
Miqueias Kevison
(750 rep)
Mar 11, 2023, 12:49 PM
• Last activity: Nov 1, 2024, 04:55 PM
0
votes
0
answers
208
views
How to set up Postgresql pglogical replication on the same database instance
I am trying to assess [pglogical][1] replication usage instead of the built-in one (particularly due to being able to filter by operation). I am using **a single** Azure Postgresql flexible server database instance where I have created two databases (main and replica). I have done the following setu...
I am trying to assess pglogical replication usage instead of the built-in one (particularly due to being able to filter by operation). I am using **a single** Azure Postgresql flexible server database instance where I have created two databases (main and replica).
I have done the following setup:
### Main database
In the logs I could only find worker-related information such as the following (not sure if it is related to the replication or not):
> pg_qs: qs bgworker: worker woke up and performed operations, with
> query data store as on pg_qs: qs bgworker: worker finished
> successfully in 85.484192 ms, with query store as on,
> default_transaction_read_only as off and having_schema as true manager
> worker at slot 1 generation 2 detaching cleanly
SELECT pglogical.create_node(
node_name := 'main_node',
dsn := 'host=localhost port=5432 dbname=main user=x password=***'
);
-- check that the node is there
SELECT * FROM pglogical.node;
SELECT pglogical.create_replication_set(
set_name := 'custom_set',
replicate_insert := true,
replicate_update := true,
replicate_delete := false
);
-- start with a single replicated table
SELECT pglogical.replication_set_add_table('custom_set', 'public."Message"');
-- check the configuration
select * from pglogical.tables;
select * from pglogical.replication_set;
select * from pglogical.replication_set_table;
-- or better with
SELECT rs.set_name, n.nspname AS schema_name, c.relname AS table_name
FROM pglogical.replication_set_table rst
JOIN pg_class c ON rst.set_reloid = c.oid
JOIN pg_namespace n ON c.relnamespace = n.oid
JOIN pglogical.replication_set rs ON rst.set_id = rs.set_id
WHERE rs.set_name = 'custom_set';
I have about 10K rows in `public."Message"` on the main database.
### Replica
SELECT pglogical.create_node(
node_name := 'replica_node',
dsn := 'host=localhost port=5432 dbname=replica user=x password=*'
);
SELECT pglogical.create_subscription(
subscription_name := 'main_to_replica_subscription',
provider_dsn := 'host=localhost port=5432 dbname=main user=x password=*',
replication_sets := ARRAY['custom_set']
);
-- check the objects
SELECT * FROM pglogical.subscription;
-- this shows 'streaming'
SELECT *
FROM pglogical.show_subscription_status();
The initial data is replicated fine. However, when inserting more data in the main table things begin to go astray:
- the replication status is sometimes `catching up` or `replicating` and does not seem to end (come back to `streaming`).
- the replication speed seems to be very slow (I expect sub-100ms for such low data volumes)
- the following query indicates a growing number of bytes to be replicated
SELECT application_name,
state,
pg_current_wal_lsn() AS current_wal_lsn,
replay_lsn,
pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS bytes_pending
FROM pg_stat_replication
- database connections seem to be randomly dropped (e.g. pgAdmin begins reporting that the connection to the server was lost). This issue disappears if the subscription is disabled.
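One way to see where things stall (not from the original post; run on the provider database): a slot that is `active` but whose `confirmed_flush_lsn` stops advancing points at a stuck or restarting apply worker rather than slow decoding. A sketch using the standard catalog view:

```sql
SELECT slot_name, active, restart_lsn, confirmed_flush_lsn,
       pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS flush_lag_bytes
FROM pg_replication_slots
WHERE slot_name LIKE 'pgl_%';   -- pglogical's slots use this prefix
```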
**I assume there is something wrong with the setup when done on the same database instance. Is there any workaround?**
`select version()` reports: PostgreSQL 15.7 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit.
**Notes**: I know from setting up the builtin PostgreSQL replication, that using the same instance required an explicit logical slot to avoid the replication getting stuck:
SELECT pg_create_logical_replication_slot('custom_slot', 'pgoutput');
However, I cannot find a similar workaround for the `pglogical` replication.
On a very similar setup (the same database instance), the builtin replication worked with the workaround mentioned above.
### Relevant PostgreSQL server logs
The only error related to pglogical I could find in the server logs looks like this:
> ...-LOG: starting pglogical database manager for database replica
>
> ...-LOG: received replication command: CREATE_REPLICATION_SLOT
> \"pgl_localdev3d6043b_main_node_main_to_0246a7b\" LOGICAL
> pglogical_output","statement": "CREATE_REPLICATION_SLOT
> \"pgl_localdev3d6043b_main_node_main_to_0246a7b\" LOGICAL
> pglogical_output
>
> ...-LOG: logical decoding found consistent point at
> 314/E005A1F0","detail": "There are no running
> transactions.","statement": "CREATE_REPLICATION_SLOT
> \"pgl_localdev3d6043b_main_node_main_to_0246a7b\" LOGICAL
> pglogical_output
>
> -LOG: exported logical decoding snapshot: \"0000001B-000006BE-1\" with 0 transaction IDs","statement": "CREATE_REPLICATION_SLOT
> \"pgl_localdev3d6043b_main_node_main_to_0246a7b\" LOGICAL
> pglogical_output
>
> -LOG: received replication
> command: START_REPLICATION SLOT
> \"pgl_localdev3d6043b_main_node_main_to_0246a7b\" LOGICAL 314/E005A228
> (expected_encoding 'UTF8', min_proto_version '1', max_proto_version
> '1', startup_params_format '1', \"binary.want_internal_basetypes\"
> '1', \"binary.want_binary_basetypes\" '1',
> \"binary.basetypes_major_version\" '1500', \"binary.sizeof_datum\"
> '8', \"binary.sizeof_int\" '4', \"binary.sizeof_long\" '8',
> \"binary.bigendian\" '0', \"binary.float4_byval\" '0',
> \"binary.float8_byval\" '1', \"binary.integer_datetimes\" '0',
> \"hooks.setup_function\" 'pglogical.pglogical_hooks_setup',
> \"pglogical.forward_origins\" '\"all\"',
> \"pglogical.replication_set_names\" 'custom_set',
> \"relmeta_cache_size\" '-1', pg_version '150007', pglogical_version
> '2.4.2', pglogical_version_num '20402', pglogical_apply_pid
> '10812')","statement": "START_REPLICATION SLOT
> \"pgl_localdev3d6043b_main_node_main_to_0246a7b\" LOGICAL 314/E005A228
> (expected_encoding 'UTF8', min_proto_version '1', max_proto_version
> '1', startup_params_format '1', \"binary.want_internal_basetypes\"
> '1', \"binary.want_binary_basetypes\" '1',
> \"binary.basetypes_major_version\" '1500', \"binary.sizeof_datum\"
> '8', \"binary.sizeof_int\" '4', \"binary.sizeof_long\" '8',
> \"binary.bigendian\" '0', \"binary.float4_byval\" '0',
> \"binary.float8_byval\" '1', \"binary.integer_datetimes\" '0',
> \"hooks.setup_function\" 'pglogical.pglogical_hooks_setup',
> \"pglogical.forward_origins\" '\"all\"',
> \"pglogical.replication_set_names\" 'custom_set',
> \"relmeta_cache_size\" '-1', pg_version '150007', pglogical_version
> '2.4.2', pglogical_version_num '20402', pglogical_apply_pid '10812')
>
>
> -LOG: starting logical decoding for slot \"pgl_localdev3d6043b_main_node_main_to_0246a7b\"","detail":
> "Streaming transactions committing after 314/E005A228, reading WAL
> from 314/E005A1F0.","statement": "START_REPLICATION SLOT
> \"pgl_localdev3d6043b_main_node_main_to_0246a7b\" LOGICAL 314/E005A228
> (expected_encoding 'UTF8', min_proto_version '1', max_proto_version
> '1', startup_params_format '1', \"binary.want_internal_basetypes\"
> '1', \"binary.want_binary_basetypes\" '1',
> \"binary.basetypes_major_version\" '1500', \"binary.sizeof_datum\"
> '8', \"binary.sizeof_int\" '4', \"binary.sizeof_long\" '8',
> \"binary.bigendian\" '0', \"binary.float4_byval\" '0',
> \"binary.float8_byval\" '1', \"binary.integer_datetimes\" '0',
> \"hooks.setup_function\" 'pglogical.pglogical_hooks_setup',
> \"pglogical.forward_origins\" '\"all\"',
> \"pglogical.replication_set_names\" 'custom_set',
> \"relmeta_cache_size\" '-1', pg_version '150007', pglogical_version
> '2.4.2', pglogical_version_num '20402', pglogical_apply_pid '10812')
>
>
> ...-LOG: connection received: host=127.0.0.1 port=43598 2024-10-16
> 07:54:21 UTC-670f712d.3189-LOG: connection authenticated:
> identity=\"CN=azuresu.b32b9622594c.database.azure.com\" method=cert
> (/datadrive/pg/data/pg_hba.conf:9) ...-LOG: connection authorized:
> user=azuresu database=postgres application_name=psql SSL enabled
> (protocol=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384, bits=256)
> ...-LOG:
> disconnection: session time: 0:00:00.165 user=azuresu
> database=postgres host=127.0.0.1 port=43598
> ...-LOG: disconnection:
> session time: 0:00:20.997 user=azuresu database=postgres
> host=127.0.0.1 port=51290
> ...-LOG: connection received:
> host=127.0.0.1 port=39900
> ...-LOG: connection authenticated:
> identity=\"CN=azuresu.b32b9622594c.database.azure.com\" method=cert
> (/datadrive/pg/data/pg_hba.conf:9)
> ...-LOG: connection authorized:
> user=azuresu database=postgres SSL enabled (protocol=TLSv1.3,
> cipher=TLS_AES_256_GCM_SHA384, bits=256) 2024-10-16 07:54:30
> UTC-670f7122.313b-LOG: disconnection: session time: 0:00:20.044
> user=azuresu database=azure_maintenance host=127.0.0.1 port=51300
> ...-LOG: could not send data to client: Connection reset by
> peer","context": "slot
> \"pgl_localdev3d6043b_main_node_main_to_0246a7b\", output plugin
> \"pglogical_output\", in the change callback, associated LSN
> 314/E43B3278","statement": "START_REPLICATION SLOT
> \"pgl_localdev3d6043b_main_node_main_to_0246a7b\" LOGICAL 314/E005A228
> (expected_encoding 'UTF8', min_proto_version '1', max_proto_version
> '1', startup_params_format '1', \"binary.want_internal_basetypes\"
> '1', \"binary.want_binary_basetypes\" '1',
> \"binary.basetypes_major_version\" '1500', \"binary.sizeof_datum\"
> '8', \"binary.sizeof_int\" '4', \"binary.sizeof_long\" '8',
> \"binary.bigendian\" '0', \"binary.float4_byval\" '0',
> \"binary.float8_byval\" '1', \"binary.integer_datetimes\" '0',
> \"hooks.setup_function\" 'pglogical.pglogical_hooks_setup',
> \"pglogical.forward_origins\" '\"all\"',
> \"pglogical.replication_set_names\" 'custom_set',
> \"relmeta_cache_size\" '-1', pg_version '150007', pglogical_version
> '2.4.2', pglogical_version_num '20402', pglogical_apply_pid '13850')
I have several such errors. I assume that they might be related to my disabling the subscription (the first step to cleanup the replication).
### Workers related information
The Azure configuration related to the workers is as follows:

### Message table schema and indexes
CREATE TABLE IF NOT EXISTS public."Message"
(
"Id" uuid NOT NULL,
"RequestRef" uuid NOT NULL,
"RefId" uuid,
"ReasonCode" citext COLLATE pg_catalog."default" NOT NULL,
"StatusCode" citext COLLATE pg_catalog."default" NOT NULL,
"IsChargeable" boolean NOT NULL,
"DestinationIdentifier" character varying(32) COLLATE pg_catalog."default" NOT NULL,
"DestCode" character varying(2) COLLATE pg_catalog."default" NOT NULL,
"Sender" citext COLLATE pg_catalog."default" NOT NULL,
"SenderTypeCode" citext COLLATE pg_catalog."default" NOT NULL,
"Sector" citext COLLATE pg_catalog."default",
"Vars" citext COLLATE pg_catalog."default",
"Network" citext COLLATE pg_catalog."default",
"IsPiiRemoved" boolean NOT NULL,
"SentTimestamp" timestamp with time zone,
"CreateTimestamp" timestamp with time zone NOT NULL,
"UpdatedAt" timestamp with time zone NOT NULL,
"OrderNo" bigint NOT NULL GENERATED ALWAYS AS IDENTITY ( INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 9223372036854775807 CACHE 1 ),
"NormalizedDestinationIdentifier" character varying(32) COLLATE pg_catalog."default" NOT NULL DEFAULT ''::character varying,
"MsgDirection" character varying(16) COLLATE pg_catalog."default" NOT NULL DEFAULT 'Unknown'::character varying,
"CharSet" citext COLLATE pg_catalog."default" NOT NULL DEFAULT ''::citext,
"ConvId" uuid,
"IsFixedLine" boolean NOT NULL DEFAULT false,
"IsViewed" boolean,
"ViewedBy" uuid,
CONSTRAINT "PK_Message" PRIMARY KEY ("Id")
)
TABLESPACE pg_default;
-- Index: IX_Message_ConvId
-- DROP INDEX IF EXISTS public."IX_Message_ConvId";
CREATE INDEX IF NOT EXISTS "IX_Message_ConvId"
ON public."Message" USING btree
("ConvId" ASC NULLS LAST)
TABLESPACE pg_default;
-- Index: IX_Message_NormalizedDestinationIdentifier_Sender
-- DROP INDEX IF EXISTS public."IX_Message_NormalizedDestinationIdentifier_Sender";
CREATE INDEX IF NOT EXISTS "IX_Message_NormalizedDestinationIdentifier_Sender"
ON public."Message" USING btree
("NormalizedDestinationIdentifier" COLLATE pg_catalog."default" ASC NULLS LAST, "Sender" COLLATE pg_catalog."default" ASC NULLS LAST)
TABLESPACE pg_default;
-- Index: IX_Message_RequestRef
-- DROP INDEX IF EXISTS public."IX_Message_RequestRef";
CREATE INDEX IF NOT EXISTS "IX_Message_RequestRef"
ON public."Message" USING btree
("RequestRef" ASC NULLS LAST)
TABLESPACE pg_default;
Besides DataDog integration, I am measuring the replication delay for inserts using the following two Message table columns in the replica:
"ReplicatedTimestamp" timestamp with time zone NOT NULL DEFAULT (now() AT TIME ZONE 'UTC'::text),
"InsertReplicationDelay" bigint NOT NULL GENERATED ALWAYS AS (((EXTRACT(epoch FROM ("ReplicatedTimestamp" - "CreateTimestamp")) * (1000)::numeric))::bigint) STORED,
For the built-in replication, at about 150 inserts/second in the Message table, 99.9% of the values in InsertReplicationDelay are less than 80ms (databases on the same database instance for now).
Alexei
(1191 rep)
Oct 16, 2024, 09:52 AM
• Last activity: Oct 24, 2024, 09:10 AM
0
votes
0
answers
94
views
Unable to execute psql with postgres user
After a server failure I'm not able to execute psql like `psql -U postgres` nor like `sudo -u postgres psql`. In both cases I'm asked for the user password despite I've modified the `pg_hba.conf` files to trust all local connections and I've tried virtually all proposed solutions along the web. Woul...
After a server failure I'm not able to execute psql with `psql -U postgres` nor with `sudo -u postgres psql`. In both cases I'm asked for the user password, despite having modified the `pg_hba.conf` files to trust all local connections; I've tried virtually all proposed solutions around the web. Would it be possible to delete the postgres user and re-create it without losing the data? That's assuming the issue might be that the user got corrupted during the system failure.
## Detailed info
* Logs at `/var/lib/pgsql/15/data/log` are all empty.
* Service status looks like:

* `/var/lib/pgsql/15/data/pg_hba.conf` looks like this, after some changes to try to log in without a password:

* `/var/lib/pgsql/data/pg_hba.conf` looks like:

La Vora Dev
(1 rep)
Oct 15, 2024, 04:12 PM
• Last activity: Oct 16, 2024, 04:39 PM
2
votes
1
answers
420
views
UNNEST vs ANY()
When selecting a number of records from a Postgres table based on a list of ids passed as a parameter, what are the benefits and drawbacks of ```lang-sql SELECT id, FROM tbl INNER JOIN UNNEST($1) AS id USING (id) ``` versus ```lang-sql SELECT id, FROM tbl WHERE id = ANY($1) ``` and in which cases on...
When selecting a number of records from a Postgres table based on a list of ids passed as a parameter, what are the benefits and drawbacks of

```sql
SELECT id,
FROM tbl
INNER JOIN UNNEST($1) AS id USING (id)
```

versus

```sql
SELECT id,
FROM tbl
WHERE id = ANY($1)
```

and in which cases one should be preferred to the other?
DB: Postgres 14 and above.
**Note:** This is not a duplicate of https://dba.stackexchange.com/q/260856/193683 since the list is not constant.
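A practical way to compare the two (not from the original post; a sketch assuming `tbl` has an indexed `id` column): `id = ANY(array)` is an ordinary scan qualifier that can use the index directly, while `UNNEST` is a set-returning function whose row estimate may fall back to a flat default when the array's length is not known to the planner, which can change the chosen join strategy on larger tables.

```sql
EXPLAIN ANALYZE
SELECT id FROM tbl WHERE id = ANY('{1,2,3}'::int[]);

EXPLAIN ANALYZE
SELECT id FROM tbl
INNER JOIN UNNEST('{1,2,3}'::int[]) AS u(id) USING (id);
```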
Alex O
(123 rep)
Oct 4, 2024, 07:54 PM
• Last activity: Oct 4, 2024, 09:20 PM
0
votes
1
answers
299
views
Barman using old pg_start_backup on Postgresql-15
I'm trying to backup my server using barman but it's taking error on backup here's the check root@barman:~# barman check My_server Server Postgres_UniContent: PostgreSQL: OK superuser or standard user with backup privileges: OK PostgreSQL streaming: OK wal_level: OK replication slot: OK directories:...
I'm trying to back up my server using barman, but it's throwing an error on backup. Here's the check:
root@barman:~# barman check My_server
Server Postgres_UniContent:
PostgreSQL: OK
superuser or standard user with backup privileges: OK
PostgreSQL streaming: OK
wal_level: OK
replication slot: OK
directories: OK
retention policy settings: OK
backup maximum age: OK (no last_backup_maximum_age provided)
backup minimum size: OK (0 B)
wal maximum age: OK (no last_wal_maximum_age provided)
wal size: OK (0 B)
compression settings: OK
failed backups: OK (there are 0 failed backups)
minimum redundancy requirements: OK (have 0 backups, expected at least 0)
ssh: OK (PostgreSQL server)
systemid coherence: OK
pg_receivexlog: OK
pg_receivexlog compatible: OK
receive-wal running: OK
archive_mode: OK
archive_command: OK
continuous archiving: OK
archiver errors: OK
As you can see, everything is okay, but then when I start the backup:
root@barman:~# barman backup My_server
Starting backup using rsync-concurrent method for server My_server in /barman/My_server/base/20241002T184311
ERROR: Backup failed issuing start backup command.
DETAILS: pg_start_backup command: function pg_start_backup(unknown, boolean, boolean) does not exist
LINE 1: ...heckpoint()) AS timeline, now() AS timestamp FROM pg_start_b...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
So, as I see it, it's using `pg_start_backup(unknown, boolean, boolean)`, but in PostgreSQL 15 the function was renamed to `pg_backup_start()`. But I already granted the functions to the barman user:
grantee | function_schema | routine_name | privilege_type
---------+-----------------+-------------------------+----------------
barman | postgres | pg_backup_start | EXECUTE
barman | postgres | pg_backup_stop | EXECUTE
barman | postgres | pg_create_restore_point | EXECUTE
barman | postgres | pg_switch_wal | EXECUTE
(4 rows)
postgres=# GRANT EXECUTE ON FUNCTION pg_backup_start(text, boolean) to barman;
GRANT
postgres=# GRANT EXECUTE ON FUNCTION pg_backup_stop(boolean) to barman;
GRANT
postgres=# GRANT pg_checkpoint TO barman;
GRANT ROLE
Here is the diagnose information:
root@barman:/barman# barman diagnose
{
"global": {
"config": {
"barman_home": "/barman",
"barman_user": "barman",
"configuration_files_directory": "/etc/barman.d",
"errors_list": [],
"log_file": "/barman/barman.log",
"log_level": "DEBUG"
},
"system_info": {
"barman_ver": "2.18",
"kernel_ver": "Linux barman 5.15.0-60-generic #66-Ubuntu SMP Fri Jan 20 14:29:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux",
"python_ver": "",
"release": "Distributor ID:\tUbuntu\nDescription:\tUbuntu 22.04.5 LTS\nRelease:\t22.04\nCodename:\tjammy",
"rsync_ver": "rsync version 3.2.7 protocol version 31",
"ssh_ver": "",
"timestamp": "Wed Oct 2 19:19:08 2024"
}
},
"servers": {
"My_server": {
"backups": {
"20241002T184311": {
"backup_id": "20241002T184311",
"backup_label": null,
"begin_offset": null,
"begin_time": null,
"begin_wal": null,
"begin_xlog": null,
"config_file": "/etc/postgresql/15/main/postgresql.conf",
"copy_stats": null,
"deduplicated_size": null,
"end_offset": null,
"end_time": null,
"end_wal": null,
"end_xlog": null,
"error": "failure issuing start backup command (pg_start_backup command: function pg_start_backup(unknown, boolean, boolean) does not exist)",
"hba_file": "/etc/postgresql/15/main/pg_hba.conf",
"ident_file": "/etc/postgresql/15/main/pg_ident.conf",
"included_files": null,
"mode": "rsync-concurrent",
"pgdata": "/pg_data/main",
"server_name": "My_server",
"size": null,
"status": "FAILED",
"systemid": "7412892106944724954",
"tablespaces": null,
"timeline": null,
"version": 150008,
"xlog_segment_size": 16777216
}
},
"config": {
"active": true,
"archiver": true,
"archiver_batch_size": 0,
"backup_directory": "/barman/My_server",
"backup_method": "rsync",
"backup_options": "concurrent_backup",
"bandwidth_limit": null,
"barman_home": "/barman",
"barman_lock_directory": "/barman",
"basebackup_retry_sleep": 30,
"basebackup_retry_times": 0,
"basebackups_directory": "/barman/My_server/base",
"check_timeout": 30,
"compression": null,
"conninfo": "host=IP_ADDRESS user=barman dbname=postgres",
"create_slot": "auto",
"custom_compression_filter": null,
"custom_compression_magic": null,
"custom_decompression_filter": null,
"description": "Univision content watch information collecting server",
"disabled": false,
"errors_directory": "/barman/My_server/errors",
"forward_config_path": false,
"immediate_checkpoint": false,
"incoming_wals_directory": "/barman/My_server/incoming",
"last_backup_maximum_age": null,
"last_backup_minimum_size": null,
"last_wal_maximum_age": null,
"max_incoming_wals_queue": null,
"minimum_redundancy": 0,
"msg_list": [],
"name": "My_server",
"network_compression": false,
"parallel_jobs": 2,
"path_prefix": "/usr/lib/postgresql/15/bin",
"post_archive_retry_script": null,
"post_archive_script": null,
"post_backup_retry_script": null,
"post_backup_script": null,
"post_delete_retry_script": null,
"post_delete_script": null,
"post_recovery_retry_script": null,
"post_recovery_script": null,
"post_wal_delete_retry_script": null,
"post_wal_delete_script": null,
"pre_archive_retry_script": null,
"pre_archive_script": null,
"pre_backup_retry_script": null,
"pre_backup_script": null,
"pre_delete_retry_script": null,
"pre_delete_script": null,
"pre_recovery_retry_script": null,
"pre_recovery_script": null,
"pre_wal_delete_retry_script": null,
"pre_wal_delete_script": null,
"primary_ssh_command": null,
"recovery_options": "",
"retention_policy": "window 30 d",
"retention_policy_mode": "auto",
"reuse_backup": null,
"slot_name": "on",
"ssh_command": "ssh postgres@IP_ADDRESS",
"streaming_archiver": true,
"streaming_archiver_batch_size": 0,
"streaming_archiver_name": "barman_receive_wal",
"streaming_backup_name": "barman_streaming_backup",
"streaming_conninfo": "host=IP_ADDRESS user=streaming_barman dbname=postgres",
"streaming_wals_directory": "/barman/My_server/streaming",
"tablespace_bandwidth_limit": null,
"wal_retention_policy": "simple-wal 30 d",
"wals_directory": "/barman/My_server/wals"
},
"status": {
"archive_command": "rsync -a %p barman@BARMAN_IPADD:/barman/My_server/incoming/%f",
"archive_mode": "on",
"archive_timeout": 0,
"archived_count": 6,
"checkpoint_timeout": 300,
"config_file": "/etc/postgresql/15/main/postgresql.conf",
"connection_error": null,
"current_archived_wals_per_second": 3.2541333136528836e-06,
"current_lsn": "A/CD000148",
"current_size": 34695948524.0,
"current_xlog": "000000010000000A000000CD",
"data_checksums": "off",
"data_directory": "/pg_data/main",
"failed_count": 0,
"has_backup_privileges": true,
"hba_file": "/etc/postgresql/15/main/pg_hba.conf",
"hot_standby": "on",
"ident_file": "/etc/postgresql/15/main/pg_ident.conf",
"is_archiving": true,
"is_in_recovery": false,
"is_superuser": true,
"last_archived_time": "Wed Oct 2 18:41:58 2024",
"last_archived_wal": "000000010000000A000000CC",
"last_failed_time": null,
"last_failed_wal": null,
"max_replication_slots": "4",
"max_wal_senders": "4",
"pg_receivexlog_compatible": true,
"pg_receivexlog_installed": true,
"pg_receivexlog_path": "/usr/lib/postgresql/15/bin/pg_receivewal",
"pg_receivexlog_supports_slots": true,
"pg_receivexlog_synchronous": false,
"pg_receivexlog_version": "15.8",
"pgespresso_installed": false,
"postgres_systemid": "7412892106944724954",
"replication_slot": [
"on",
true,
"A/CD000000"
],
"replication_slot_support": true,
"server_txt_version": "15.8",
"stats_reset": "Wed Sep 11 11:09:01 2024",
"streaming": true,
"streaming_supported": true,
"streaming_systemid": "7412892106944724954",
"synchronous_standby_names": [
""
],
"timeline": 1,
"wal_compression": "off",
"wal_keep_size": "0",
"wal_level": "replica",
"xlog_segment_size": 16777216,
"xlogpos": "A/CD000148"
},
"system_info": {
"kernel_ver": "Linux unicontent 5.15.0-119-generic #129-Ubuntu SMP Fri Aug 2 19:25:20 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux",
"python_ver": "",
"release": "Distributor ID:\tUbuntu\nDescription:\tUbuntu 22.04.2 LTS\nRelease:\t22.04\nCodename:\tjammy",
"rsync_ver": "rsync version 3.2.7 protocol version 31",
"ssh_ver": ""
},
"wals": {
"last_archived_wal_per_timeline": {}
}
}
}
}
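Possibly relevant (not from the original post): the diagnose output above reports `"barman_ver": "2.18"`. Barman releases from before PostgreSQL 15 still issue the old `pg_start_backup` call; if I recall correctly, support for the renamed functions arrived in the Barman 3.x line, so granting execute on the new functions cannot help here, since this Barman never calls them. For reference, the rename itself:

```sql
-- PostgreSQL 15 renamed the backup-control functions:
SELECT pg_backup_start('label', fast => true);    -- was pg_start_backup(...)
SELECT pg_backup_stop(wait_for_archive => true);  -- was pg_stop_backup(...)
```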
Please help me to troubleshoot these barman difficulties.
Mendbayar Alzakhgui
(11 rep)
Oct 2, 2024, 11:22 AM
• Last activity: Oct 3, 2024, 06:27 AM