
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

5 votes
2 answers
2354 views
List PostgreSQL extensions per database
I have a Postgres 9.5 server with 30 databases. I would like to find out which database uses which extension(s) in one go, preferably in a query that is usable in pgAdmin. I know that
SELECT * FROM pg_extension
gives me the extensions for the database I am connected to. I also know that
SELECT * FROM pg_available_extensions
lists, well, all available extensions. I also know that
SELECT * FROM pg_database
lists all existing databases. **Question** How can I create a list that includes the database name and the extensions used in each database?
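A sketch of one way to get this in a single query, using dblink to connect to each database in turn (this assumes the dblink extension is available in the database you run it from, and that the connecting role can log in to every database with the same credentials):
CREATE EXTENSION IF NOT EXISTS dblink;

SELECT d.datname AS database_name,
       e.extname AS extension_name
FROM pg_database AS d
CROSS JOIN LATERAL dblink(
         'dbname=' || d.datname,
         'SELECT extname FROM pg_extension'
     ) AS e(extname name)
WHERE d.datallowconn
ORDER BY 1, 2;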
copossum (151 rep)
May 17, 2021, 07:36 AM • Last activity: Aug 1, 2025, 04:08 PM
0 votes
2 answers
33 views
How to correctly dumpall a PostgreSQL cluster when one extension is split into two extensions in the newer version
I have PostgreSQL version 10 with the PostGIS extension. In this old version of PostgreSQL, PostGIS also includes raster support, but in newer versions of PostGIS, raster support is in a separate extension called postgis_raster. Thus, we won't have this line in our dumpall file:
CREATE EXTENSION postgis_raster;
And when I restore it, it tells me it does not recognize the raster type! My file is very big if I do not zip it before storing it on disk, and if I zip it, I won't be able to edit the file to add this extension to the dump manually. I was thinking of doing a pg_dumpall with the --globals-only flag and then dumping each DB one by one using pg_dump. However, I was afraid that I might miss a detail of my DB cluster. Is there a way to ask pg_dumpall to consider that postgis_raster is a separate extension and should be added to the dump file? How can I safely dump and restore my cluster?
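For what it's worth, a rough sketch of the split approach mentioned above (globals once, then one custom-format dump per database; file names are placeholders):
pg_dumpall --globals-only > globals.sql
for db in $(psql -XAt -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
    pg_dump -Fc -f "${db}.dump" "$db"
done
# on the target server: restore globals.sql with psql, run
# CREATE EXTENSION postgis; CREATE EXTENSION postgis_raster; in each new
# database, then pg_restore each ${db}.dump (custom format is also compressed)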
milad (101 rep)
Jul 29, 2025, 03:09 PM • Last activity: Jul 31, 2025, 07:47 AM
0 votes
1 answer
150 views
Regression testing Issue [make installcheck]
### For the regression tests:
After running make installcheck on my_extension I get this error:
echo "+++ regress install-check in  +++" && /Users/spartacus/.pgenv/pgsql-15.0/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress --inputdir=./ --bindir='/Users/spartacus/.pgenv/pgsql-15.0/bin'    --dbname=contrib_regression my_extension--regress.sql
+++ regress install-check in  +++
(using postmaster on Unix socket, default port)
============== dropping database "contrib_regression" ==============
psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: FATAL:  role "spartacus" does not exist
command failed: "/Users/spartacus/.pgenv/pgsql-15.0/bin/psql" -X -c "SET client_min_messages = warning" -c "DROP DATABASE IF EXISTS \"contrib_regression\"" "postgres"
make: *** [installcheck] Error 2
How can I solve this on macOS?
#### The code for the Makefile is:
EXTENSION = my_extension
DATA = my_extension--1.0.sql
REGRESS = my_extension--regress.sql

PG_CONFIG  ?= pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
#### Regression testing SQL file my_extension--regress.sql:
-- regression test script for my_extension

-- Create the my_table table
CREATE TABLE my_table (
  id SERIAL PRIMARY KEY,
  name VARCHAR(100) NOT NULL
);

-- Test the my_function function
SELECT my_function();

-- Verify the data in the my_table table
SELECT * FROM my_table;
I tried setting the environment variable PGUSER (PGUSER=postgres make installcheck) and got this error:
echo "+++ regress install-check in  +++" && /Users/spartacus/.pgenv/pgsql-15.0/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress --inputdir=./ --bindir='/Users/spartacus/.pgenv/pgsql-15.0/bin'    --dbname=contrib_regression my_extension--regress.sql
+++ regress install-check in  +++
(using postmaster on Unix socket, default port)
============== dropping database "contrib_regression" ==============
SET
DROP DATABASE
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
ALTER DATABASE
ALTER DATABASE
ALTER DATABASE
ALTER DATABASE
ALTER DATABASE
============== running regression test queries        ==============
test my_extension--regress.sql    ... /bin/sh: /Users/spartacus/Desktop/GSoC/CODE/my_extension/sql/my_extension--regress.sql.sql: No such file or directory
diff: /Users/spartacus/Desktop/GSoC/CODE/my_extension/expected/my_extension--regress.sql.out: No such file or directory
diff: /Users/spartacus/Desktop/GSoC/CODE/my_extension/results/my_extension--regress.sql.out: No such file or directory
diff command failed with status 512: diff  "/Users/spartacus/Desktop/GSoC/CODE/my_extension/expected/my_extension--regress.sql.out" "/Users/spartacus/Desktop/GSoC/CODE/my_extension/results/my_extension--regress.sql.out" > "/Users/spartacus/Desktop/GSoC/CODE/my_extension/results/my_extension--regress.sql.out.diff"
make: *** [installcheck] Error 2
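Judging by the doubled ".sql" in the paths above, pg_regress appends ".sql" to each REGRESS entry itself: entries are test names, resolved as sql/<name>.sql, with expected output compared against expected/<name>.out. A guess at the intended Makefile, keeping the existing file name (a matching expected/my_extension--regress.out would still need to exist):
EXTENSION = my_extension
DATA = my_extension--1.0.sql
# test name only; pg_regress runs sql/my_extension--regress.sql
# and diffs the result against expected/my_extension--regress.out
REGRESS = my_extension--regress

PG_CONFIG  ?= pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)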
Ishaan Adarsh (133 rep)
Jun 26, 2023, 09:47 AM • Last activity: Jul 22, 2025, 06:06 PM
3 votes
1 answer
9482 views
Is it possible to add Postgres extensions to AWS Aurora?
I am trying to future-proof a Postgres database so it can be transferred to AWS Aurora. I am using the uuid-ossp extension. Is it possible to add Postgres extensions to AWS Aurora? In particular, I am interested in whether the uuid-ossp extension can be used in Aurora.
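For reference, uuid-ossp appears on the supported-extension lists for Aurora PostgreSQL; if it is supported for your engine version, enabling it is the usual statement, run as a user with the rds_superuser role:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
SELECT uuid_generate_v4();   -- quick smoke test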
mikelus (343 rep)
Jun 11, 2020, 04:18 PM • Last activity: Jun 26, 2025, 11:06 PM
3 votes
1 answer
441 views
Installing Postgresql parray_gin extension on mac, include not found
I am trying to install parray_gin in PostgreSQL 10 on macOS, but I can't get it to build. I'm running pgxn install parray_gin. This first gave me a missing stdio.h error, so I installed the Xcode command line tools, and now it's complaining about:
/Library/PostgreSQL/10/include/postgresql/server/utils/pg_locale.h:19:10: fatal error: 'unicode/ucol.h' file not found
After some Googling it appeared I needed to install an extra library, which I did through brew install icu4c. This printed:
This formula is keg-only, which means it was not symlinked into /usr/local, because macOS provides libicucore.dylib (but nothing else).
If you need to have this software first in your PATH run:
echo 'export PATH="/usr/local/opt/icu4c/bin:$PATH"' >> ~/.bash_profile
echo 'export PATH="/usr/local/opt/icu4c/sbin:$PATH"' >> ~/.bash_profile
For compilers to find this software you may need to set:
LDFLAGS: -L/usr/local/opt/icu4c/lib
CPPFLAGS: -I/usr/local/opt/icu4c/include
But it didn't change a thing, and I have no idea how to pass /usr/local/opt/icu4c/include to pgxn.
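One thing that may be worth trying (untested here): clang and gcc also honour the CPATH and LIBRARY_PATH environment variables, and since these are read by the compiler itself rather than by make, they survive into the build that pgxn kicks off. Exporting the keg-only icu4c paths that way might let the build find unicode/ucol.h:
export CPATH="/usr/local/opt/icu4c/include${CPATH:+:$CPATH}"
export LIBRARY_PATH="/usr/local/opt/icu4c/lib${LIBRARY_PATH:+:$LIBRARY_PATH}"
pgxn install parray_gin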
Joris (131 rep)
Feb 28, 2018, 08:16 AM • Last activity: Jun 5, 2025, 08:05 PM
1 votes
1 answer
271 views
Can I do a backup without extension functions?
I use the PostGIS extension on my database. The thing is, when I do a backup and then a restore, all the PostGIS functions give a warning during the restore process, because the functions already exist, as the extension is already installed. I have to scroll through all the warnings to make sure there isn't a real error. So what is the right way to backup/restore, so I can have a clean restore process?
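A minimal sketch of one common way to get a quieter restore, assuming PostGIS was installed with CREATE EXTENSION (custom-format dump, then drop and recreate objects on restore instead of colliding with existing ones; database and file names are placeholders):
pg_dump -Fc -f mydb.dump mydb
pg_restore --clean --if-exists -d mydb mydb.dump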
Juan Carlos Oropeza (427 rep)
Dec 2, 2019, 04:26 PM • Last activity: May 20, 2025, 04:02 AM
0 votes
1 answer
905 views
Could not Access File $libdir/vector No Such File or Directory
I am not a DBA; I am just trying to test and experiment. I am on a Windows 11 machine, using pgAdmin, with PostgreSQL 15 installed. When I run CREATE EXTENSION VECTOR, I get the following error message:
Could not access file "$libdir/vector": No such file or directory
How can I find out where "$libdir" is pointing to?
**Edit #1:** Here is what I used to install pgvector: https://github.com/pgvector/pgvector#windows
**Edit #2:** Based on comments below, I thought $libdir was being set by PGRoot, but when I added a vector directory to the C:\Program Files\PostgreSQL\15 directory, the same error message came up.
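For what it's worth, $libdir expands to the server's package library directory. It can be read from SQL via the pg_config view (PostgreSQL 9.6+, may require superuser) or from the pg_config command-line tool:
SELECT setting FROM pg_config WHERE name = 'PKGLIBDIR';
-- or from a command prompt: "C:\Program Files\PostgreSQL\15\bin\pg_config" --pkglibdir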
Chicken Sandwich No Pickles (117 rep)
Nov 21, 2024, 05:42 PM • Last activity: Mar 29, 2025, 06:29 AM
0 votes
0 answers
39 views
ERROR: called Result::unwrap() on an Err value: ()
I was working with the pgml extension, which I installed and ran through Docker. When I tried to tune an existing model named cssupport/t5-small-awesome-text-to-sql on my new dataset (stored in a table) through **pgml.train()**, I got this issue:
postgresml=# SELECT * FROM pgml.train(
    project_name => 'odoo_sql_generator',
    task => 'text2text-generation',
    relation_name => 'public.training_pairs',
    y_column_name => 'target_text',
    hyperparams => '{
        "model": "cssupport/t5-small-awesome-text-to-sql",
        "learning_rate": 5e-5,
        "batch_size": 8,
        "epochs": 3,
        "pretrained": true
    }'::JSONB
);
ERROR: called Result::unwrap() on an Err value: ()
Shiju Sivadazz (1 rep)
Mar 6, 2025, 04:27 AM • Last activity: Mar 6, 2025, 09:32 AM
2 votes
1 answer
2895 views
Can't enable wal2json in postgres 12 docker image
I'm trying to enable wal2json in an image extended from the postgres:12 image, but I'm getting an error:
postgres=# create extension wal2json;
ERROR:  could not open extension control file "/usr/share/postgresql/12/extension/wal2json.control": No such file or directory
This might be because wal2json is also missing from pg_available_extensions:
postgres=# select * from pg_available_extensions where name ilike '%%json%%';
 name | default_version | installed_version | comment 
------+-----------------+-------------------+---------
(0 rows)
Dockerfile:
FROM postgres:12
RUN apt update && apt install -y postgresql-12-wal2json postgresql-contrib
postgres server logs:
db_1  | 2020-05-10 13:53:04.611 GMT  LOG:  database system is ready to accept connections
db_1  | 2020-05-10 13:54:03.587 GMT  ERROR:  could not open extension control file "/usr/share/postgresql/12/extension/wal2json.control": No such file or directory
db_1  | 2020-05-10 13:54:03.587 GMT  STATEMENT:  create extension wal2json;
The postgresql.conf file has been tweaked to:
wal_level = logical		
shared_preload_libraries = 'wal2json'
Any help with getting this working will be appreciated.
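A side note that may explain the missing .control file: wal2json is a logical-decoding output plugin rather than a CREATE EXTENSION-style extension, so once the Debian package has installed the shared library it is exercised through a logical replication slot instead, roughly:
SELECT * FROM pg_create_logical_replication_slot('test_slot', 'wal2json');
SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL);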
Ayush (123 rep)
May 10, 2020, 02:12 PM • Last activity: Feb 9, 2025, 05:18 AM
0 votes
1 answer
48 views
CloudSQL for Postgres S3 extension
In AWS RDS for Postgres, there is an extension called AWS_S3 that provides functions within Postgres that allow me to import data directly from a bucket into a table and export data from a table directly into a bucket. Example:
SELECT aws_s3.table_import_from_s3(
   'test_gzip', '', '(csv format)',
   'myS3Bucket', 'test-data.gz', 'us-east-2'
);
There's nothing similar in CloudSQL for Postgres. Has anyone had this type of problem? How can I solve it?
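One possible workaround, since Cloud SQL has no aws_s3-style SQL function: drive the load from outside the database with the gcloud import command (instance, bucket, database and table names below are placeholders):
gcloud sql import csv my-instance gs://my-bucket/test-data.csv \
    --database=mydb --table=test_gzip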
Jose Rocha (45 rep)
Mar 17, 2024, 02:27 AM • Last activity: Jan 3, 2025, 07:34 AM
17 votes
1 answer
36236 views
Postgresql enabling extensions without super-user
I have a PostgreSQL 9.5 server on which I have scripts that create roles and databases for users automatically. Within these databases it would be helpful to enable specific extensions (e.g. pgcrypto), but as I understand it one must be a superuser to run CREATE EXTENSION. Is there a way to enable such extensions without manually logging in with a superuser account?
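On 9.5, one common workaround is to have a superuser install the extension into template1 once; every database created afterwards then inherits it, so the automated role/database scripts never need superuser rights. A minimal sketch:
-- run once, as a superuser, while connected to template1
CREATE EXTENSION IF NOT EXISTS pgcrypto;
-- databases created from now on (CREATE DATABASE defaults to template1)
-- already contain pgcrypto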
beldaz (1750 rep)
Jun 3, 2017, 01:26 AM • Last activity: Dec 4, 2024, 04:40 PM
0 votes
0 answers
146 views
Run `partman.create_parent` Idempotently
# Problem Description
I'm using the Postgres extension pg_partman 5.1 to partition a set of tables inside a number of initialization scripts. I'm using these scripts both to set up remote databases and as part of a local Docker testing environment. As part of this setup process I'm running partman.create_parent, which I can successfully run once. But I'm not able to run it a second time, because of a key constraint violated by an insert inside the called function. I would love to be able to run these setup scripts multiple times as developers make small tweaks to the system. I could always destroy and re-create the entire system from scratch in the Docker environment, but being able to re-run the init scripts and get no-ops when this has already been run once would be excellent. Another option would be checking whether the public.all_observations key already exists and, if so, not running the query, but I'm not 100% sure how to accomplish that.
# Specific Statement
SELECT partman.create_parent(
        p_parent_table :='public.all_observations',
        p_control :='ts',
        p_type :='range',
        p_interval :='1 day',
        p_start_partition :='2024-05-01',
        p_template_table :='public.all_observations_template');
# Things I've Tried
DO $$ BEGIN
    SELECT partman.create_parent(
        p_parent_table :='public.all_observations',
        p_control :='ts',
        p_type :='range',
        p_interval :='1 day',
        p_start_partition :='2024-05-01',
        p_template_table :='public.all_observations_template');
EXCEPTION
       WHEN unique_violation THEN null;
END $$;
# Error I'm Hitting
Traceback (most recent call last):
  File "/Users/alexlordthorsen/git/iocurrents-services/configuration_management/helper_scripts/bootstrap_dbs.py", line 189, in 
    main()
  File "/Users/alexlordthorsen/git/iocurrents-services/configuration_management/helper_scripts/bootstrap_dbs.py", line 185, in main
    run_bootstrap_file(conn, file_path, env, template_args)
  File "/Users/alexlordthorsen/git/iocurrents-services/configuration_management/helper_scripts/bootstrap_dbs.py", line 147, in run_bootstrap_file
    cursor.execute(sql)
psycopg2.errors.RaiseException: duplicate key value violates unique constraint "part_config_parent_table_pkey"
CONTEXT: SQL statement "INSERT INTO partman.part_config (
        parent_table
        , partition_type
        , partition_interval
        , epoch
        , control
        , premake
        , constraint_cols
        , datetime_string
        , automatic_maintenance
        , jobmon
        , template_table
        , inherit_privileges
        , default_table
        , date_trunc_interval)
    VALUES (
        p_parent_table
        , p_type
        , v_time_interval
        , p_epoch
        , p_control
        , p_premake
        , p_constraint_cols
        , v_datetime_string
        , p_automatic_maintenance
        , p_jobmon
        , v_template_schema||'.'||v_template_tablename
        , v_inherit_privileges
        , p_default_table
        , p_date_trunc_interval)"
PL/pgSQL function partman.create_parent(text,text,text,text,text,integer,text,boolean,text,text[],text,boolean,text) line 345 at SQL statement
SQL statement "SELECT partman.create_parent(
        p_parent_table :='public.all_observations',
        p_control :='ts',
        p_type :='range',
        p_interval :='1 day',
        p_start_partition :='2024-05-01',
        p_template_table :='public.all_observations_template')"
PL/pgSQL function inline_code_block line 2 at SQL statement
DETAIL: Key (parent_table)=(public.all_observations) already exists.
HINT:
CONTEXT:  PL/pgSQL function partman.create_parent(text,text,text,text,text,integer,text,boolean,text,text[],text,boolean,text) line 647 at RAISE
SQL statement "SELECT partman.create_parent(
        p_parent_table :='public.all_observations',
        p_control :='ts',
        p_type :='range',
        p_interval :='1 day',
        p_start_partition :='2024-05-01',
        p_template_table :='public.all_observations_template')"
PL/pgSQL function inline_code_block line 2 at SQL statement
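A sketch of the "check part_config first" idea from the problem description: only call create_parent() when no configuration row exists yet for the parent table (this relies on partman.part_config, which the error above shows is where the duplicate key lives):
SELECT partman.create_parent(
        p_parent_table :='public.all_observations',
        p_control :='ts',
        p_type :='range',
        p_interval :='1 day',
        p_start_partition :='2024-05-01',
        p_template_table :='public.all_observations_template')
WHERE NOT EXISTS (
        SELECT 1
        FROM partman.part_config
        WHERE parent_table = 'public.all_observations');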
AlexLordThorsen (133 rep)
Oct 30, 2024, 06:23 PM
1 votes
1 answer
52 views
Optimizing Query to Retrieve Names from pg_buffercache View in PostgreSQL
I have a database with a large number of tables / partitioned tables, where I need to retrieve the database name, relation name, parent relation name and tablespace name instead of their respective OIDs from the pg_buffercache view. My current query involves joining pg_buffercache with pg_database, pg_class, pg_inherits and pg_tablespace to replace the OIDs with the relevant names. However, this query becomes extremely slow when dealing with a large number of tables. Here is the query I am using:
SELECT 
    bufferid,
    pgc1.relname AS relation_name,
    pgc2.relname AS parent_relation_name,
    pgt.spcname AS tablespace_name,
    CASE 
        WHEN pgb.reldatabase = 0 THEN 'Shared system catalogs' 
        ELSE pgd.datname 
    END AS database_name,
    relforknumber,
    relblocknumber,
    isdirty,
    usagecount,
    pinning_backends
FROM 
    pg_buffercache AS pgb
LEFT JOIN 
    pg_class AS pgc1 
    ON pgb.reldatabase IN (0, (SELECT oid FROM pg_database WHERE datname = current_database()))
    AND pgb.relfilenode = pg_relation_filenode(pgc1.oid)
LEFT JOIN 
    pg_inherits AS pgi 
    ON pgc1.oid = pgi.inhrelid
LEFT JOIN 
    pg_class AS pgc2 
    ON pgi.inhparent = pgc2.oid
LEFT JOIN 
    pg_tablespace AS pgt 
    ON pgb.reltablespace = pgt.oid
LEFT JOIN 
    pg_database AS pgd 
    ON pgb.reldatabase = pgd.oid;
Here is the generated query plan, with a lot of Seq Scans:
postgres=> explain analyze SELECT bufferid, pgc1.relname, pgc2.relname, pgt.spcname, CASE WHEN pgb.reldatabase = 0 THEN 'Shared system catalogs' ELSE pgd.datname END, relforknumber, relblocknumber, isdirty, usagecount, pinning_backends FROM pg_buffercache AS pgb LEFT JOIN pg_class AS pgc1 ON pgb.reldatabase IN (0, (SELECT oid FROM pg_database where datname = current_database())) AND pgb.relfilenode = pg_relation_filenode(pgc1.oid) LEFT JOIN pg_inherits AS pgi ON pgc1.oid = pgi.inhrelid LEFT JOIN pg_class AS pgc2 ON pgi.inhparent = pgc2.oid LEFT JOIN pg_tablespace AS pgt ON pgb.reltablespace = pgt.oid LEFT JOIN pg_database AS pgd ON pgb.reldatabase = pgd.oid;
                                                                                 QUERY PLAN                                                                                  
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=197913.73..306625.37 rows=27197 width=277) (actual time=75543.679..202806.553 rows=66659578 loops=1)
   InitPlan 1 (returns $0)
     ->  Seq Scan on pg_database  (cost=0.00..1.03 rows=1 width=4) (actual time=0.018..0.019 rows=1 loops=1)
           Filter: (datname = current_database())
           Rows Removed by Filter: 3
   ->  Nested Loop Left Join  (cost=197912.28..267521.16 rows=27197 width=221) (actual time=75543.666..149336.903 rows=66659578 loops=1)
         ->  Merge Left Join  (cost=197911.85..253672.28 rows=27197 width=221) (actual time=75543.637..96045.317 rows=66659578 loops=1)
               Merge Cond: (p.relfilenode = (pg_relation_filenode((pgc1.oid)::regclass)))
               Join Filter: (p.reldatabase = ANY (ARRAY['0'::oid, $0]))
               Rows Removed by Join Filter: 616
               ->  Sort  (cost=67.28..69.78 rows=1000 width=157) (actual time=74868.339..82287.389 rows=66659578 loops=1)
                     Sort Key: p.relfilenode
                     Sort Method: external merge  Disk: 1583216kB
                     ->  Hash Left Join  (cost=2.09..17.45 rows=1000 width=157) (actual time=17376.288..46385.909 rows=66659578 loops=1)
                           Hash Cond: (p.reldatabase = pgd.oid)
                           ->  Hash Left Join  (cost=1.05..13.73 rows=1000 width=93) (actual time=17376.266..37425.982 rows=66659578 loops=1)
                                 Hash Cond: (p.reltablespace = pgt.oid)
                                 ->  Function Scan on pg_buffercache_pages p  (cost=0.00..10.00 rows=1000 width=33) (actual time=17376.226..25074.313 rows=66659578 loops=1)
                                 ->  Hash  (cost=1.02..1.02 rows=2 width=68) (actual time=0.023..0.024 rows=3 loops=1)
                                       Buckets: 1024  Batches: 1  Memory Usage: 9kB
                                       ->  Seq Scan on pg_tablespace pgt  (cost=0.00..1.02 rows=2 width=68) (actual time=0.015..0.016 rows=3 loops=1)
                           ->  Hash  (cost=1.02..1.02 rows=2 width=68) (actual time=0.009..0.010 rows=4 loops=1)
                                 Buckets: 1024  Batches: 1  Memory Usage: 9kB
                                 ->  Seq Scan on pg_database pgd  (cost=0.00..1.02 rows=2 width=68) (actual time=0.004..0.005 rows=4 loops=1)
               ->  Materialize  (cost=197844.57..200564.23 rows=543931 width=68) (actual time=675.255..893.286 rows=1001258 loops=1)
                     ->  Sort  (cost=197844.57..199204.40 rows=543931 width=68) (actual time=675.247..763.910 rows=490273 loops=1)
                           Sort Key: (pg_relation_filenode((pgc1.oid)::regclass))
                           Sort Method: external merge  Disk: 43176kB
                           ->  Seq Scan on pg_class pgc1  (cost=0.00..123714.31 rows=543931 width=68) (actual time=0.016..448.284 rows=539414 loops=1)
         ->  Index Scan using pg_inherits_relid_seqno_index on pg_inherits pgi  (cost=0.42..0.50 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=66659578)
               Index Cond: (inhrelid = pgc1.oid)
   ->  Index Scan using pg_class_oid_index on pg_class pgc2  (cost=0.42..1.44 rows=1 width=68) (actual time=0.000..0.000 rows=0 loops=66659578)
         Index Cond: (oid = pgi.inhparent)
 Planning Time: 1.101 ms
 Execution Time: 206171.454 ms
(35 rows)
How can I optimize this query to make it more efficient (in terms of execution time)? Are there any specific techniques or best practices to ensure the joins use the indexes already created on pg_class and other catalog tables effectively?
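One direction that may help, offered as a sketch rather than a tuned answer: the 66M-row function scan is being joined row by row against pg_class, so aggregating per relfilenode first shrinks the join input drastically, at the cost of returning per-relation buffer counts instead of one row per buffer:
WITH buffers AS (
    SELECT reldatabase, reltablespace, relfilenode, count(*) AS buffer_count
    FROM pg_buffercache
    GROUP BY reldatabase, reltablespace, relfilenode
)
SELECT b.buffer_count,
       pgc1.relname AS relation_name,
       pgt.spcname  AS tablespace_name,
       CASE WHEN b.reldatabase = 0 THEN 'Shared system catalogs'
            ELSE pgd.datname END AS database_name
FROM buffers AS b
LEFT JOIN pg_class AS pgc1 ON b.relfilenode = pg_relation_filenode(pgc1.oid)
LEFT JOIN pg_tablespace AS pgt ON b.reltablespace = pgt.oid
LEFT JOIN pg_database AS pgd ON b.reldatabase = pgd.oid
ORDER BY b.buffer_count DESC;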
Ayush Vatsa (55 rep)
Jul 30, 2024, 11:40 AM • Last activity: Jul 30, 2024, 12:25 PM
0 votes
1 answer
116 views
Are pg_cron and pgaudit database-specific extensions, or is one installation per server enough?
I'm quite versed in PostgreSQL, but new to any extension other than PostGIS. For example, one of the first things I do when setting up an instance is to install PostGIS in template1, so it's available in every DB I create (most of them handle geospatial data). I'd like something similar for pg_cron and pgaudit, but I'm not sure it's required. Are these extensions "one per cluster", or do they need installing in each separate database? I believe pgaudit is per server, as it records everything, but confirmation would be helpful. pg_cron I'm not so sure about. Thanks all!
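To the best of my understanding (worth verifying against each project's docs): both are loaded once per cluster via shared_preload_libraries; pg_cron's CREATE EXTENSION is run in exactly one database (the one named in cron.database_name); and pgaudit's session logging is driven by GUCs, with CREATE EXTENSION pgaudit needed per database only for object-level auditing. A sketch of the postgresql.conf side:
shared_preload_libraries = 'pg_cron, pgaudit'
cron.database_name = 'postgres'   # the single DB where CREATE EXTENSION pg_cron is run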
Chris (185 rep)
Jun 17, 2024, 05:45 PM • Last activity: Jun 17, 2024, 06:13 PM
5 votes
3 answers
2500 views
How to match pg_repack binary and DB version
I am using the command sudo apt-get install postgresql-11-repack to install pg_repack (defaulting to 1.4.8); postgresql-11-repack is already the newest version (1.4.8-1.pgdg20.04+1). When I run create extension pg_repack in RDS Postgres 11, it is created by default with version 1.4.4:
                             List of installed extensions
    Name      | Version | Schema |                         Description
--------------+---------+--------+--------------------------------------------------------------
 pg_repack    | 1.4.4   | public | Reorganize tables in PostgreSQL databases with minimal locks
So pg_repack fails due to the mismatch in version number between EC2 and the DB:
ERROR: pg_repack failed with error: program 'pg_repack 1.4.8' does not match database library 'pg_repack 1.4.4'
1) How do I create the repack extension with version 1.4.8 in the DB?
2) If the above is not possible, how do I downgrade my pg_repack version on EC2 back to 1.4.4, the same as my DB?
3) AWS does not allow me to add this extension under "shared_preload_libraries" either.
Error compiling from source:
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -fno-omit-frame-pointer pg_repack.o pgut/pgut.o pgut/pgut-fe.o -L/usr/lib/x86_64-linux-gnu -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -L/usr/lib/llvm-10/lib -L/usr/lib/x86_64-linux-gnu/mit-krb5 -Wl,--as-needed -L/usr/lib/x86_64-linux-gnu -lpq -L/usr/lib/postgresql/11/lib -lpgcommon -lpgport -lpthread -lssl -lcrypto -lz -lrt -lcrypt -ldl -lm -o pg_repack
/usr/bin/ld: cannot find -lpgcommon
/usr/bin/ld: cannot find -lpgport
collect2: error: ld returned 1 exit status
make: *** [/usr/lib/postgresql/11/lib/pgxs/src/makefiles/pgxs.mk:430: pg_repack] Error 1
make: Leaving directory '/home/user/pg_repack-ver_1.4.4/bin'
make: *** [Makefile:35: all] Error 2
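One avenue for question 2, assuming the older client build is still published in the PGDG apt repository: list the package versions available and pin the one matching the server's 1.4.4 library (the exact version string is a placeholder; take it from the madison output):
apt-cache madison postgresql-11-repack
# sudo apt-get install postgresql-11-repack=<exact 1.4.4 version from the list above>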
Falcon (101 rep)
Dec 19, 2022, 08:57 PM • Last activity: Apr 26, 2024, 04:30 PM
2 votes
1 answer
167 views
How to DROP a function in postgres extension upgrade script with required cleanup
Suppose we have a hypothetical extension named xyz, at version 1.0. It has a xyz--1.0.sql file and a xyz.c file. I have declared a function named fun() in the xyz--1.0.sql file, with its definition in the xyz.c file. Now I want to drop this function in the next upgrade, i.e. xyz--1.0--1.1, so I will put DROP FUNCTION fun(); in the upgrade script and remove the definition from the xyz.c file. My doubt is: won't xyz--1.0 complain about the missing definition of fun(), and if yes, how can I clean up the function definition in my xyz.c file?
Ayush Vatsa (55 rep)
Feb 8, 2024, 06:41 PM • Last activity: Feb 15, 2024, 12:19 PM
0 votes
1 answer
108 views
PostgreSQL stored C function is 60 times slower than standalone C code
I am trying to build a stored function in C which takes one array of bigint as an argument and returns another array that is a copy of the parameter array, but with all elements incremented by 1. My problem is that when I run this code as a standalone program it runs very fast (0.1 s for 10M elements), but it takes 6 s when run as a PostgreSQL C stored procedure. I am wondering if someone can spot whether I am doing anything obviously wrong. My C stored procedure:
#include "postgres.h"
#include "fmgr.h"
#include "utils/builtins.h"
#include "utils/array.h"
#include "catalog/pg_type.h"

PG_MODULE_MAGIC;

#define ARRPTR(x)  ( (int64 *) ARR_DATA_PTR(x) )
#define ARRNELEMS(x)  ArrayGetNItems(ARR_NDIM(x), ARR_DIMS(x))

PG_FUNCTION_INFO_V1(test_inc);
Datum
test_inc(PG_FUNCTION_ARGS)
{
    ArrayType  *a = PG_GETARG_ARRAYTYPE_P(0);
    int n = ARRNELEMS(a);
    int nbytes = ARR_OVERHEAD_NONULLS(1) + sizeof(int64) * n;

    ArrayType  *r = (ArrayType *) palloc0(nbytes);
    SET_VARSIZE(r, nbytes);
    ARR_NDIM(r) = 1;
    r->dataoffset = 0; // marker for no null bitmap
    ARR_ELEMTYPE(r) = INT8OID;
    ARR_DIMS(r)[0] = n;
    ARR_LBOUND(r)[0] = 1;

    int64 *ad = ARRPTR(a);
    int64 *rd = ARRPTR(r);

    ereport(WARNING,
        errcode(ERRCODE_WARNING),
        errmsg("Before loop"));
    for (int i = 0; i < n; i++)
        rd[i] = ad[i] + 1;

    PG_RETURN_ARRAYTYPE_P(r);
}
The standalone program for comparison:
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char ** argv) {
    long n = 10000000;
    long *a = (long*)malloc(n * sizeof(long));
    long *r = (long*)malloc(n * sizeof(long));
    for (int i = 0; i < n; i++) {
    	a[i] = i;
    }
    for (int i = 0; i < n; i++) {
    	r[i] = a[i] + 1;
    }
    // To make sure compiler does not remove previous loop because result is unused.
    long res = 0;
    for (int i = 0; i < n; i++) {
    	res += r[i];
    }

    printf("%ld", res);
}
Riku Iki (11 rep)
Jan 20, 2024, 05:15 AM • Last activity: Jan 20, 2024, 05:25 PM
0 votes
1 answer
49 views
Is there any way to get the comment string for an extension, as set in the extension control file?
I'm writing a foreign data wrapper program as a PostgreSQL extension. Is there any way to get the comment string set in the extension control file from my C program? The comment was created by PostgreSQL when I ran CREATE EXTENSION myFDW.
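The comment from the control file ends up in pg_description, so one way to read it back (directly in SQL, or via SPI from the C code) is obj_description() against pg_extension; the extension name below matches the question:
SELECT obj_description(e.oid, 'pg_extension') AS extension_comment
FROM pg_extension AS e
WHERE e.extname = 'myFDW';
-- note: unquoted names in CREATE EXTENSION are folded to lower case,
-- so the stored extname may actually be 'myfdw'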
yasoo7964 (33 rep)
Oct 26, 2023, 08:08 AM • Last activity: Oct 30, 2023, 07:38 AM
1 votes
0 answers
69 views
Writing .spec files for PostgreSQL isolation tests
I am developing a PostgreSQL extension for which I would like to run some tests with concurrent connections. If I understand [the documentation](https://www.postgresql.org/docs/current/extend-pgxs.html) correctly, I should be able to do so using the PGXS framework that comes with Postgres. However, it is mentioned that such isolation tests (listed in the ISOLATION variable in my Makefile) have to be specified in a .spec file, and nowhere can I find a mention of how to write such a .spec file. I've failed to find an example in the documentation and I've similarly failed to find example code online. I know how to write the .sql tests that one can list in the REGRESS variable. Thus, my question is not about how to write tests in general. I specifically would like to know how to write a .spec file.
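For reference, a rough example of the shape of such a file, pieced together from the specs shipped in src/test/isolation of the PostgreSQL source tree (that directory's README and existing *.spec files are the closest thing to official documentation). Older branches quote session and step names (session "s1"), newer ones accept unquoted identifiers; the table and step names here are invented:
setup
{
    CREATE TABLE counter (id int PRIMARY KEY, n int);
    INSERT INTO counter VALUES (1, 0);
}

teardown
{
    DROP TABLE counter;
}

session s1
step s1_begin  { BEGIN; }
step s1_update { UPDATE counter SET n = n + 1 WHERE id = 1; }
step s1_commit { COMMIT; }

session s2
step s2_read   { SELECT n FROM counter WHERE id = 1; }

permutation s1_begin s1_update s2_read s1_commit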
BigSmoke (814 rep)
Sep 1, 2023, 11:47 AM
0 votes
2 answers
653 views
PostgreSQL query using BRIN index is slow until REINDEX/VACUUM
PostgreSQL 15.2. I have a partitioned table like this:
create table event
(
    dataset_id integer not null,
    event_id integer not null,
    second_offset integer not null,
    -- other columns
) partition by list (dataset_id);
Partitions of this table have a PK on event_id and a BRIN index on second_offset:
create index event_DATASET_ID_ix_second_offset on event_DATASET_ID using brin (second_offset);
Some of these have hundreds of millions of rows. second_offset is the time of the event, and the row is inserted soon after it occurs, so it closely (but not perfectly) follows the physical order of rows. Rows are never updated or deleted in these tables, only inserted and read. I run queries like this (simplified):
set enable_seqscan = off; 

select *
from event
where dataset_id = 365
    and second_offset = timestamp_to_second_offset('2023-05-09')
They use the index, but are still slow. EXPLAIN ANALYZE shows:
Bitmap Heap Scan on event_365 event  (cost=453.13..2689210.37 rows=1322 width=54) (actual time=40651.983..40651.984 rows=0 loops=1)
  Recheck Cond: ((second_offset >= 405648000) AND (second_offset   Bitmap Index Scan on event_365_ix_second_offset  (cost=0.00..452.80 rows=52893390 width=0) (actual time=73.633..73.634 rows=17629850 loops=1)
        Index Cond: ((second_offset >= 405648000) AND (second_offset = 405648000) AND (second_offset   Bitmap Index Scan on event_365_ix_second_offset  (cost=0.00..594.80 rows=238696656 width=0) (actual time=5.008..5.008 rows=0 loops=1)
        Index Cond: ((second_offset >= 405648000) AND (second_offset <= 405734399))
Planning Time: 0.771 ms
JIT:
  Functions: 6
  Options: Inlining true, Optimization true, Expressions true, Deforming true
  Timing: Generation 1.642 ms, Inlining 0.000 ms, Optimization 0.000 ms, Emission 0.000 ms, Total 1.642 ms
Execution Time: 6.940 ms
So my question is how to ensure the queries always run fast. Should I add a cron job that reindexes all of them nightly? That seems a bit hacky, and like something PostgreSQL should do automatically, doesn't it? pg_stat_user_tables shows that autovacuum never runs on most of these partitions - I don't know why. So I don't know if I should instead try to force autovacuum to run somehow.
|relid|schemaname|relname|seq_scan|seq_tup_read|idx_scan|idx_tup_fetch|n_tup_ins|n_tup_upd|n_tup_del|n_tup_hot_upd|n_live_tup|n_dead_tup|n_mod_since_analyze|n_ins_since_vacuum|last_vacuum|last_autovacuum|last_analyze|last_autoanalyze|vacuum_count|autovacuum_count|analyze_count|autoanalyze_count|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|8224073|behavior|event_365|26|4773933120|18135903|9046114024|238696656|0|0|0|238696656|0|238696656|238696656|||||0|0|0|0|
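Two levers that might avoid a nightly REINDEX, offered only as a sketch (the thresholds are illustrative): let the BRIN index summarize new page ranges automatically, and, on PostgreSQL 13+, lower the insert-driven autovacuum thresholds so these insert-only partitions get vacuumed (and their BRIN ranges summarized) regularly:
ALTER INDEX event_365_ix_second_offset SET (autosummarize = on);
ALTER TABLE behavior.event_365
    SET (autovacuum_vacuum_insert_threshold = 100000,
         autovacuum_vacuum_insert_scale_factor = 0.01);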
EM0 (250 rep)
Jun 28, 2023, 02:59 PM • Last activity: Jun 29, 2023, 03:04 PM