
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

31 votes
7 answers
51052 views
PostgreSQL alternative to SQL Server’s `try_cast` function
Microsoft SQL Server has what I consider a remarkably sensible function, try_cast(), which returns a null if the cast is unsuccessful, rather than raising an error. This makes it possible to then use a CASE expression or a coalesce to fall back on. For example:

SELECT coalesce(try_cast(data as int), 0);

The question is: does PostgreSQL have something similar? The question is asked to fill in some gaps in my knowledge, but there's also the general principle that some prefer a less dramatic reaction to some user errors. Returning a null is more easily taken in one's stride in SQL than an error. For example:

SELECT * FROM data WHERE try_cast(value) IS NOT NULL;

In my experience, user errors are sometimes better handled if there is a plan B.
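PostgreSQL itself has no built-in try_cast(), so a common workaround is a small PL/pgSQL wrapper that traps the conversion error. A minimal sketch, assuming PL/pgSQL is available (the name try_cast_int and the int-only signature are my own, not from the question):

CREATE OR REPLACE FUNCTION try_cast_int(p text)
RETURNS integer
LANGUAGE plpgsql
IMMUTABLE
AS $$
BEGIN
    RETURN p::integer;              -- succeeds for valid integer input
EXCEPTION WHEN others THEN
    RETURN NULL;                    -- swallow the cast error, fall back to NULL
END;
$$;

SELECT coalesce(try_cast_int('abc'), 0);   -- returns 0 instead of raising an error

On PostgreSQL 16 and later there is also pg_input_is_valid('abc', 'integer'), which can be used in a CASE expression to the same effect without writing a function.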
Manngo (3145 rep)
Apr 14, 2018, 06:20 AM • Last activity: Aug 6, 2025, 04:15 PM
1 votes
1 answers
668 views
Best way to cast a VIEW row type to the underlying TABLE row type
I have a table with an index and a row-level security policy. Due to [this problem](https://dba.stackexchange.com/q/232789/188406) (more details: [1](https://stackoverflow.com/q/48230535/1048572), [2](https://www.postgresql.org/message-id/flat/2811772.0XtDgEdalL@peanuts2), [3](https://stackoverflow.com/q/63008838/1048572), [4](https://www.postgresql.org/message-id/flat/CAGrP7a2t+JbeuxpQY+RSvNe4fr3+==UmyimwV0GCD+wcrSSb=w@mail.gmail.com)), the index is not used when the policy applies, which makes my queries unbearably slow.

The workaround I am contemplating would be to create a VIEW with security_invoker = false and security_barrier = false. (If I do enable the security_barrier, the query again doesn't use the index.) The problem I am facing now is that I cannot just change the queries to use FROM my_view AS example instead of FROM my_table AS example, since some of them use functions that are defined to take the my_table composite type. A simplified example:

CREATE TABLE example (
    id int,
    name text,
    is_visible boolean
);
CREATE VIEW test AS SELECT * FROM example WHERE is_visible;
CREATE FUNCTION prop(e example) RETURNS text LANGUAGE SQL AS $$
    SELECT e.id::text || ': ' || e.name;
$$;

SELECT e.prop FROM example e; -- works
SELECT e.prop FROM test e;    -- ERROR: column e.prop does not exist

([online demo](https://dbfiddle.uk/cb0bn3NV))

Now the question is **how to cast the rows to the expected type?**

There is [this question](https://dba.stackexchange.com/q/247240/188406) and I also found a way to do this using the ROW constructor, but I'm not certain how good this is:

SELECT e.prop FROM (SELECT (ROW(test.*)::example).* FROM test) e;

It's nice that I can just use it as a drop-in replacement for the table expression (without changing anything else in the query), and it does work (Postgres accepts it and does use my index when I have the respective WHERE clause), but it looks horrible. Are there problems with my approach that I am missing? Is there a better solution?
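As a small aside (mine, not from the question): when the call sites can be touched, the function can also be invoked directly with a row-constructor cast, which avoids the nested subselect; this relies on a ROW(...) constructor being castable to a composite type whose fields match in number and type:

SELECT prop(ROW(t.*)::example) AS prop
FROM test t;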
Bergi (514 rep)
Oct 18, 2022, 11:14 AM • Last activity: Aug 6, 2025, 03:06 PM
1 votes
1 answers
2502 views
pgAdmin 4 Not connected to the server
I've installed Postgresql 11 and pgAdmin4 (version 4.5) and while I'm able to create tables in a new database, if I attempt to SELECT or INSERT into those tables I always get a message 'Not connected to the server or the connection to the server has been closed.' in the "Messages" tab in pgAdmin 4. If, however, I open the SQL Shell I can execute a SELECT on a table that I've attempted to insert into from pgAdmin and it shows that the rows have been inserted.
itsmatt (133 rep)
Apr 25, 2019, 12:30 PM • Last activity: Aug 6, 2025, 12:05 PM
1 votes
1 answers
2219 views
Postgres SQL on AWS RDS how to grant permission for schema, select on tables to user
I have logged in as a superuser and created a readonly user. When I run the GRANT command, it says that the schema does not exist. I am logging in as root:

psql -h myhost_name -U root -d postgres -W

grant usage on schema autorsid to readonly;
ERROR: schema "autorsid" does not exist

The schema does exist, but its owner is not root. When you are a superuser, shouldn't you be able to grant permissions on any database/schema/table?
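Because schemas live inside a single database, a quick sanity check is to confirm the session is connected to the database that actually contains autorsid before granting. A minimal sketch (names taken from the question; the alternate database name is a placeholder):

SELECT current_database();

SELECT nspname, pg_get_userbyid(nspowner) AS owner
FROM pg_namespace
WHERE nspname = 'autorsid';

-- If the schema lives in another database, reconnect there first, e.g.
--   psql -h myhost_name -U root -d the_actual_db -W
GRANT USAGE ON SCHEMA autorsid TO readonly;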
Sanjay (71 rep)
May 29, 2018, 06:19 PM • Last activity: Aug 6, 2025, 10:02 AM
0 votes
1 answers
4892 views
Insert data into multiple tables connected by foreign key PostgreSQL
I have two tables which are linked by a foreign key.
CREATE TABLE clients (
    id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name character varying,
    phone character varying,
    contact text
);
CREATE TABLE address (
    id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    line1 character varying,
    city character varying,
    state text,
    zip character varying,
    uuid integer REFERENCES clients(id) ON DELETE CASCADE ON UPDATE CASCADE DEFERRABLE INITIALLY DEFERRED
);
My lambda server receives data which should be put into both tables. However, the queries I am trying to run are inserting a NULL value into the uuid foreign key column of my address table.
BEGIN; 
INSERT INTO clients (name, phone, contact) 
VALUES ('Trevor J','6077730',''); 

INSERT INTO address (line1, city, state, zip) 
VALUES ('82 Haon Street', 'Gasto', 'OH', 51113);
COMMIT;
What am I missing to get the proper foreign key in my table?
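For reference, the usual pattern is to capture the generated clients.id with RETURNING and feed it into the second insert, for example via a data-modifying CTE. A minimal sketch against the tables above, using the values from the question:

BEGIN;
WITH new_client AS (
    INSERT INTO clients (name, phone, contact)
    VALUES ('Trevor J', '6077730', '')
    RETURNING id
)
INSERT INTO address (line1, city, state, zip, uuid)
SELECT '82 Haon Street', 'Gasto', 'OH', '51113', id
FROM new_client;
COMMIT;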
tdammon (115 rep)
Sep 10, 2022, 03:13 PM • Last activity: Aug 6, 2025, 05:00 AM
1 votes
1 answers
3223 views
Getting by list of ids: `unnest() JOIN` vs `= ANY()`
Here I have an array of 42 ids from a table of 800,000 and I want to look up the rows with those ids. I know of two ways to do this (besides creating a lot of parameters -- that one is harder to write -- assume it's off the table):

# unnest() JOIN

EXPLAIN ANALYZE
SELECT *
FROM unnest('{234095,2366560,234391,234480,239511,237504,239629,247756,239142,232372,2366563,238511,242756,243396,235301,240990,233823,234537,246556,240453,245295,238909,236157,233724,236514,245384,249595,239953,233381,232180,232242,232990,237354,2366572,232066,242180,239958,239036,2366578,2166609,2197037,236542}'::int[]) AS d(id)
JOIN phi.patient AS p ON d.id = p.id;

 Nested Loop  (cost=0.43..345.25 rows=100 width=133) (actual time=0.049..0.326 rows=42 loops=1)
   ->  Function Scan on unnest d  (cost=0.00..1.00 rows=100 width=4) (actual time=0.015..0.025 rows=42 loops=1)
   ->  Index Scan using patient_pkey on patient p  (cost=0.42..3.44 rows=1 width=129) (actual time=0.006..0.006 rows=1 loops=42)
         Index Cond: (id = d.id)
 Planning Time: 0.200 ms
 Execution Time: 0.374 ms
(6 rows)

# = ANY()

EXPLAIN ANALYZE
SELECT *
FROM phi.patient
WHERE id = ANY('{234095,2366560,234391,234480,239511,237504,239629,247756,239142,232372,2366563,238511,242756,243396,235301,240990,233823,234537,246556,240453,245295,238909,236157,233724,236514,245384,249595,239953,233381,232180,232242,232990,237354,2366572,232066,242180,239958,239036,2366578,2166609,2197037,236542}');

 Index Scan using patient_pkey on patient  (cost=0.42..119.51 rows=42 width=129) (actual time=0.049..0.258 rows=42 loops=1)
   Index Cond: (id = ANY ('{234095,2366560,234391,234480,239511,237504,239629,247756,239142,232372,2366563,238511,242756,243396,235301,240990,233823,234537,246556,240453,245295,238909,236157,233724,236514,245384,249595,239953,233381,232180,232242,232990,237354,2366572,232066,242180,239958,239036,2366578,2166609,2197037,236542}'::bigint[]))
 Planning Time: 0.162 ms
 Execution Time: 0.295 ms
(4 rows)

---

In this simple testing and testing with ~700 ids, I see no discernible difference between these approaches. But they produce different query plans. And in the past, I remember encountering performance differences (though unfortunately I can't seem to repro them now).

Are these two approaches materially different? Is there a reason to prefer one over the other? One thing that I thought of was that unnest can work for a composite key lookup (searching by two indexed fields, not just one).
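A sketch of that composite-key variant, purely illustrative: the phi.visit table and its columns are hypothetical; the point is that unnest() accepts several arrays and emits one column per array, which is harder to express with = ANY():

EXPLAIN ANALYZE
SELECT p.*
FROM unnest('{101,102,103}'::int[], '{2020,2021,2022}'::int[]) AS d(site_id, visit_year)
JOIN phi.visit AS p
  ON p.site_id = d.site_id
 AND p.visit_year = d.visit_year;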
Paul Draper (800 rep)
Feb 28, 2020, 05:01 PM • Last activity: Aug 6, 2025, 03:10 AM
4 votes
1 answers
153 views
Postgresql: Why is a join or subquery so much slower than a literal in a where clause?
I'm using PostgreSQL 14.17. My database schema has two tables:
Table "public.log_records"
    Column    |           Type           | Collation | Nullable |           Default            | Storage  | Compression | Stats target | Description
--------------+--------------------------+-----------+----------+------------------------------+----------+-------------+--------------+-------------
 id           | bigint                   |           | not null | generated always as identity | plain    |             |              |
 log_id       | bigint                   |           |          |                              | plain    |             |              |
 payload      | jsonb                    |           | not null |                              | extended |             |              |
 created_at   | timestamp with time zone |           |          | CURRENT_TIMESTAMP            | plain    |             |              |
Indexes:
    "log_records_pkey" PRIMARY KEY, btree (id)
    "idx_log_records_log_created" btree (log_id, created_at)
    "idx_log_records_log_id" btree (log_id)
Foreign-key constraints:
    "log_records_log_id_fkey" FOREIGN KEY (log_id) REFERENCES logs(id)
Access method: heap

                                                                  Table "public.logs"
      Column      |           Type           | Collation | Nullable |           Default            | Storage  | Compression | Stats target | Description
------------------+--------------------------+-----------+----------+------------------------------+----------+-------------+--------------+-------------
 id               | bigint                   |           | not null | generated always as identity | plain    |             |              |
 name             | character varying(255)   |           | not null |                              | extended |             |              |
 retention_period | integer                  |           | not null |                              | plain    |             |              |
 created_at       | timestamp with time zone |           |          | CURRENT_TIMESTAMP            | plain    |             |              |
Indexes:
    "logs_pkey" PRIMARY KEY, btree (id)
    "idx_logs_id_name" btree (id, name)
    "logs_name_key" UNIQUE CONSTRAINT, btree (name)
Referenced by:
    TABLE "log_records" CONSTRAINT "log_records_log_id_fkey" FOREIGN KEY (log_id) REFERENCES logs(id)
Access method: heap
There are a small number of records in logs and not all of the log_ids are used in the log_records table:
=# select count(*) from logs;
 count
-------
    13
(1 row)

=# select count(*) from log_records;
  count
----------
 14552534
(1 row)

=# select count(distinct log_id) from log_records;
 count
-------
     8
(1 row)
I want to query the log_records table for log records belonging to a named log. However, if I pass the name of a log that has not got any records in the log_records table the query is very slow:
# explain analyze 
SELECT e.id, e.payload, e.created_at, e.headers 
FROM log_records e JOIN logs l ON l.id = e.log_id 
WHERE l.name = 'Log1' 
ORDER BY e.id 
LIMIT 100;
                                                                             QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.43..244.61 rows=100 width=395) (actual time=7371.696..7371.698 rows=0 loops=1)
   ->  Nested Loop  (cost=0.43..2729140.04 rows=1117680 width=395) (actual time=7371.696..7371.697 rows=0 loops=1)
         Join Filter: (e.log_id = l.id)
         Rows Removed by Join Filter: 14552570
         ->  Index Scan using log_records_pkey on log_records e  (cost=0.43..2511191.29 rows=14529839 width=403) (actual time=0.012..4955.006 rows=14552570 loops=1)
         ->  Materialize  (cost=0.00..1.17 rows=1 width=8) (actual time=0.000..0.000 rows=1 loops=14552570)
               ->  Seq Scan on logs l  (cost=0.00..1.16 rows=1 width=8) (actual time=0.005..0.005 rows=1 loops=1)
                     Filter: ((name)::text = 'Log1'::text)
                     Rows Removed by Filter: 12
 Planning Time: 0.233 ms
 Execution Time: 7371.717 ms
(11 rows)
I get a different plan and better but still poor performance for a scalar subquery:
# explain analyze 
SELECT e.id, e.payload, e.created_at, e.headers
FROM log_records e 
WHERE e.log_id = 
  (SELECT id FROM logs WHERE name = 'Log1') 
ORDER BY e.id 
LIMIT 100;
                                                                        QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1.60..141.86 rows=100 width=395) (actual time=3686.420..3686.421 rows=0 loops=1)
   InitPlan 1 (returns $0)
     ->  Seq Scan on logs  (cost=0.00..1.16 rows=1 width=8) (actual time=0.007..0.008 rows=1 loops=1)
           Filter: ((name)::text = 'Log1'::text)
           Rows Removed by Filter: 12
   ->  Index Scan using log_records_pkey on log_records e  (cost=0.43..2547511.66 rows=1816230 width=395) (actual time=3686.419..3686.419 rows=0 loops=1)
         Filter: (log_id = $0)
         Rows Removed by Filter: 14552594
 Planning Time: 0.318 ms
 Execution Time: 3686.487 ms
(10 rows)
However if I hardcode the log_id as a literal I get good performance:
# explain analyze 
SELECT e.id, e.payload, e.created_at, e.headers 
FROM log_records e 
WHERE e.log_id = 13 
ORDER BY e.id 
LIMIT 100;
                                                                     QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=4.08..4.09 rows=1 width=395) (actual time=0.015..0.015 rows=0 loops=1)
   ->  Sort  (cost=4.08..4.09 rows=1 width=395) (actual time=0.014..0.015 rows=0 loops=1)
         Sort Key: id
         Sort Method: quicksort  Memory: 25kB
         ->  Index Scan using idx_log_records_log_id on log_records e  (cost=0.43..4.07 rows=1 width=395) (actual time=0.011..0.011 rows=0 loops=1)
               Index Cond: (log_id = 13)
 Planning Time: 0.098 ms
 Execution Time: 0.028 ms
(8 rows)
Why can't I get this type of performance from the joins or subquery? Why is the subquery faster than the join when most advice would say don't use subqueries?
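Not part of the question, but a common rewrite for this plan shape is a LATERAL join that drives the lookup from logs, so the per-log scan can use the log_id index just as the hard-coded literal does. A hedged sketch (column list kept from the question, including headers):

SELECT e.id, e.payload, e.created_at, e.headers
FROM logs l
CROSS JOIN LATERAL (
    SELECT r.id, r.payload, r.created_at, r.headers
    FROM log_records r
    WHERE r.log_id = l.id
    ORDER BY r.id
    LIMIT 100
) AS e
WHERE l.name = 'Log1'
ORDER BY e.id
LIMIT 100;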
willn-cpx (43 rep)
Aug 1, 2025, 10:24 AM • Last activity: Aug 6, 2025, 12:50 AM
1 votes
1 answers
1770 views
Multiple rows vs Single row with JSON
What is the most optimal way to do this?

- On the TV Show page I need to get the progress of all episodes in that TV Show
- On the Homepage I need to get the continue_watching
- On the Episode page I need to get the progress of the episode
- Some TV Shows have more than 100 episodes

Ex:
USER_ID | SHOW_ID | LAST_SEEN_ID | LAST_SEEN_TIME
10      | 5       | 12           | ---

USER_ID | EPISODE_ID | PROGRESS
10      | 10         | 15
10      | 11         | 20
10      | 12         | 95
OR
USER_ID | SHOW_ID | PROGRESS                     | LAST_SEEN_ID | LAST_SEEN_TIME
10      | 5       | {"10":15, "11":20, "12": 95} | 12           | ---
In PostgreSQL I can get the individual progress with: SELECT progress->'10' AS progress... I think that the first method is the best, but it will create a lot of rows; could that make the DB slower?
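For what it's worth, a sketch of the first (normalized) variant with a composite primary key; the table and column names here are hypothetical renderings of the layout above, and the show lookup assumes an episodes table with id and show_id:

CREATE TABLE episode_progress (
    user_id    bigint  NOT NULL,
    episode_id bigint  NOT NULL,
    progress   integer NOT NULL,
    PRIMARY KEY (user_id, episode_id)   -- one row per user/episode, indexed lookup
);

-- Progress of a single episode:
SELECT progress FROM episode_progress WHERE user_id = 10 AND episode_id = 12;

-- All progress for one show:
SELECT ep.episode_id, ep.progress
FROM episode_progress ep
JOIN episodes e ON e.id = ep.episode_id
WHERE ep.user_id = 10 AND e.show_id = 5;

With that key, both lookups are index scans, so the extra rows by themselves are usually not what makes things slow.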
Angel Vega (11 rep)
Apr 1, 2020, 11:10 PM • Last activity: Aug 5, 2025, 08:04 PM
0 votes
1 answers
2038 views
PostgreSQL get json key name after comparing values of keys
I have a table with a column of type json named "demographic". I want to compare keys (predefined by me) by their values, and the result should be the name of the key rather than the value, which I will later save into another column. The JSON column whose values I want to compare looks like this:
{
   "he":{
      "he_vakiy":29384,
      "he_naiset":14803,
      "he_miehet":14581,
      "he_kika":1485,
      "he_0_2":850,
      "he_3_6"
   }
}
I want to compare, for example, "he_naiset" and "he_miehet", and whichever has the highest value should return the name of the key. I've tried something like this:

select greatest(demographics->'he'->>'he_miehet', demographics->'he'->>'he_naiset') as greatest
from demographics;

This compares the values and returns the highest value, but I don't know how to get the name of the key. I also tried something like this:

with cte as (
    select *,
           genre.key as genre,
           row_number() over (partition by id order by value desc) as ord
    from base.maakunta
    cross join lateral json_each_text(base.maakunta.demographic->'he') genre
)
select genre, value, ord
from cte
where ord = 1;

But this one compares all the keys inside the JSON.
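When only a fixed pair of keys needs comparing, one way to get the key name back is a plain CASE on the two values. A sketch using the names from the question (which mixes demographics and base.maakunta.demographic; adjust to whichever is the real table and column):

SELECT id,
       CASE
           WHEN (demographic->'he'->>'he_naiset')::int >= (demographic->'he'->>'he_miehet')::int
               THEN 'he_naiset'
           ELSE 'he_miehet'
       END AS greatest_key
FROM base.maakunta;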
eko (101 rep)
Mar 20, 2022, 02:48 PM • Last activity: Aug 5, 2025, 05:03 PM
0 votes
1 answers
67 views
Cannot import a database dump on Postgres 13.14+ while it loads fine in Postgres 13.13
I'm experiencing a problem with loading a PostgreSQL backup file (SQL format). The SQL file has a function that is defined after another function where it's used. PostgreSQL 13.13 can handle such a backup file, while PostgreSQL 13.14 fails to load it:
ERROR:  function public.label_id_constant() does not exist
LINE 1:  SELECT public.uuid_increment($1, public.label_id_constant()...
                                          ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
QUERY:   SELECT public.uuid_increment($1, public.label_id_constant()) 
CONTEXT:  SQL function "label_id" during inlining
I've double-checked if there is SET check_function_bodies = false; in the dump file. I also searched if I could disable the inlining during the dump load, but still no success. I've distilled the dump file into a minimal reproducible example and attached it as a script to this ticket. If anybody experienced anything similar, please help.
#!/bin/env bash

DUMP_FILE=$(mktemp)
trap "rm -f $DUMP_FILE" EXIT

cat - > "$DUMP_FILE" <<'EOF'
-- (distilled dump contents omitted here)
EOF

echo "Testing with postgres 13.13" >&2

docker run -d \
       --name postgres-13.13 \
       -e POSTGRES_HOST_AUTH_METHOD=trust \
       -p 5432:5432 \
       postgres:13.13


echo "Waiting for postgres to start" >&2
while ! docker exec postgres-13.13 pg_isready -h localhost -U postgres -q; do
    sleep 1
done

cat "$DUMP_FILE" | psql -h localhost -U postgres -v ON_ERROR_STOP=1 --port 5432 -e -1 && echo "******** Success ********" || echo "******** Failure ********"


docker stop postgres-13.13
docker rm postgres-13.13

echo "Testing with postgres 13.14" >&2

docker run -d \
       --name postgres-13.14 \
       -e POSTGRES_HOST_AUTH_METHOD=trust \
       -p 5432:5432 \
       postgres:13.14

echo "Waiting for postgres to start" >&2
while ! docker exec postgres-13.14 pg_isready -h localhost -U postgres -q; do
    sleep 1
done

cat "$DUMP_FILE" | psql -h localhost -U postgres -v ON_ERROR_STOP=1 --port 5432 -e -1 && echo "******** Success ********" || echo "******** Failure ********"

docker stop postgres-13.14
docker rm postgres-13.14
--------

UPD: What I've already tried: setting SET jit = off; doesn't fix the problem.

UPD2:

1. I tried exporting our database using pg_dump instead of the CloudSQL export API. It gave me the same error.
2. I tried to export the database, load it into 13.13, then export it from 13.13 and load it into 13.14, but the error was the same again.

---

UPD: I successfully migrated the DB with the following script: https://paste.ubuntu.com/p/kgGGQzNcgp/

After migrating to PostgreSQL 17.5, the issue persists. If I dump the DB with pg_dump, I cannot load the dump; it fails with the same error.
Renat (101 rep)
Jan 21, 2025, 08:12 AM • Last activity: Aug 5, 2025, 01:46 PM
0 votes
1 answers
727 views
Restore backup global using pgadmin
I created a globals backup by right-clicking "Postgres 15" in pgAdmin (screenshot of the context menu omitted). It generates a .sql file. How do I restore the globals backup in pgAdmin? I tried searching on Google but didn't find the answer.
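One note, based on how globals-only dumps generally work rather than on anything pgAdmin-specific: the generated file is a plain SQL script (roles, tablespaces), so it is normally replayed with psql against the maintenance database, or by opening the file in pgAdmin's Query Tool and executing it. A sketch with a placeholder file name:

psql -h localhost -p 5432 -U postgres -d postgres -f globals_backup.sql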
LLL (101 rep)
Jun 21, 2023, 10:35 AM • Last activity: Aug 5, 2025, 05:01 AM
0 votes
1 answers
142 views
How can I manually backup a large Azure Flexible Postgres database?
I have a large (3TB+) single database on an Azure Postgres Flexible Server. This is a managed service, so it takes backups automatically and the frequency can be adjusted. As a disaster recovery/emergency option, I'd like to periodically take a snapshot of the database separately from these incremental backups and store it somewhere else. I don't need ongoing connectivity for incremental backups (the current managed system does that); I just want a periodic backup that'd be restorable to a new server.

I would just choose Azure's own Vaulted Backup option, which provides enough separation for this purpose. But according to the docs:

> Vaulted backups are supported for server size <= 1 TB. If backup is configured on server size larger than 1 TB, the backup operation fails.

So I'm looking for other options. I checked out Barman, which looks like it should be perfect. But the managed Postgres Flexible Server doesn't support ssh/rsync access, and I'm getting conflicting info on whether pg_basebackup is supported -- seems unlikely, along with other physical replication options.

I can't use Azure's own tools to back up my server; it's too big. I can't use external tools; they can't have shell access. The last option is just pg_dump, which in my experience will take days to finish and may need to be performed on a point-in-time restore so that performance isn't affected. I'd prefer to create and restore from a physical backup rather than a logical one, so this is a last resort.

Is there a better option for a managed server this size than pg_dump?
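If pg_dump does end up being the fallback, a hedged sketch of how it is usually run for very large databases: directory format so --jobs can dump tables in parallel (server name, database, and paths below are placeholders):

pg_dump "host=myserver.postgres.database.azure.com port=5432 dbname=mydb user=myadmin sslmode=require" \
    --format=directory --jobs=8 --compress=5 \
    --file=/backups/mydb_dump

# Restore later, also in parallel:
pg_restore --jobs=8 --dbname=mydb_new /backups/mydb_dump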
zach (1 rep)
May 28, 2025, 11:05 PM • Last activity: Aug 5, 2025, 12:06 AM
1 votes
1 answers
139 views
Error "could not reserve shared memory region ... error code 487" on Postgresql 17.5 in Windows
In PG 17.5 on Windows, I'm seeing errors in the log that look like:

2025-06-10 12:53:44.548 EDT LOG: could not reserve shared memory region (addr=0000022916760000) for child 00000000000014C4: error code 487
2025-06-10 13:09:44.667 EDT LOG: could not reserve shared memory region (addr=0000022916760000) for child 00000000000014A8: error code 487
2025-06-10 13:14:44.797 EDT LOG: could not reserve shared memory region (addr=0000022916760000) for child 00000000000014CC: error code 487

I was able to duplicate the issue fairly easily in two unconnected/unrelated environments. The issue appears to be triggered/exacerbated by higher settings of shared_buffers in postgresql.conf.

Steps to replicate:

1. Install PG 17.5 on a Windows machine from the EDB community release.
2. In the default postgresql.conf that is created by the installer, change shared_buffers as follows: on an 8 GB laptop (Windows 11), I changed shared_buffers to 3GB. On a 768 GB Dell server (Windows Server 2019), changing shared_buffers to 256GB showed the error message quite often, but with shared_buffers=128GB it took a couple of minutes for the message to appear.
3. Start the cluster with pg_ctl and watch the output. The higher the setting of shared_buffers, the more frequently the messages appear. On the 8 GB laptop, it took over 30 minutes to see the first message. On the Dell server I was getting those messages sometimes even before the server was ready to accept connections.

Notes:

1. The same hardware and operating system runs PG 13.18 without those error messages.
2. There are no active connections or queries running during my testing.

I set the logging to Debug5 and here are some associated messages. PID 19048 is the postmaster; not sure what PID 14992 is doing:

2025-06-10 10:11:47.402 EDT [] DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2025-06-10 10:11:47.402 EDT [] DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2025-06-10 10:11:47.402 EDT [] DEBUG: postmaster received pmsignal signal
2025-06-10 10:11:47.402 EDT [] DEBUG: mapped win32 error code 2 to 2
2025-06-10 10:11:47.403 EDT [] DEBUG: mapped win32 error code 2 to 2
2025-06-10 10:11:47.406 EDT [] LOG: could not reserve shared memory region (addr=000001A228F90000) for child 0000000000002168: error code 487
sevzas (373 rep)
Jun 10, 2025, 06:06 PM • Last activity: Aug 4, 2025, 11:07 PM
0 votes
1 answers
1227 views
Use ENUM in array of objects
Is it possible to set up an ARRAY column with objects/JSON on a TABLE, forcing a parameter of the objects in the ARRAY to adhere to an ENUM, while keeping that object parameter unique?

Data examples:

ENUM val1, val2, val3

[{p1: val1, p2: 'something'}, {p1: val2, p2: 'something'}]      <-- valid
[{p1: val1, p2: 'something'}, {p1: val4, p2: 'something'}]      <-- not valid, val4 not an ENUM value
[{p1: val1, p2: 'something'}, {p1: val1, p2: 'something else'}] <-- not valid, p1 not unique

If it is possible, how would I go about setting up the column, using PostgreSQL and Sequelize?
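Not the array-column setup the question asks for, but the structure PostgreSQL can enforce natively is a child table with one row per array element; a sketch with hypothetical names:

CREATE TYPE p1_enum AS ENUM ('val1', 'val2', 'val3');

CREATE TABLE parent (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY
);

CREATE TABLE parent_items (
    parent_id bigint  NOT NULL REFERENCES parent(id) ON DELETE CASCADE,
    p1        p1_enum NOT NULL,          -- must be one of the ENUM values
    p2        text,
    PRIMARY KEY (parent_id, p1)          -- p1 unique within each parent row
);

Enforcing the same rules inside a JSON array column would need a CHECK constraint backed by a validation function, which, as far as I know, Sequelize will not generate for you.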
David Gustavsson (235 rep)
Sep 12, 2017, 02:34 PM • Last activity: Aug 4, 2025, 10:02 PM
2 votes
1 answers
2256 views
Having a subquery somehow do the same group by as the outer query
I'm pretty decent with SQL, but it's not my main area of expertise. A colleague asked me a stumper of a question the other day, and I was wondering if you could provide assistance.

Basically we're doing an overall count and grouping those counts by day, but we'd also like to sub-count a specific kind of row within each day. This is what I have so far, which isn't quite right:

select d.start_date,
       count(a.status),
       (
           select count(checked_ins) as checked_ins
           from (
               select distinct(a.id) as checked_ins
               from schema1.employee_se_assignment a, schema2.se d
               where a.se_id = d.id
                 and a.status not in ('unassigned')
                 and d.customer_name in ('Test Client')
                 and a.status = 'checked_in'
           ) src
       )
from schema1.employee_se_assignment a, schema2.se d
where a.se_id = d.id
  and a.status not in ('unassigned')
  and d.customer_name in ('Test Client')
  and a.status = 'checked_in'
group by d.start_date
order by d.start_date;

That yields results that are almost what he wants, but the subquery is clearly returning the total for a certain status across all days, when we want it broken down by day. (The example result screenshot is omitted.)
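For comparison, a sketch of the usual single-pass approach with an aggregate FILTER clause, using the same tables as the query above (the outer a.status = 'checked_in' filter is dropped so the per-day total and the checked-in subset can differ; whether DISTINCT on a.id is needed depends on the data):

SELECT d.start_date,
       count(a.status)                                              AS total,
       count(DISTINCT a.id) FILTER (WHERE a.status = 'checked_in')  AS checked_ins
FROM schema1.employee_se_assignment a
JOIN schema2.se d ON a.se_id = d.id
WHERE a.status NOT IN ('unassigned')
  AND d.customer_name IN ('Test Client')
GROUP BY d.start_date
ORDER BY d.start_date;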
die Wolken (121 rep)
May 20, 2019, 09:47 AM • Last activity: Aug 4, 2025, 07:05 PM
0 votes
0 answers
24 views
Drizzle migrations don't seem to be repeatable
In our Node.js backend we are using Drizzle ORM with Postgres, but our migrations are failing for some of our customers' databases, though not for all of them. Also, when I create a new empty database and run the migrations, they succeed the first time, but afterwards they start failing too:

error: company_function_id referenced in foreign key constraint does not exist

This is a part of our first migration file, 0000_nosy_mastermind.sql:
...

--> statement-breakpoint
CREATE TABLE IF NOT EXISTS "employees" (
	"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
	"user_id" uuid,
	"first_name" varchar(255) NOT NULL,
	"last_name" varchar(255) NOT NULL,
	"company_email" text,
	"telephone" varchar(255),
	"bank_account_number" varchar(255),
	"extra_info" text,
	"date_employment_start" timestamp,
	"date_employment_end" timestamp,
	"company_id" uuid NOT NULL,
	"company_function_id" uuid,
               ....
    CONSTRAINT "employees_id_unique" UNIQUE("id")
);
--> statement-breakpoint
...
--> statement-breakpoint
CREATE TABLE IF NOT EXISTS "company_functions" (
	"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
	"name" varchar(255) NOT NULL,
	"description" text,
	"department_id" uuid,
	"created_at" timestamp DEFAULT now(),
	"updated_at" timestamp DEFAULT now(),
	"company_id" uuid NOT NULL,
	CONSTRAINT "company_functions_id_unique" UNIQUE("id"),
	CONSTRAINT "unique_company_function_name" UNIQUE("company_id","name","department_id")
);
--> statement-breakpoint
...
--> statement-breakpoint
DO $$ BEGIN
 ALTER TABLE "employees" ADD CONSTRAINT "employees_company_function_id_company_functions_id_fk" FOREIGN KEY ("company_function_id") REFERENCES "public"."company_functions"("id") ON DELETE cascade ON UPDATE no action;
EXCEPTION
 WHEN duplicate_object THEN null;
END $$;
--> statement-breakpoint
...
The second part is where the migration fails (the Docker Postgres output shows the error and its cause). Now, I'm guessing this failure is due to this part of 0078_concerned_micromacro.sql:
DO $$
BEGIN
  ALTER TABLE "employees" DROP COLUMN IF EXISTS "company_function_id";
EXCEPTION
  WHEN undefined_column THEN NULL;
END $$;
But what I find strange is that these SQL statements should not be able to run twice. The first time, it makes sense. But on a second run the table will not be recreated (due to the IF NOT EXISTS part), so company_function_id will already be gone, yet the migration still tries to add a foreign key constraint on that column, which no longer exists. How is this supposed to be compatible?
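One thing that may be worth checking, as an assumption about the setup rather than something stated above: drizzle-kit records applied migrations in a journal table (by default drizzle.__drizzle_migrations, though the schema and table name can be changed in the migrate config). If that journal is missing or out of sync in a customer database, already-applied files are replayed against a schema where the column has since been dropped, which would produce exactly this error:

-- Inspect which migrations Postgres thinks have already run (default location assumed):
SELECT id, hash, created_at
FROM drizzle.__drizzle_migrations
ORDER BY created_at;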
michielQaro (1 rep)
Aug 4, 2025, 02:01 PM
0 votes
1 answers
137 views
Postgis distance filter does not work as expected
I have an issue with a simple PostGIS query. Here is the query:

SELECT id, location,
       ST_Distance_Sphere(location, ST_MakePoint(21.012228699999998, 52.2296756)) AS distance
FROM pins
WHERE ST_Distance_Sphere(location, ST_MakePoint(21.012228699999998, 52.2296756)) > 6008000.1836031
ORDER BY distance ASC, id ASC
LIMIT 11;

And here is the result:

 id  | location                                           | distance
-----+----------------------------------------------------+------------------
  58 | 0101000020E6100000E253B4CB61C1E13FB7ED0CC45974EF3F | 6008000.1836031
 162 | 0101000020E6100000DE133C027D95E03F3C15FD84A6BAEF3F | 6008157.46639754
 186 | 0101000020E6100000D339D99403BFE33F541C1E242AD5EE3F | 6008257.4084038
  55 | 0101000020E6100000B8042A79A500E63F775E82BA1908EE3F | 6008878.66960698
  98 | 0101000020E6100000CD48F884CD1BED3FAD51BA15F017EC3F | 6008923.81556693
  56 | 0101000020E6100000EECE76EEFE68E63F1681C811377DED3F | 6010325.52640522
 195 | 0101000020E6100000EB87B04B4DC0E33F0AEAE22BECADED3F | 6012118.48976013
 100 | 0101000020E6100000603039EEF27FD83FE70E98797AAEEF3F | 6012318.05377066
 108 | 0101000020E610000045BA52BB3EA0E23F4B96A28081D5ED3F | 6012631.63019378
 117 | 0101000020E61000003ABE6BC7FAD9D43F4D1CF4D130EEEF3F | 6013175.7742802
  91 | 0101000020E610000093B62A082965ED3F911CF0DA8FBFEA3F | 6013178.70288832

For some reason the row with id = 58 is returned and its distance is 6008000.1836031, despite my WHERE filter:

ST_Distance_Sphere(location, ST_MakePoint(21.012228699999998, 52.2296756)) > 6008000.1836031

I guess PostGIS calculates the distance with greater precision? How can I avoid this situation?
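The displayed distance is a rounded rendering of a double precision value, so the stored/computed number can be a hair above the literal pasted back into the filter. A hedged sketch of one workaround, with an arbitrarily chosen epsilon (rounding both sides to the same number of decimals is another option):

SELECT id, location,
       ST_Distance_Sphere(location, ST_MakePoint(21.012228699999998, 52.2296756)) AS distance
FROM pins
WHERE ST_Distance_Sphere(location, ST_MakePoint(21.012228699999998, 52.2296756))
      > 6008000.1836031 + 0.000001
ORDER BY distance ASC, id ASC
LIMIT 11;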
user606521 (1415 rep)
Aug 25, 2016, 08:49 AM • Last activity: Aug 4, 2025, 11:05 AM
0 votes
2 answers
145 views
Point in time Postgres restore
I was trying to do a point-in-time restore. I had placed my previously backed-up WAL file in the directory that the restore_command in postgresql.conf points to (/var/lib/postgresql/16/main/pg_wal/wal_archive); however, the changes from my WAL file have not been applied:

listen_addresses = '*'
wal_level = replica
max_connections = 100
archive_mode = on
archive_command = 'test ! -f /var/lib/postgresql/16/main/pg_wal/wal_archive/%f && cp %p /var/lib/postgresql/16/main/pg_wal/wal_archive/%f'
restore_command = 'cp /var/lib/postgresql/16/main/pg_wal/wal_archive/%f %p'
#max_wal_senders = 10
#wal_log_hints = on

And I have created an empty recovery.signal file. Here are the logs from restarting Postgres (my backed-up file is named 000000010000000000000039):

2024-02-23 17:25:02.199 UTC LOG: starting PostgreSQL 16.2 (Debian 16.2-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
2024-02-23 17:25:02.199 UTC LOG: listening on IPv4 address "0.0.0.0", port 5432
2024-02-23 17:25:02.199 UTC LOG: listening on IPv6 address "::", port 5432
2024-02-23 17:25:02.200 UTC LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2024-02-23 17:25:02.204 UTC LOG: database system was shut down at 2024-02-23 17:25:02 UTC
cp: cannot stat '/var/lib/postgresql/16/main/pg_wal/wal_archive/00000003.history': No such file or directory
2024-02-23 17:25:02.207 UTC LOG: starting archive recovery
2024-02-23 17:25:02.209 UTC LOG: restored log file "00000002.history" from archive
cp: cannot stat '/var/lib/postgresql/16/main/pg_wal/wal_archive/00000002000000000000003A': No such file or directory
2024-02-23 17:25:02.216 UTC LOG: consistent recovery state reached at 0/3A0000A0
2024-02-23 17:25:02.216 UTC LOG: invalid record length at 0/3A0000A0: expected at least 24, got 0
2024-02-23 17:25:02.216 UTC LOG: redo is not required
2024-02-23 17:25:02.217 UTC LOG: database system is ready to accept read-only connections
cp: cannot stat '/var/lib/postgresql/16/main/pg_wal/wal_archive/00000002000000000000003A': No such file or directory
cp: cannot stat '/var/lib/postgresql/16/main/pg_wal/wal_archive/00000003.history': No such file or directory
2024-02-23 17:25:02.223 UTC LOG: selected new timeline ID: 3
2024-02-23 17:25:02.263 UTC LOG: restored log file "00000002.history" from archive
2024-02-23 17:25:02.264 UTC LOG: archive recovery complete
2024-02-23 17:25:02.265 UTC LOG: checkpoint starting: end-of-recovery immediate wait
2024-02-23 17:25:02.270 UTC LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.001 s, total=0.006 s; sync files=2, longest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB; lsn=0/3A0000A0, redo lsn=0/3A0000A0
2024-02-23 17:25:02.275 UTC LOG: database system is ready to accept connections
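For reference, a sketch of what a point-in-time restore normally needs on top of the archive settings above: a base backup (e.g. from pg_basebackup) restored into the data directory, the empty recovery.signal, and an explicit recovery target; the timestamp below is a placeholder:

restore_command         = 'cp /var/lib/postgresql/16/main/pg_wal/wal_archive/%f %p'
recovery_target_time    = '2024-02-23 17:00:00+00'    # placeholder target
recovery_target_action  = 'promote'

Without restoring an older base backup first, the cluster starts from its own latest state, which is consistent with the "redo is not required" line in the log.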
Руслан Пилипюк (1 rep)
Feb 23, 2024, 05:41 PM • Last activity: Aug 4, 2025, 10:04 AM
0 votes
0 answers
9 views
pgAdmin4 Web - Query Tool Frozen/Blank
**Problem:** I've installed the latest web version of pgAdmin4 (9.6), using the official pgAdmin Docker image dpage/pgadmin4, and all seems to be working fine except the main tool, the Query Tool: when I click the "query tool" icon, it just keeps opening blank tabs (which I can't even close).

**Tried:**
- logging off
- restarting the image
- resetting the layout
- reinstalling 9.6
- trying 9.5
- trying 9.1

**Notes:**
- I've tested the DB and connection string of the same DB with the desktop version of pgAdmin (also 9.6) and everything works, including the Query Tool, so the DB and connection string are fine; I only have issues with this Docker web version.
D4LI3N (101 rep)
Aug 4, 2025, 05:36 AM
1 votes
0 answers
31 views
Necessary to include filtered index filter column if value always NULL in PostgreSQL?
My question is about PostgreSQL. I found similar questions for MS SQL Server, but I don't know if the answers apply here. My table looks like this:

scores
======
| ID | UserID | ValidFrom  | ValidUntil | ScorePoints |
+----+--------+------------+------------+-------------+
| 1  | 1      | 2025-08-01 | 2025-08-02 | 80          |
| 2  | 1      | 2025-08-02 | NULL       | 85          |

There will be a lot of queries to find the currently valid score for a given UserID (= the row for UserID = ? AND ValidUntil IS NULL). I have a unique index like this:

CREATE UNIQUE INDEX ix_uq_scores ON scores ( userid ) WHERE validuntil IS NULL;

Or should it be:

CREATE UNIQUE INDEX ix_uq_scores ON scores ( userid, validuntil ) WHERE validuntil IS NULL;

A query might look like:

SELECT u.id, u.username, s.scorepoints
FROM users u
INNER JOIN scores s ON s.userid = u.id AND s.validuntil IS NULL
WHERE u.id = 123;

My filtered index will only include rows where validuntil **IS** NULL. So do I have to add this constant NULL value to the index tuple?
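A sketch of how to check which variant the planner can use, assuming the schema above: as long as the query repeats the index predicate (validuntil IS NULL), the single-column partial index is a candidate, and EXPLAIN shows whether it is chosen:

EXPLAIN
SELECT s.scorepoints
FROM scores s
WHERE s.userid = 123
  AND s.validuntil IS NULL;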
MrSnrub (181 rep)
Aug 4, 2025, 02:32 AM