
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
2 answers
193 views
Can I get results from a Postgres query that I got disconnected from?
I started an hours-long Postgres query in my `psql` shell. Then, I got disconnected from the network and had to make a new `psql` shell. If I look in `pg_stat_activity`, I see that my query is still running. Can I connect to that query and get its output?
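For anyone wondering, a short sketch of how to at least locate the orphaned backend from the new session (as far as I know, result rows are streamed to the client connection that started the query, so there is no server-side buffer a replacement session can reattach to):

```sql
-- list active statements from other backends, e.g. the orphaned one
SELECT pid, state, query_start, query
FROM   pg_stat_activity
WHERE  state = 'active'
AND    pid <> pg_backend_pid();
```

From there, pg_cancel_backend(pid) or pg_terminate_backend(pid) can stop the doomed query instead of letting it run to completion.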
davidvgalbraith (121 rep)
Sep 12, 2016, 04:54 PM • Last activity: Jun 27, 2025, 08:06 AM
3 votes
2 answers
52 views
psql subshell \! command - how to get the effect of variable interpolation or backquote expansion
The psql manual for the subshell command "\!" says:

> Unlike most other meta-commands, the entire remainder of the line is
> always taken to be the argument(s) of \!, and neither variable
> interpolation nor backquote expansion are performed in the arguments.
> The rest of the line is simply passed literally to the shell.

So it seems like it's impossible to substitute any parameters at all - the command string must be a literal. I really need to include the port number as one of the arguments to my shell command. Has anyone found a trick to get around this limitation?
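One trick that may help (a sketch; mycommand is a placeholder for your program): while \! performs no substitution, backquote expansion in the arguments of other meta-commands such as \set does interpolate psql variables since PostgreSQL 10, and psql always provides the built-in :PORT variable:

```
-- the shell command inside the backquotes sees the interpolated port;
-- its stdout is captured into the throwaway variable _out
\set _out `mycommand --port :PORT`
\echo :_out
```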
sevzas (373 rep)
Jun 16, 2025, 06:33 PM • Last activity: Jun 19, 2025, 07:17 PM
3 votes
1 answer
48 views
Postgres: limiting serialization errors on serializable transactions
Recently I learnt that the serializable isolation level in Postgres is based on optimistic locking, hence transactions can theoretically run fully concurrently as long as they do not interfere with each other (e.g., by doing read-then-write operations on the same rows). However, in practice, the algorithm for detecting such interference may produce false positives.

As written in the docs, row-level locks can be promoted to page-level locks if that is preferable from a resource-usage point of view. This increases the chances of getting a serialization error. For example, when I try to update two different rows concurrently, and it turns out that those two rows are stored on the same page and both transactions acquired a page-level lock, the one that commits later will get a serialization error.

I was trying to address this by increasing max_pred_locks_per_transaction, max_pred_locks_per_relation and max_pred_locks_per_page, but no matter how big those values are, I still get the error. For instance, let's take a look at an example that simulates 1k concurrent, independent money-transfer operations.

With the following config:
enable_seqscan = off
max_locks_per_transaction = 4192
max_pred_locks_per_transaction = 4192
max_pred_locks_per_relation = 4192
max_pred_locks_per_page = 4192
Having the following table:
create table if not exists accounts (
    id bigserial primary key,
    balance numeric(9, 2) not null
);
When I execute the following queries:
session #1> begin isolation level serializable;
session #2> begin isolation level serializable;
session #1> select id, balance from accounts where id in (select n from generate_series(1,1000,4) n);   -- select from accts with id=1, 5, 9, ..., 997
session #1> select id, balance from accounts where id in (select n+1 from generate_series(1,1000,4) n); -- select from accts with id=2, 6, 10, ..., 998
session #2> select id, balance from accounts where id in (select n from generate_series(3,1000,4) n);   -- select from accts with id=3, 7, 11, ..., 999
session #2> select id, balance from accounts where id in (select n+1 from generate_series(3,1000,4) n); -- select from accts with id=4, 8, 12, ..., 1000
session #3> select locktype, count(*) from pg_locks group by locktype;
session #1> update accounts set balance = 50 where id in (select n from generate_series(1,1000,4) n);
session #1> update accounts set balance = 50 where id in (select n+1 from generate_series(1,1000,4) n);
session #2> update accounts set balance = 50 where id in (select n from generate_series(3,1000,4) n);
session #2> update accounts set balance = 50 where id in (select n+1 from generate_series(3,1000,4) n);
session #1> commit;
session #2> commit;
Commit of transaction in session #2 gets rejected:
ERROR:  could not serialize access due to read/write dependencies among transactions
DETAIL:  Reason code: Canceled on identification as a pivot, during commit attempt.
HINT:  The transaction might succeed if retried.
And the select on pg_locks (executed in the middle, from session #3) returns the following results:

developer=# select locktype, count(*) from pg_locks group by locktype;
    locktype   | count 
---------------+-------
 page          |    14
 virtualxid    |     3
 tuple         |  1750
 transactionid |     1
 relation      |     7
(5 rows)
There are 14 page predicate locks, even though only 1750 tuple predicate locks were acquired, meaning there was still room to allocate more tuple-level locks. I understand that in certain cases a tuple lock gets promoted to a page lock, and that, as a database user, I must be prepared to retry such transactions. Nonetheless, retries increase response time, and I'm wondering if it's possible to set up the DB so that, for instance, in the case of 1k concurrent updates, it would still use tuple-level locks and not go for page-level locks. Is it required to adjust some other configuration to achieve that? Thanks in advance!
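As an aside, the pg_locks output above mixes regular locks with predicate locks; the serializable machinery's locks can be counted on their own, because they always carry mode SIReadLock:

```sql
-- count only predicate (SIRead) locks, per granularity
SELECT locktype, count(*)
FROM   pg_locks
WHERE  mode = 'SIReadLock'
GROUP  BY locktype;
```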
Dawid Kałuża (33 rep)
Jun 16, 2025, 03:11 PM • Last activity: Jun 17, 2025, 06:22 AM
3 votes
0 answers
1020 views
Use or include external schema in search_path on Redshift
# Narrative

I have a SQL script that creates a bunch of tables in a temporary schema in Redshift. I don't want to repeat the schema name a bunch of times, so I would like to do something like the following at the top of the script:

use long_external_schema_name;

My understanding is that in Redshift (inheriting from Postgres), you would do:

set search_path to '$user', public, long_external_schema_name;

However, I get the following error:

ERROR: External schema "long_external_schema_name" cannot be set in search_path

because it is an **external schema**.

# Question

Is there any equivalent way that I could stay DRY and write the external schema name only once while I create a bunch of tables in it?

# More Context

Note, I know I have lots of options in bash (arguments, sed, vim, etc.) to replace the schema name in the script, but I'm trying to do something more native to Redshift / psql.
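If the script is executed through psql, one way to stay DRY without preprocessing (a sketch; it relies on client-side variable interpolation, so it only works via psql, and I have not verified it against Redshift's external-schema quirks):

```
\set s long_external_schema_name
CREATE TABLE :"s".my_table_1 (id int);
CREATE TABLE :"s".my_table_2 (id int);
```

The :"s" form quotes the value as an identifier, so psql expands it before the statement ever reaches the server.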
combinatorist (233 rep)
Aug 20, 2019, 08:03 PM • Last activity: Jun 5, 2025, 06:34 PM
3 votes
2 answers
6202 views
How do you use psql client to connect to a postgresql ipv6 host?
# postgresql.conf
listen_addresses='::'

and

# pg_hba.conf
hostssl webdb webserver ::0/0 cert

The postgresql server is running on docker with a pingable ipv6 address of "GlobalIPv6Address": "fe80::242:ac12:2" - so no firewalls obstructing. I am using the following command to connect:

psql --command="select * from test;" -d webdb -h fe80::242:ac12:2 -p 5432 -U postgres

psql: could not connect to server: Invalid argument
        Is the server running on host "fe80::242:ac12:2" and accepting TCP/IP connections on port 5432?

Why is the host not recognized? Is it not possible to use ipv6 with psql? Also, I did not find an ssl parameter option in psql.
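One detail that likely matters here: fe80::/10 is the link-local range, and connecting to a link-local IPv6 address requires a zone (scope) ID naming the outgoing interface; without one, the OS can fail with errors like "Invalid argument". A hedged sketch (the interface name eth0 is an assumption):

```bash
psql -d webdb -U postgres -p 5432 -h fe80::242:ac12:2%eth0
```

As for SSL: psql has no separate ssl switch; it is requested through the sslmode connection parameter (e.g. psql "host=... dbname=webdb sslmode=require") or the PGSSLMODE environment variable.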
Srikanth (161 rep)
Aug 27, 2016, 11:25 AM • Last activity: Apr 25, 2025, 11:51 AM
0 votes
1 answer
421 views
What is the meaning/benefit of this command: export PGOPTIONS="-P"
I see https://www.postgresql.org/docs/current/sql-reindex.html has
$ export PGOPTIONS="-P"
$ psql broken_db
...
broken_db=> REINDEX DATABASE broken_db;
broken_db=> \q
I see the below content, but I still did not understand it:

> Alternatively, a regular server session can be started with -P
> included in its command line options. The method for doing this varies
> across clients, but in all libpq-based clients, it is possible to set
> the PGOPTIONS environment variable to -P before starting the client.
> Note that while this method does not require locking out other
> clients, it might still be wise to prevent other users from connecting
> to the damaged database until repairs have been completed.

What is the meaning/benefit of this command: export PGOPTIONS="-P"
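For context, and hedged as my reading of the server docs: PGOPTIONS is a libpq environment variable whose contents are passed to the backend as extra command-line options, and -P is the backend switch for "ignore system indexes when reading system tables (but still update them when the tables are modified)" - which is what lets REINDEX rebuild a corrupted system index instead of trusting it. The same effect can be spelled as a GUC:

```bash
# pass -P (ignore system indexes on reads) to the backend at session start
export PGOPTIONS="-P"
psql broken_db

# equivalent spelling via the underlying parameter
PGOPTIONS="-c ignore_system_indexes=on" psql broken_db
```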
David Lapetina (219 rep)
Jul 22, 2022, 01:07 PM • Last activity: Apr 13, 2025, 08:03 AM
0 votes
1 answer
102 views
psql meta-command \d is really slow
My basic question is: what affects the performance of \d?

I have a separate (small) schema that I manage on a database server that I have otherwise no higher privileges on. The server holds a huge (billion+ rows) database in its public schema. My separate schema provides some auxiliary information used only for webpage rendering of the data in the public schema.

*Edit*: It's actually 5.5 billion+ rows, and that's just in one of the biggest tables (and it took 1.5 hours just to finish the count(*)!).

When I do a simple \d at the psql prompt, it is really slow - on the order of 6.5 seconds when I time it. Clearly, the database server has heavy load, but should I bring this \d performance to the attention of the sysadmins (who generally don't want to be bothered)? (Does it imply something markedly wrong in the system?) Everything else in my schema runs on the order of milliseconds, so it's not impacting _my_ schema's performance *per se*.

Regardless of whether I bring it to their attention, what impacts the performance of \d? The public schema has 985 tables. Even so, it seems like it would be a fairly simple query; I would expect it's only looking at table names, not table contents.
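One way to see exactly what \d pays for: with ECHO_HIDDEN on, psql prints the catalog queries it generates for backslash commands, and those can then be timed or run through EXPLAIN ANALYZE - they touch pg_class, pg_namespace, and friends, never your tables' contents:

```
\set ECHO_HIDDEN on
\timing on
\d
```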
Randall (385 rep)
Sep 21, 2023, 06:59 PM • Last activity: Apr 1, 2025, 07:07 PM
0 votes
1 answer
396 views
Postgres Upgrading Errors due to table not existing
I am trying to do an upgrade from Postgres 9.6 to 12, I have done this in the past but in this instance I am having difficulty. When running the pg_upgrade command followed by the parameters:
/usr/pgsql-12/bin/pg_upgrade --old-bindir /usr/pgsql-9.6/bin --new-bindir /usr/pgsql-12/bin --old-datadir /var/lib/pgsql/9.6/data --new-datadir /var/lib/pgsql/12/data --link
When doing a check beforehand (with the --check flag), nothing comes up as an issue, but when I remove the check flag and run, I get the following:
command: "/usr/pgsql-12/bin/pg_dump" --host /var/lib/pgsql --port 50432 --username postgres --schema-only --quote-all-identifiers --binary-upgrade --format=custom  --file="pg_upgrade_dump_2554867.custom" 'dbname=titan' >> "pg_upgrade_dump_2554867.log" 2>&1
pg_dump: error: query failed: ERROR:  relation "staging.stg_fct_game_stats" does not exist
pg_dump: error: query was: LOCK TABLE "staging"."stg_fct_game_stats" IN ACCESS SHARE MODE
What is odd about this is that when I look at the tables in this schema after setting my search_path and run \dt, I cannot see this table in that schema. I then run:
SELECT n.nspname AS schemaname, c.relname, c.relkind
FROM   pg_class c
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  relname like '%stg_fct_game_stats';
This returns a result:
schemaname	relname	relkind
staging	stg_fct_game_stats	r
If I remove the LIKE and use = 'stg_fct_game_stats' instead, no row appears. I suspect there is some corruption, but I cannot even drop the table, since it cannot be seen. Any ideas? I am happy to remove the table, but dropping it errors too:
DROP TABLE staging.stg_fct_game_stats;
ERROR:  table "stg_fct_game_stats" does not exist
Any help is much appreciated.
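The LIKE-matches-but-equality-doesn't symptom often points to invisible or non-ASCII characters embedded in the stored relation name. A hedged diagnostic sketch: if the byte count exceeds the character count, or the character count isn't 18 (the length of stg_fct_game_stats), the name contains extra characters:

```sql
-- compare character vs byte length to expose hidden characters
SELECT n.nspname, c.relname,
       length(c.relname::text)       AS chars,
       octet_length(c.relname::text) AS bytes
FROM   pg_class c
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  c.relname LIKE '%stg_fct_game_stats%';
```

quote_ident(c.relname) (or format('DROP TABLE %I.%I', ...)) can then build a DROP TABLE statement that targets the exact stored name.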
rdbmsNoob (459 rep)
Nov 22, 2021, 12:48 PM • Last activity: Mar 20, 2025, 04:04 PM
2 votes
1 answer
288 views
Finding execution time of a psql query (without any connection latency/overhead)
How do I find the execution time of a query? I tried these methods:

Using pgbench:

pgbench -n -t 1 -f ./query.sql

got:

latency average = 9.787 ms

From pg_stat_statements, I got:

  total_exec_time   
--------------------
 12.242579000000001
(1 row)

---

Using EXPLAIN ANALYZE:

                                           QUERY PLAN                                           
------------------------------------------------------------------------------------------------
   (cost=0.00..0.01 rows=0 width=0) (actual time=0.182..0.182 rows=0 loops=1)
   ->  Result  (cost=0.00..0.01 rows=1 width=230) (actual time=0.034..0.034 rows=1 loops=1)
 Planning Time: 0.021 ms
 Execution Time: 0.195 ms
(4 rows)

---

Now, which of those gives me the actual execution time of the query? Or rather, how can I find out the actual execution time?
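For the client-side view, psql's \timing is the usual tool: it reports wall-clock time per statement as seen by psql, so it includes the network round-trip, whereas EXPLAIN ANALYZE's Execution Time is measured purely on the server:

```
\timing on
SELECT ...;   -- psql then prints a line like: Time: 9.452 ms
```

The gap between the two numbers is roughly parse/plan time plus client and network overhead, which is also why pgbench's latency average runs higher than the server-side figure.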
Prasanna (21 rep)
Mar 13, 2025, 06:29 AM • Last activity: Mar 13, 2025, 11:15 AM
1 vote
1 answer
821 views
Create user mapping without password: how to configure authentication
I am trying to create a user mapping in PostgreSQL without a password, but I am encountering an error that says:

local_db=> select * from employee;
ERROR: could not connect to server "testmachine02"
DETAIL: connection to server at "192.168.56.10", port 5432 failed: fe_sendauth: no password supplied

Here is the command that I used to create the user mapping:

CREATE USER MAPPING for app_user SERVER testmachine02 OPTIONS (password_required 'false');

I also created a pgpass file under /root/.pgpass with the following entries:

localhost:5432:local_db:app_user:app_user123
192.168.56.10:5432:admin:admin:admin123
192.168.56.10:5432:remote_db:test:test123

Despite these steps, I am still unable to access the table without a password. How can I create a user mapping without a password and access the table?
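A hedged note on what usually bites here: postgres_fdw only lets superusers connect to a foreign server without password authentication, so for an ordinary local user the remote credentials normally have to live in the mapping itself rather than in a .pgpass file (the role and password below are taken from the question's pgpass entries and are assumptions):

```sql
-- store the remote role and its password directly in the mapping
ALTER USER MAPPING FOR app_user SERVER testmachine02
    OPTIONS (DROP password_required, ADD user 'test', ADD password 'test123');
```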
Aymen Rahal (11 rep)
Feb 18, 2023, 08:26 PM • Last activity: Mar 11, 2025, 01:02 PM
15 votes
7 answers
22397 views
How to conditionally stop a psql script (based on a variable value)?
Let's consider the following example (from the start of a psql script):

\c :db_to_run_on
TRUNCATE the_most_important_table; -- tried to avoid similarities to anything that exists out there

Now if it is run by the command

psql [connection details] -v db_to_run_on=\'dev_database\'

then it just runs and the user is happy. But what if (s)he decides to specify -v db_to_run_on=production_database? (Let's assume that this can happen, just like people run rm -rf / # don't try this at home!!! occasionally.) Hopefully there is a fresh backup of that table...

So the question arises: how to check the variables passed to a script and stop further processing based on their value?
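Since PostgreSQL 10, psql can branch natively with \if; a sketch that computes the check server-side, pulls it into a psql variable with \gset, and bails out with \q before anything dangerous runs:

```
-- put this at the top of the script, before the \c and the TRUNCATE
SELECT :'db_to_run_on' = 'production_database' AS is_prod \gset
\if :is_prod
    \echo 'Refusing to run against production_database'
    \q
\endif
```

On older versions the classic workaround was to deliberately force an error (e.g. an invalid cast of a sentinel value) with ON_ERROR_STOP enabled.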
András Váczi (31798 rep)
Sep 19, 2012, 08:32 AM • Last activity: Mar 4, 2025, 10:57 PM
0 votes
1 answer
218 views
String search is very slow with OR statement
I have the following tables: product, product_stock, and product_offer. I need to optimize the search when looking for a product that is available in product_stock or that is currently on offer.

First, when I add one subplan, things look ok:

explain select * from product p where p.product_number like '%T%'
and p.id in (select c.product_id from product_stock c where c.quantity > 0)

This gives me the following feedback:

Gather (cost=1000.42..36876.20 rows=22531 width=53)

Which is very good for a table of over 1m records. However, when I add an OR statement as follows:

explain select * from product p where (p.product_number like '%T%')
and (p.id in (select c.product_id from product_stock c where c.quantity > 0)
  or p.id in (select product_id from product_offer where now() between offer_start and offer_end))

This results in a very slow query:

Gather (cost=313039.34..166666168.02 rows=31023 width=53)
  Workers Planned: 2
  ->  Parallel Bitmap Heap Scan on product b (cost=312039.34..166662065.72 rows=12926 width=53)
        Recheck Cond: ((product_number)::text ~~ '%T%'::text)
        Filter: ((SubPlan 1) OR (hashed SubPlan 2))
        ->  Bitmap Index Scan on product_trgm_gin (cost=0.00..310574.23 rows=41364 width=0)
              Index Cond: ((product_number)::text ~~ '%T%'::text)
        SubPlan 1
          ->  Materialize (cost=0.00..17907.26 rows=558188 width=8)
                ->  Seq Scan on product_stock c (cost=0.00..12935.32 rows=558188 width=8)
                      Filter: (quantity > 0)
        SubPlan 2
          ->  Seq Scan on product_offer (cost=0.00..1308.68 rows=59468 width=4)
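One rewrite worth trying (a sketch; it returns the same rows provided id is unique): split the OR into a UNION, so the planner can treat each branch as its own hashed semi-join instead of falling back to the per-row SubPlan 1 visible above:

```sql
explain
select * from product p
where p.product_number like '%T%'
  and p.id in (select c.product_id from product_stock c where c.quantity > 0)
union
select * from product p
where p.product_number like '%T%'
  and p.id in (select o.product_id from product_offer o
               where now() between o.offer_start and o.offer_end);
```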
fareed (127 rep)
Dec 15, 2020, 02:17 AM • Last activity: Feb 28, 2025, 08:58 AM
9 votes
8 answers
5609 views
Reverse Byte-Order of a postgres bytea field
I'm currently working on a table that contains hashes, stored in bytea format. Converting the hashes to hex strings, however, yields the wrong order of bytes. Example:

SELECT encode(hash, 'hex') FROM mytable LIMIT 1;

Output:

1a6ee4de86143e81

Expected:

813e1486dee46e1a

Is there a way to reverse the order of bytes for all entries?
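One way to do the reversal in SQL (a sketch; mytable's key column is assumed to be id): pull the bytes out with get_byte() and reassemble the hex string in reverse order:

```sql
-- render each hash with its byte order reversed
SELECT t.id,
       string_agg(lpad(to_hex(get_byte(t.hash, i)), 2, '0'), '' ORDER BY i DESC) AS reversed_hex
FROM   mytable t,
       generate_series(0, length(t.hash) - 1) AS i
GROUP  BY t.id;
```

For the sample value this turns 1a6ee4de86143e81 into the expected 813e1486dee46e1a.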
R. Martin (123 rep)
Nov 29, 2016, 07:37 PM • Last activity: Feb 14, 2025, 05:04 AM
1 vote
1 answer
6611 views
PostgreSQL: managing the postmaster PID
I am running PostgreSQL on my localhost on macOS. Every once in a while (~ every 5 times I reboot the computer), I face the following error when I try to connect to my local database instance:

could not connect to server: Connection refused
        Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5432?
could not connect to server: Connection refused
        Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432?

When this happens, I have gotten used to the practice of opening the terminal and running

rm -f /usr/local/var/postgres/postmaster.pid

which allows me to connect to the database. According to the documentation, having an old .pid file in the data directory "confuses" Postgres, and so it must be removed. My question is: why must this be done intermittently, and can one automate the removal of old .pid files? Please note that brew services restart postgres does not resolve the issue - only the removal of the old .pid file works. Any advice here would be greatly appreciated!
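Automating the removal safely is doable (a hedged shell sketch using the path from the question): the first line of postmaster.pid holds the postmaster's PID, so the file can be treated as stale - and deleted - only when no such process exists:

```bash
#!/bin/sh
# remove the pid file only if the recorded postmaster is no longer running
PIDFILE=/usr/local/var/postgres/postmaster.pid
if [ -f "$PIDFILE" ] && ! kill -0 "$(head -n 1 "$PIDFILE")" 2>/dev/null; then
    rm -f "$PIDFILE"
fi
```

The likely root cause, though, is an unclean shutdown at reboot, which is worth fixing rather than papering over.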
iskandarblue (219 rep)
Apr 5, 2021, 02:27 AM • Last activity: Jan 23, 2025, 07:02 PM
219 votes
6 answers
329733 views
How to get the name of the current database from within PostgreSQL?
Using \c in PostgreSQL will connect to the named database. How can the name of the current database be determined? Entering:

my_db> current_database();

produces:

ERROR: syntax error at or near "current_database"
LINE 1: current_database();
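current_database() is an SQL function, so it has to be invoked from within a statement rather than typed on its own:

```sql
SELECT current_database();
```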
Amelio Vazquez-Reina (2315 rep)
Feb 5, 2014, 09:16 PM • Last activity: Jan 15, 2025, 09:28 AM
0 votes
1 answer
158 views
Postgres remove FKey without lock
Edited: I need to **remove foreign key constraints** from some PostgreSQL tables. However, these tables are **heavily used** and contain a significant **volume of data**. Some are **partitioned**, while others are not.

When I try to alter the table to drop the foreign key, it **causes a lock** that interrupts ongoing operations - or the ALTER itself gets blocked by some write operation - unless we schedule downtime.

Is there a way to remove foreign key constraints from a table in PostgreSQL without causing locks or impacting ongoing operations? Any suggestions or best practices would be appreciated.

PS: I do think that operating without locks could cause issues with data consistency, integrity, and concurrency control.
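A common mitigation (a hedged sketch; table and constraint names are placeholders): DROP CONSTRAINT does need an ACCESS EXCLUSIVE lock, but holds it only momentarily once granted - the damage usually comes from queueing behind long-running writers while blocking everyone else. A short lock_timeout turns that indefinite stall into a fast failure you can retry off-peak:

```sql
-- fail fast instead of queueing behind long transactions;
-- rerun (e.g. in a retry loop) until the lock is won
SET lock_timeout = '2s';
ALTER TABLE my_table DROP CONSTRAINT my_table_other_id_fkey;
```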
Preet Govind (3 rep)
Jan 6, 2025, 10:25 AM • Last activity: Jan 8, 2025, 09:44 AM
1 vote
1 answer
132 views
Why is listen_addresses not changing in PostgreSQL?
I'm trying to ENABLE remote client access on my PostgreSQL 12 database (Ubuntu 20.04 LTS). I've changed postgresql.conf on both paths ***/var/lib/postgresql/12/main*** and ***/etc/postgresql/12/main*** this way:

listen_addresses = '*'

Of course I also added my IP in pg_hba.conf. I've also tried:

root@localhost:/# sudo -u postgres psql
psql (12.20 (Ubuntu 12.20-0ubuntu0.20.04.1))
Type "help" for help.
postgres=# ALTER SYSTEM SET LISTEN_ADDRESSES='*';
ALTER SYSTEM

And after many restarts (of PostgreSQL and even the whole VNC) I'm still getting this:

root@localhost:/# sudo -u postgres psql -c 'SHOW listen_addresses;'
 listen_addresses
------------------
 localhost
(1 row)

And **pgAdmin 4** on my PC also tells me:

> Unable to connect to server: connection timeout expired

I even tried to reinstall PostgreSQL - no effect 😭 Why so? What is wrong? Colleagues, please give me advice on how to solve this problem?

**UPD:** Also tried with inactive firewall - no effect...
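When a setting refuses to change, pg_settings can show which file and line won - useful here because two postgresql.conf paths are in play and ALTER SYSTEM writes a third file, postgresql.auto.conf:

```sql
-- sourcefile/sourceline require superuser (or pg_read_all_settings)
SELECT name, setting, source, sourcefile, sourceline, pending_restart
FROM   pg_settings
WHERE  name = 'listen_addresses';
```

pending_restart = true would confirm the new value is merely waiting for a genuine server restart.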
Aeverandi (11 rep)
Oct 3, 2024, 02:13 PM • Last activity: Dec 2, 2024, 01:48 PM
2 votes
3 answers
313 views
Deleting multiples that aren't exact in PostgreSQL
I think the issue came when inserting my data, some of the data looks like this:

| id | year |
| -- | ---- |
| 1 | 2000 |
| 1 | 2001 |
| 1 | 2002 |
| 2 | 2000 |
| 2 | 2001 |
| 2 | 2002 |
| 3 | 2000 |
| 3 | 2001 |
| 3 | 2002 |

Where the intention was for it to look like this:

| id | year |
| -- | ---- |
| 1 | 2000 |
| 2 | 2001 |
| 3 | 2002 |

Is there a way to fix this without re-doing the table? Or should I delete it and start over?
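If the goal is just one row per id, a hedged sketch (the table name mytable is assumed; note this keeps an arbitrary survivor per id, so by itself it cannot restore the intended 1→2000, 2→2001, 3→2002 pairing):

```sql
-- delete every row that has a physically later duplicate with the same id
DELETE FROM mytable a
USING  mytable b
WHERE  a.id = b.id
AND    a.ctid < b.ctid;
```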
Kalena Archer (21 rep)
Nov 19, 2024, 05:30 AM • Last activity: Nov 19, 2024, 02:37 PM
0 votes
1 answer
259 views
Get the size of all PostgreSQL DBs
I would like to know the sizes of all the PostgreSQL databases running on a particular system. For example, on one system I have installed PostgreSQL and created multiple databases (mydb_1, mydb_2, mydb_3...mydb_N) in it.

SELECT pg_database_size('db_name');

The above only gives me the size of the specific DB passed to the SQL as a parameter. Is it possible to get the sizes of all the databases in the system using a single SQL statement?
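Yes: pg_database enumerates every database in the cluster, so the per-database function can be applied to each row in a single statement:

```sql
SELECT datname,
       pg_size_pretty(pg_database_size(datname)) AS size
FROM   pg_database
ORDER  BY pg_database_size(datname) DESC;
```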
subhasis (1 rep)
Nov 15, 2024, 06:18 PM • Last activity: Nov 15, 2024, 08:07 PM
1 vote
1 answer
122 views
Where is the language of psql system messages defined?
I am installing PostgreSQL 16 on a Japanese version of Windows. I do not use any WSL. In C:\Program Files\PostgreSQL\16\data\postgresql.conf, all locale settings such as lc_messages are specified as C; the only Japan-specific setting is the timezone:
timezone = 'Asia/Tokyo'
lc_messages = C # locale for system error message
lc_monetary = C # locale for monetary formatting
lc_numeric = C # locale for number formatting
lc_time = C # locale for time formatting
No special environment variables are specified for the terminal; that is, LC_MESSAGES is not set. If I connect to the postgresql server on localhost with psql in this state, the system messages appear in Japanese (screenshot: the highlighted prompts and errors are in Japanese).

In the same environment, if the environment variable LC_MESSAGES=C is specified in the terminal, the system messages are in English - note that they are in English even for the messages printed before the correct password is entered (screenshot).

Where is this behavior described in the postgresql help? Is there any way to make the system messages in psql English other than specifying LC_MESSAGES=C in the terminal?
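My reading of this, hedged: lc_messages in postgresql.conf governs only messages generated by the server, while the password prompt and connection-stage errors are printed by psql itself and localized client-side from the process environment - which is why LC_MESSAGES=C changes them even before authentication succeeds. A quick way to confirm the server side is unaffected:

```
-- runs on the server, so it reflects postgresql.conf, not the terminal
SHOW lc_messages;
```

Other than the environment variable (or a psql build without NLS), I am not aware of a psql-side switch for this.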
Fushihara (111 rep)
Nov 4, 2024, 11:03 AM • Last activity: Nov 6, 2024, 09:38 AM
Showing page 1 of 20 total questions