Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0 votes • 1 answer • 146 views
Refreshing development environment from backup that contains FDW in Postgres
Background:
We recently added a foreign data wrapper from one of our Postgres (v14) databases to another Postgres (v14) database in a different system. It works fine in our production environment, but now when I try to refresh data in our development environment from a production backup, it fails because our firewall rules don't allow our dev environments to "talk" to prod environments.
I am using pg_dump for a nightly backup and pg_restore to refresh the development environment.
Question:
Periodically refreshing a development environment from a production backup seems like it would be a pretty common task. How can I configure or script this so it doesn't fail? I need to change both the hostname and password so that my refreshed development database points to and connects to another development database. My only real idea is to use sed on the backup file before doing the restore to change the host (from prod to dev) and to change the password. But this seems very fragile to me. My only other idea is to use a superuser and not require a password, but I know the security guys are not going to let me do this.
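If the restore itself completes (postgres_fdw only opens a connection when a foreign table is first used), an alternative to editing the dump is to restore it unmodified and then repoint the foreign server and user mapping in the refresh script. A minimal sketch, assuming hypothetical object names (prod_server, app_user) and dev connection values:
-- Run after pg_restore; SET rewrites an option that already exists.
ALTER SERVER prod_server
    OPTIONS (SET host 'dev-db.internal');
ALTER USER MAPPING FOR app_user SERVER prod_server
    OPTIONS (SET password 'dev-password');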
David S
(284 rep)
Feb 2, 2023, 04:28 PM
• Last activity: Jul 30, 2025, 07:05 AM
0 votes • 1 answer • 158 views
postgres_fdw: where are resources utilized, on the remote or the local server?
I have an OLTP database and ETL jobs are running in the same database in the background.
I was thinking of separating the OLTP and ETL instances so that resource utilization would be distributed. Basically, the OLTP instance should have fewer or no ETL overhead.
The idea is to create foreign tables on the ETL instance connecting to OLTP remote server using *postgres_fdw*.
I understand that Postgres will fetch chunks of data from the remote server using a cursor.
Can someone please confirm whether my understanding is right that running a complex query involving foreign tables would use resources (RAM, CPU) from the local server? And is the remote server safe from the overhead of these executions?
And if I am wrong, which instance's resources would Postgres use to run a complex SQL query with joins on foreign tables?
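A way to check this empirically, not a definitive answer: EXPLAIN VERBOSE prints the exact "Remote SQL" that postgres_fdw ships to the OLTP server, and any plan nodes above the Foreign Scan (joins, sorts, aggregates that couldn't be pushed down) run locally on the ETL instance. A sketch with a hypothetical foreign table orders_ft:
-- The "Remote SQL:" line shows what runs on the OLTP server;
-- everything above the Foreign Scan node runs on the ETL instance.
EXPLAIN (VERBOSE, ANALYZE)
SELECT customer_id, sum(amount)
FROM orders_ft
WHERE order_date >= date '2022-01-01'
GROUP BY customer_id;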
Thanks in advance!
Sajith P Shetty
(312 rep)
Dec 13, 2022, 02:00 PM
• Last activity: Jul 14, 2025, 06:02 AM
0 votes • 2 answers • 208 views
Using IMPORT FOREIGN SCHEMA with Greenplum database
I have set up a PostgreSQL 12 database and am trying to connect to a Greenplum database in order to create proxy tables. I am able to connect to the Greenplum db, but I get an error when I try to use the IMPORT FOREIGN SCHEMA command.
IMPORT FOREIGN SCHEMA remote_schema FROM SERVER "remote_server" INTO schema_test_1;
returns:
ERROR: Greenplum Database does not support REPEATABLE READ transactions. (variable.c:570)
CONTEXT: remote SQL command: START TRANSACTION ISOLATION LEVEL REPEATABLE READ
SQL state: XX000
I read that REPEATABLE READ is not supported in Greenplum and to use SERIALIZABLE instead. Is there a way to edit the IMPORT FOREIGN SCHEMA command so that I can replace REPEATABLE READ with SERIALIZABLE?
I am using pgAdmin 4.
Update:
I found that I can get commands to work if I write them as complete transactions and include the following before any commands:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Is there a way to set this as the default value for all transactions going through the Foreign Server?
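Per postgres_fdw's documented rule that the remote transaction is SERIALIZABLE whenever the local one is, the workaround above can be generalized; a sketch (the database name in the last line is hypothetical):
-- Wrap the import in an explicitly serializable local transaction;
-- postgres_fdw then starts the remote transaction as SERIALIZABLE.
BEGIN ISOLATION LEVEL SERIALIZABLE;
IMPORT FOREIGN SCHEMA remote_schema FROM SERVER "remote_server" INTO schema_test_1;
COMMIT;

-- Or make serializable the default for every local transaction:
SET default_transaction_isolation = 'serializable';
ALTER DATABASE mydb SET default_transaction_isolation = 'serializable';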
Jason
(1 rep)
Nov 11, 2019, 03:29 PM
• Last activity: Jun 25, 2025, 04:04 PM
0 votes • 1 answer • 2774 views
PostgreSQL Foreign Data Wrappers - Simultaneous queries won't finish
We're using foreign data wrappers in a database which points to another server (a read-only replica). We run scheduled jobs using Python (more on this here: https://github.com/sqlalchemy/sqlalchemy/discussions/8348) and lately we're facing an issue with a specific query (a select statement with a CTE) - this query runs every hour on 10+ workers (Python processes), each with its own conditions.
When I run this same query on the original server it takes ~6s; using fdw it's around 2-3 minutes. Since we reached 10+ workers these queries are stuck in an "active" state, I can see them in the session manager, and after 20 minutes or so I get the following error:
SSL SYSCALL error: EOF detected
(The max connections option is set to 200.)
After a few of the workers fail with this error, the last ones fail with the following:
ERROR:app.services.cikk.main:(psycopg2.errors.ConnectionFailure) SSL SYSCALL error: EOF detected
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: remote SQL command: START TRANSACTION ISOLATION LEVEL REPEATABLE READ
The postgres_fdw doc says:
> The remote transaction uses SERIALIZABLE isolation level when the local transaction has SERIALIZABLE isolation level; otherwise it uses REPEATABLE READ isolation level.
> [...] That behavior would be expected anyway if the local transaction uses SERIALIZABLE or REPEATABLE READ isolation level, but it might be surprising for a READ COMMITTED local transaction.
This means that the server keeps read and write locks until the end of the transaction, and the read locks are released as soon as the select operation is performed - but it never finishes.
Maybe there's a deadlock (since 10+ queries try to use the same tables on the remote server)? If so, how can I overcome this issue? Does this mean I can only make queries "synchronously" using fdw to make this work?
postgres version:
- PostgreSQL 12.10
- (Debian 12.10-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
server keepalive settings:
- tcp_keepalives_idle: 7200
- tcp_keepalives_interval: 75
- tcp_keepalives_count: 9
Thanks for the help in advance!
### UPDATE:
I think I figured it out.
- I had multiple ~3min queries running simultaneously (these queries used the same tables from a foreign server) and they wouldn't finish
- I started these manually to monitor what's going on using pg_stat_activity as @jjanes suggested
- What I saw is that all of the queries were in an active state, the wait_event_type was IO and the wait_event was BufFileWrite
- Read into those a little bit to find out what's going on:
- wait_event_type - IO: "The type of event for which the backend is waiting." - which is pretty self-explanatory - and if the value is IO it means that some IO operation is in progress
- Since the wait_event was BufFileWrite I looked into what it means exactly: buffered files in PostgreSQL are primarily temporary files. Temporary files are used for sorts and hashes. BufFileWrite is seen when these in-memory buffers are filled and written to disk.
- So what could cause this? One site (link down below) says: "Large waits on BufFileWrite can indicate that your work_mem setting is too small to capture most of your sorts and hashes in memory and are being dumped to disk." and "Ensure your sorts largely occur in memory rather than writing temp files by using explain analyze on your query ..."
- I checked our work_mem value with show work_mem; which was 20971kB - I thought it should be enough so I looked further
- The clue here for me was the explain analyze part. I created the foreign server with use_remote_estimate: true, which means "When use_remote_estimate is true, postgres_fdw obtains row count and cost estimates from the remote server"
- The solution was to set this property (use_remote_estimate) to false and now it seems to be working the way it should.
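For reference, the option can be flipped on an existing foreign server without recreating it; a one-line sketch with a hypothetical server name:
-- my_replica_server stands in for the actual foreign server name
ALTER SERVER my_replica_server OPTIONS (SET use_remote_estimate 'false');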
Useful links:
https://www.postgresql.org/docs/current/monitoring-stats.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/apg-waits.iobuffile.html
https://docs.dbmarlin.com/docs/kb/wait-events/postgresql/buffilewrite/
https://www.postgresql.org/docs/current/runtime-config-resource.html
https://www.postgresql.org/docs/current/postgres-fdw.html
K. Anye
(13 rep)
Aug 4, 2022, 02:11 PM
• Last activity: Apr 19, 2025, 09:05 PM
0 votes • 1 answer • 812 views
PostgreSQL: Refresh a materialized view based on a foreign table
I've got a foreign table over a distant CSV hosted on an external webserver.
I need to enhance this table, so I created a materialized view to add some extra fields and join with internal data. A plain view doesn't work; I get an error (probably due to performance issues of the distant HTTP server).
I can't use a bash cron script on the Linux host, as the IT department doesn't want that, so I need to stay within PostgreSQL's own capabilities.
I've tried [this method](https://stackoverflow.com/questions/29437650/how-can-i-ensure-that-a-materialized-view-is-always-up-to-date) but it seems I can't add a trigger on a foreign table.
Is there any way to refresh the materialized view? (Ideally each time the distant CSV is updated, but it could also be based on a frequency, for example every 24h.)
---
EDIT, to be more concise: how to refresh a materialized view based on a foreign table?
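Staying inside PostgreSQL, one option, assuming the DBA will allow installing it, is the pg_cron extension, which runs SQL statements on a schedule from within the database. A sketch with a hypothetical view name:
CREATE EXTENSION pg_cron;

-- Refresh the (hypothetical) materialized view every day at 03:00
SELECT cron.schedule('refresh-csv-view', '0 3 * * *',
                     'REFRESH MATERIALIZED VIEW my_csv_view');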
Bad Wolf
(11 rep)
Dec 7, 2021, 01:41 PM
• Last activity: Feb 21, 2025, 02:08 AM
1 vote • 2 answers • 3045 views
How to re-create or update schema on Postgresql Foreign Data Wrappers?
I have two PostgreSQL database servers, each running on a separate physical server. One is running version 13.3 (say server A) and the other 9.6 (say server B).
I have installed PostgreSQL's foreign data wrapper on server A connected to server B, and imported a schema from server B (say b_adempiere).
As I work on server B (adding/deleting fields inside views), today I noticed that the views inside the imported schema on server A don't get updated to match the schema on server B. I have to delete the imported schema on server A and re-import the schema from server B.
DROP SCHEMA b_adempiere CASCADE;
CREATE SCHEMA b_adempiere;
IMPORT FOREIGN SCHEMA adempiere FROM SERVER b_server INTO b_adempiere;
Question: how can I automatically re-create/update the imported schema on server A each time we make a structural database update on server B?
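Nothing built in propagates DDL from server B to server A, so one pragmatic option is to wrap the three statements above in a procedure and run it on a schedule, or manually after each change on server B. A sketch (server A is 13.3, so CREATE PROCEDURE is available):
CREATE OR REPLACE PROCEDURE resync_b_adempiere()
LANGUAGE plpgsql
AS $$
BEGIN
    -- Recreate the imported schema from scratch, as in the manual steps
    DROP SCHEMA IF EXISTS b_adempiere CASCADE;
    CREATE SCHEMA b_adempiere;
    IMPORT FOREIGN SCHEMA adempiere FROM SERVER b_server INTO b_adempiere;
END;
$$;

CALL resync_b_adempiere();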
Thank you.
Sapta
(11 rep)
Jun 26, 2022, 07:42 AM
• Last activity: Sep 11, 2024, 04:51 PM
0 votes • 1 answer • 39 views
Does a foreign table double server storage?
I'm building a SaaS in which one database will have a large table (something like 50 million rows), and this table will serve information for all the customers of the system.
But this SaaS will be multi-tenant / multi-database software, where, when a client registers, we will provision their own database AND create a foreign table for that large table outside of their database...
Question 1:
Is this the best approach? Could creating a foreign table for each customer result in large storage usage on the server?
Question 2:
The original table has around 10 other indexes (most of them btree); with the foreign table for the customers, will those indexes still work properly?
David Junior
(3 rep)
Jun 20, 2024, 06:43 PM
• Last activity: Jun 21, 2024, 06:28 AM
0 votes • 0 answers • 85 views
Configuring Informix to access data on another server
I have inherited an Informix installation that contains a large amount of business logic in the form of procedures. The company also uses SQL Server, MySQL and Postgres for various roles. We're very interested in moving as far away from Informix as possible. Preferably to SQL server or Postgres. Migrating the data isn't something I'm worried about, but the business logic that is contained in thousands of very complex, Informix-dialect procedures is something that would be prohibitively expensive to migrate.
The idea I have, which may be somewhat unhinged, is to use Informix as an "application server" of sorts -- the business logic can't move, but perhaps the data can. So that would involve not rewriting any of these stored procedures, and leaving them running on Informix, but moving the data so that the tables that they reference can be accessed transparently, even though they reside on a different RDBMS.
MySQL has pluggable storage engines, SQL Server [has linked servers (supporting ODBC)](https://learn.microsoft.com/en-us/sql/relational-databases/linked-servers/linked-servers-database-engine?view=sql-server-ver16) , and there is a PostgreSQL [Foreign Data Wrapper](https://github.com/credativ/informix_fdw) that specifically connects to Informix. However, this would only allow those servers to query Informix... what I want to do is have Informix query any of those servers so that the Informix procedures don't have to be changed, but the data can be moved. So those "foreign data methods" have the right idea but the wrong direction.
The [Informix Documentation here](https://www.ibm.com/docs/en/informix-servers/14.10?topic=database-specifying-object-in-cross-server-query) specifies that a cross-server query can be made, and it mentions ANSI compliance, but I'm inclined to believe that they are referring to foreign servers *running Informix in ANSI-compliant mode* rather than any (mostly?) ANSI compliant database.
There does seem to be a mechanism for "user-defined access methods" which seems a bit like a "foreign data wrapper" in PostgreSQL:
> You can create user-defined primary access methods that provide SQL access to non-relational and other data that does not conform to built-in access methods. For example, a user-defined access method might retrieve data from an external location or manipulate specific data within a smart large object.
However I can't find any open-source (or other) user-defined access methods that will allow me to query a different RDBMS from Informix. It would be a huge help if anyone could point me in the right direction, or recommend a tool that will correctly translate Informix stored procedures to any other dialect (but I know that's a long shot!).
The speed of these queries is less relevant than in most applications because it's mostly done in batch processes, not interactively, so I'm not concerned about the performance hit from requiring Informix to query these other servers.
Hut8
(101 rep)
Apr 9, 2024, 02:35 PM
5 votes • 1 answer • 3094 views
Rely on .pgpass in CREATE USER MAPPING
I am trying to create a script which creates a postgres-fdw connection between two Postgres 9.4 databases. The script (which is checked in under version control) has been relying on pgpass to do other things. Is there any option I can use to request that the password be looked up in pgpass? In general, where is the documentation on what options are available for CREATE USER MAPPING? The reference just says that options depend on the server.
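For what it's worth: on 9.4, postgres_fdw insists that mappings for non-superusers carry a password, so .pgpass is not consulted for them. PostgreSQL 13 later added a user-mapping option that relaxes this check; a sketch of that newer escape hatch, with hypothetical names:
-- PostgreSQL 13+; only a superuser may set password_required 'false'.
-- With no password in the mapping, libpq can fall back to other
-- credential sources such as the .pgpass of the server's OS user.
CREATE USER MAPPING FOR app_user SERVER other_db
    OPTIONS (password_required 'false');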
shaunc
(151 rep)
Jan 28, 2015, 03:04 AM
• Last activity: Aug 21, 2023, 03:18 PM
1 vote • 1 answer • 698 views
Set a column value using the (remote) default when inserting a row into a foreign table
I have a database with a table whose primary key is a serial column, or a column with a locally-computed default value that prevents conflicts, for instance:
CREATE TABLE foo (
foo_id serial PRIMARY KEY,
foo_name text
);
I also have a second database, where I use the postgres_fdw foreign data wrapper to access the table in the first database.
I’d like to insert a row in the foreign table in the second database, without specifying a value for the primary key, letting the remote server “choose” a value in a conflict-free way.
Unfortunately, whenever I try to insert data in the foreign table without selecting the primary key column, postgres_fdw tries to insert rows with NULLs for the columns that weren't selected, without using the server-defined default values. Hence it fails, as the primary key is defined NOT NULL.
As far as I can see, there is no way to use a foreign sequence. I've seen that I can define a default value when I create a foreign table but, as I understand it, it is implemented on the local side.
Is there a way I can insert a row in a foreign table and let the foreign server use its default values?
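One workaround, offered as a sketch rather than the official answer: the foreign table's column list may be a subset of the remote table's, and postgres_fdw names only the foreign table's columns in the remote INSERT, so omitting the serial column locally lets the remote default fire.
-- Hypothetical second foreign table that deliberately omits foo_id;
-- the generated remote INSERT lists only foo_name, so the remote
-- serial default supplies the key.
CREATE FOREIGN TABLE foo_writable (
    foo_name text
)
SERVER first_server OPTIONS (schema_name 'public', table_name 'foo');

INSERT INTO foo_writable (foo_name) VALUES ('example');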
user2233709
(223 rep)
Mar 30, 2023, 10:50 PM
• Last activity: Mar 31, 2023, 12:39 PM
1 vote • 0 answers • 114 views
PostgreSQL: data retrieval from a private (authentication-based) RESTful web API using a foreign data wrapper
In some cases, one wants to feed or populate some PostgreSQL database tables using data fetched from the web, e.g. through a RESTful API (serving JSON features).
I've recently discovered foreign data wrappers and I'm not yet 100% used to them, but they seem to be a very interesting option to replace some custom Python scripts that retrieve data from the web, process it and use a Python data wrapper (e.g. psycopg2, or others) to populate some of my PostgreSQL database tables with that data.
After some research, I've successfully used ogr_fdw (see also here) for fetching data from an OGC API Features endpoint, as well as the http extension for retrieving JSON data from a standard public API. Both those APIs are public, so they don't need any type of authentication.
I'm now naturally wondering if it's possible to build a view or populate a PostgreSQL table using a foreign data wrapper which retrieves data from a *private* RESTful API, i.e. which has an authentication procedure?
If yes, what kind of extension should I use and how? I couldn't find a tutorial or any detailed information at the moment.
The authentication schemes I want to support are the web-based ones: HTTP Basic Auth, API keys, or OAuth2.
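At least with the http extension mentioned above this looks feasible: its http() function accepts an http_request value whose headers field can carry an Authorization header, which covers Basic Auth, API keys, and OAuth2 bearer tokens alike. A sketch with a placeholder endpoint and token:
-- http() and http_header() come from the pgsql-http extension;
-- the URL and token are placeholders.
SELECT status, content
FROM http((
        'GET',
        'https://api.example.com/v1/features',
        ARRAY[http_header('Authorization', 'Bearer my-secret-token')],
        NULL,
        NULL
     )::http_request);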
s.k
(424 rep)
Jan 21, 2023, 03:22 PM
5 votes • 2 answers • 2592 views
postgres_fdw slower then copy + scp + copy (~12 times)
- foreign server 9.2
- local server 9.5
- table is 10GB
- data transfer performed on same network interface as foreign server works
- no indexes set on data
- old way:
1. copy to - 2:36
2. scp - 08:17
3. copy from - 10:11
- postgres_fdw:
1. by the time the old way finished it had done 800MB of insert into .. select * from foreign_table
Did I miss something in config (meaning I can improve it), or is postgres_fdw just not meant for bulk load (meaning I can't improve it)?
(I use it for small-volume reconciliation and it works fine. The idea of insert ... select from fdw instead of running bash commands looked so sweet.)
I tried psql to the remote server from the local server and \copy table - six minutes - faster than over ssh.
The fetch_size option, not available prior to 9.6, can be mocked up with dblink_fetch(CURSOR, fetch_size) - see my answer below.
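For readers stuck on pre-9.6 servers, the dblink cursor approach mentioned above looks roughly like this (connection string, table, and column list are hypothetical):
CREATE EXTENSION IF NOT EXISTS dblink;

SELECT dblink_connect('conn', 'host=remotehost dbname=mydb user=me');
SELECT dblink_open('conn', 'cur', 'SELECT id, payload FROM big_table');

-- Pull one batch of 10000 rows; repeat until no rows come back
SELECT * FROM dblink_fetch('conn', 'cur', 10000)
    AS t(id bigint, payload text);

SELECT dblink_close('conn', 'cur');
SELECT dblink_disconnect('conn');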
Vao Tsun
(1263 rep)
Feb 23, 2017, 04:11 PM
• Last activity: May 30, 2022, 02:50 PM
0 votes • 1 answer • 859 views
How can I create a foreign table, using a foreign data wrapper, that references a remote table with the same name?
I have 2 databases, one named logs that has the table:
http_log:
id: serial
method: Varchar(10)
url: varchar(150)
I also have another database named archiving that also has a table named http_log:
http_log:
id: unsigned integer
method: Varchar(10)
url: varchar(150)
How can I create the foreign table archived_http_log so that I can transfer data from http_log to archived_http_log? I cannot have two tables with the same name in my PostgreSQL database, therefore I cannot have 2 tables named http_log.
What I want to achieve is, via a single SQL script, to transfer data from logs.http_log to archiving.http_log. So on one server to run:
INSERT INTO archived_http_log SELECT * FROM http_log;
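The local and remote names don't have to match: postgres_fdw's CREATE FOREIGN TABLE accepts a table_name option naming the remote relation. A sketch, assuming a foreign server for the archiving database (hypothetically named archiving_server) has already been created:
-- Locally called archived_http_log, but pointing at archiving's http_log.
-- Column types are a best guess; Postgres has no unsigned integer type.
CREATE FOREIGN TABLE archived_http_log (
    id integer,
    method varchar(10),
    url varchar(150)
)
SERVER archiving_server
OPTIONS (schema_name 'public', table_name 'http_log');

With that in place, the single-statement transfer above works as written.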
Dimitrios Desyllas
(873 rep)
Jan 19, 2022, 04:09 PM
• Last activity: Jan 20, 2022, 03:10 AM
3 votes • 2 answers • 2650 views
ORDER BY too slow in Foreign Table using postgres_fdw
PostgreSQL v9.6, postgres_fdw
**Foreign table**
CREATE FOREIGN TABLE user_info (
id bigint ,
info jsonb
)
SERVER server_test_fdw OPTIONS(SCHEMA_NAME 'public', TABLE_NAME 'user_info_raw' );
-- user_info_raw is a large table (100 million records, 200 GB)
**Sample data of info column**
{"key1": 1, "key2": 0.678}
{"key1": 1, "key2": 1.0}
{"key1": 1, "key2": 0.986}
{"key1": 2, "key2": 0.75}
{"key1": 2, "key2": 0.639}
**Query on foreign table** (*updated*)
SELECT id, info
FROM user_info
WHERE info ->> 'key1'= '1' -- OR using jsonb_extract_path_text(info, 'key1') = '1'
ORDER BY id
LIMIT 10;
Limit (cost=10750829.63..10750829.65 rows=10 width=40) (actual time=550059.320..550059.326 rows=10 loops=1)
-> Sort (cost=10750829.63..10751772.77 rows=377257 width=40) (actual time=550059.318..550059.321 rows=10 loops=1)
Sort Key: id
Sort Method: top-N heapsort Memory: 26kB
-> Foreign Scan on user_info (cost=100.00..10742677.24 rows=377257 width=40) (actual time=1.413..536718.366 rows=68281020 loops=1)
Filter: ((info ->> 'key1'::text) = '1'::text)
Rows Removed by Filter: 7170443
Planning time: 4.097 ms
Execution time: 550059.597 ms
**Query on user_info_raw**
EXPLAIN ANALYSE
SELECT id, info
FROM user_info_raw
WHERE info ->> 'key1'= '1'
ORDER BY id
LIMIT 10;
Limit (cost=0.57..1296.95 rows=10 width=59) (actual time=0.043..0.073 rows=10 loops=1)
-> Index Scan using idx_user_info_raw_info on user_info_raw (cost=0.57..68882850.88 rows=531346 width=59) (actual time=0.042..0.070 rows=10 loops=1)
Filter: ((info ->> 'key1'::text) = '1'::text)
Planning time: 0.192 ms
Execution time: 0.102 ms
select pg_size_pretty(pg_table_size('user_info_raw'));
pg_size_pretty
----------------
223 GB
It takes **10 milliseconds** to execute the query on the **user_info_raw** (*remote server*).
But it takes **a lot of time** when using the user_info foreign table. When I remove ORDER BY id, the query executes **very fast**.
I think that my query on the *foreign table* *should be sent to the remote server* for execution, but it's not, and I don't know why; maybe it is due to this excerpt from the postgres_fdw documentation:
> By default, only WHERE clauses using **built-in operators and functions**
> will be considered for execution on the remote server. Clauses
> involving **non-built-in functions** are checked locally after rows are
> fetched. If such functions are available on the remote server and can
> be relied on to produce the same results as they do locally,
> performance can be improved by sending such WHERE clauses for remote
> execution
**How can I fix the issue related to ORDER BY on the foreign table?**
----------
**UPDATED**
Adding use_remote_estimate on the server and foreign table doesn't help.
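One diagnostic worth running, a sketch rather than a fix: EXPLAIN VERBOSE prints the exact Remote SQL that postgres_fdw generates, so you can see whether the WHERE clause and the ORDER BY were shipped to the remote server or are being applied locally after fetching all the rows:
EXPLAIN (VERBOSE, ANALYZE)
SELECT id, info
FROM user_info
WHERE info ->> 'key1' = '1'
ORDER BY id
LIMIT 10;
-- Inspect the "Remote SQL:" line: if it lacks the filter or an
-- ORDER BY, that work happens on the local server.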
Luan Huynh
(2010 rep)
Jul 13, 2017, 10:52 AM
• Last activity: Jul 28, 2020, 06:49 PM
0 votes • 1 answer • 1644 views
Listing active PostgreSQL foreign data wrappers
I have a few foreign data wrappers set up from my main PostgreSQL database to other databases. Is there a table I can query to list all of the FDWs? Unfortunately select * from information_schema.schemata doesn't give any hints about which schemas are foreign and which are local.
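Foreign-data metadata lives in dedicated catalogs rather than information_schema.schemata; a sketch that lists each foreign server together with its wrapper and connection options:
SELECT w.fdwname    AS wrapper,
       s.srvname    AS server,
       s.srvoptions AS server_options
FROM pg_foreign_server s
JOIN pg_foreign_data_wrapper w ON w.oid = s.srvfdw;

-- psql shortcuts: \dew (wrappers), \des (servers), \det (foreign tables)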
rcv
(103 rep)
Jul 28, 2020, 04:39 PM
• Last activity: Jul 28, 2020, 05:18 PM
0 votes • 1 answer • 915 views
Postgres special character rendering
I have an old SQL Server db with a ‘free entry’ field that is being accessed via a Postgres 10.6 foreign table. This field contains a lot of garbage characters (higher than ASCII 128) which Postgres has a difficult time rendering, resulting in ‘empty looking’ records.
I say empty *looking* as I am able to query those missing records out of the foreign table using the regexp_replace function to substitute spaces for those problematic characters.
This leaves me with two questions:
1) The Postgres database encoding is UTF-8, so shouldn’t that be able to render any characters properly?
2) Since I can query it, the Postgres db is obviously processing this data at some level. Why does it display that entire field as blank until the special characters are replaced?
BigClyde
(1 rep)
May 29, 2019, 03:38 PM
• Last activity: Mar 22, 2020, 10:03 AM
-1 votes • 1 answer • 461 views
Postgres 12 Scalability using table Partitioning and Foreign Data Wrappers
I have looked through the archives and I can't find any discussions around the following topic.
I have a fairly in-depth question that I would appreciate some guidance with.
**Current environment**
* Current Postgres Version: 10
* Os: Ubuntu 14:04 (Soon to upgrade to 18.04)
* Harddrive has 2.3 TB max space. (Raid 10 SSD's)
* Current Postgres data size: 1.6 TB (Growing at 100gb per month)
* Currently have 1 master database and 2 replicas. (1 upstream and 1 downstream slave using cascading replication)
* 1 warehouse using logical replication
Based on the above, I'm sure it's rather obvious that I will be running into some serious issues regarding available disk space within a few months.
Just a couple of things to mention before I provide my theoretical long-term solution:
A cloud-based solution is currently not an option due to costs and complexity.
Servers are hosted at an offsite DC, and the max possible disk size we can achieve using SSDs in a RAID 10 configuration is 2.3 TB.
Currently we are handling load to a reasonable standard, although that could change as our business grows.
**My thoughts on a possible solution**
I need a long-term scalable solution and we have been looking into upgrading to Postgres 12. Using the seemingly awesome table partitioning with foreign data wrappers, could we achieve horizontal scaling if we partition key tables by date? If this is possible, then we could have the current year's data on our primary master PostgreSQL server and our yearly partitioned tables on a different server, thereby alleviating our space issues and achieving long-term scalability.
The above sounds feasible to me, but how would this affect my replication? I believe any partitioning changes I make on my master DB would be "replicated" through to the replicas. More importantly, how would this work in relation to the foreign data wrappers?
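For reference, the mechanism this plan relies on exists since PostgreSQL 11: a foreign table can itself be a partition of a declaratively partitioned table. A sketch with hypothetical table and server names:
-- Parent table partitioned by date on the primary server
CREATE TABLE measurements (
    logdate date NOT NULL,
    reading numeric
) PARTITION BY RANGE (logdate);

-- The current year stays local
CREATE TABLE measurements_2019 PARTITION OF measurements
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');

-- Older data lives on a (hypothetical) archive server via postgres_fdw
CREATE FOREIGN TABLE measurements_2018 PARTITION OF measurements
    FOR VALUES FROM ('2018-01-01') TO ('2019-01-01')
    SERVER archive_server OPTIONS (table_name 'measurements_2018');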
**Alternative solutions**
I could move away from using SSDs in order to achieve more space in a RAID 10 configuration. (Long term I would still encounter the same issues eventually, and my application might pay a performance penalty.)
I could use a different RAID configuration to achieve more available space. (Same long-term issues as mentioned above.)
I could look to build a manual archiving process that would copy my "cold" data to a different server and delete the data from the master.
Apologies for the long question.
Francois
(11 rep)
Dec 2, 2019, 04:51 PM
• Last activity: Dec 2, 2019, 07:32 PM
1 vote • 0 answers • 118 views
Querying foreign table with SP-GiST index results in "ERROR: cache lookup failed for type 0"
I'm using PostgreSQL 11.3 (compiled by Visual C++ build 1914, 64-bit) and I want to access a table in another Postgres database using **postgres_fdw** but since the original table has an **SP-GiST** index on a geometry column, the query on the foreign table fails with "ERROR: cache lookup failed for type 0", even if the table is empty.
Does anyone know how to query the foreign table without throwing the error while still keeping the SP-GiST index on the remote table?
Here's a sample SQL to reproduce the error:
1. Create remote database:
CREATE DATABASE remote
WITH
OWNER = postgres
ENCODING = 'UTF8'
LC_COLLATE = 'English_United States.1252'
LC_CTYPE = 'English_United States.1252'
TABLESPACE = pg_default
CONNECTION LIMIT = -1;
2. Create local database:
CREATE DATABASE localdb
WITH
OWNER = postgres
ENCODING = 'UTF8'
LC_COLLATE = 'English_United States.1252'
LC_CTYPE = 'English_United States.1252'
TABLESPACE = pg_default
CONNECTION LIMIT = -1;
3. Setup remote database (connect to 'remote' database before running the following SQL):
CREATE EXTENSION postgis;
CREATE EXTENSION postgres_fdw;
CREATE SCHEMA remote;
CREATE TABLE remote.locations
(
id bigserial NOT NULL UNIQUE,
name character varying,
lonlat geometry NOT NULL,
PRIMARY KEY (id)
)
WITH (
OIDS = FALSE
);
CREATE INDEX remote_locations_lonlat_idx
ON remote.locations USING spgist
(lonlat);
4. Setup local database (connect to 'localdb' database before running the following SQL):
CREATE EXTENSION postgis;
CREATE EXTENSION postgres_fdw;
CREATE SCHEMA localdb;
CREATE SERVER remote FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'localhost', dbname 'remote', port '5432', extensions 'postgis');
CREATE USER MAPPING FOR postgres SERVER remote OPTIONS (user 'postgres', password 'YOUR_POSTGRES_USER_PASSWORD');
IMPORT FOREIGN SCHEMA remote FROM SERVER remote INTO localdb OPTIONS (import_not_null 'true');
5. Querying the foreign table...
SELECT * from localdb.locations;
6. ...results in:
ERROR: cache lookup failed for type 0
CONTEXT: remote SQL command: SELECT id, name, lonlat FROM remote.locations
SQL state: XX000
João Ferreira
(11 rep)
Jun 17, 2019, 01:08 PM
0 votes • 0 answers • 484 views
Permission issue with a view built on top of foreign tables
I am consolidating data from several PostgreSQL databases, all with the same schema, into a new one. To do that I mapped the tables I am interested in into the new DB as foreign tables with postgres_fdw. Each source DB is mapped in its own schema: source1, source2, ..., sourceN. Then, to analyze the data as a whole, I built views on top of the foreign tables. The view definition basically is this:
CREATE OR REPLACE VIEW consolidated_view AS
SELECT 'source1' AS source, * FROM source1.table UNION ALL
SELECT 'source2' AS source, * FROM source2.table UNION ALL
...
SELECT 'sourceN' AS source, * FROM sourceN.table ;
I did that, and it worked like a charm for many tables. But one is resisting me. I keep getting this error:
ERROR: permission denied for relation product_alias
CONTEXT: Remote SQL command: SELECT * FROM public.table
But when I try to do a SELECT * FROM sourceX.table, for each source in 1..N, I get the results I want.
Don't know if I am completely off track here, but what I am not very clear on, and what could explain the issue, is the identity under which the remote tables will be accessed through the view, compared to when I do a SELECT on an individual foreign table. We have two different users involved: one "admin" who owns the view, and one "user" who will execute the query on the view. Both have a USER MAPPING defined. I tried to increase the log verbosity, to have all sorts of messages pop up in the console as the query is executed, adding these settings:
client_min_messages debug5
log_min_error_statement debug5
log_min_messages debug5
But I did not find much useful information that way. Any help, at least to improve the way I am trying to debug this issue, would be much appreciated.
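One hypothesis worth checking: relations referenced through a view are accessed with the view owner's privileges, so it is the owner's user mapping, and the remote role named in it, that must hold SELECT on the remote table. A sketch of what to verify, with a hypothetical remote role name:
-- Local side: whose user mapping applies when the view is queried?
SELECT viewname, viewowner FROM pg_views
WHERE viewname = 'consolidated_view';

-- Remote side (each source database): grant the role named in that
-- owner's user mapping access to the failing table.
GRANT SELECT ON public.product_alias TO mapped_remote_role;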
Cans
(101 rep)
Jan 27, 2019, 06:34 PM
• Last activity: Jan 27, 2019, 07:32 PM
3 votes • 2 answers • 1644 views
FDW Permissions error using MD5 encrypted password
When creating Foreign Data Wrappers in Postgres 9.6, if I use the following:
CREATE SERVER foreign_server
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host 'localhost', dbname 'database', port '5432');
-- permissions
ALTER SERVER foreign_server OWNER TO data_owner;
-- Allow a user to access the foreign server if required
GRANT USAGE ON FOREIGN SERVER foreign_server to data_owner;
-- 4. Create foreign user mapping - using md5 encrypted password
CREATE USER MAPPING FOR data_owner SERVER foreign_server
OPTIONS ( USER 'jeff', PASSWORD 'md5de0366066f8d96ac5bb4872b1d77b0cb!');
-- 5. Create foreign table
IMPORT FOREIGN SCHEMA foo EXCEPT
(bar )
FROM SERVER foreign_server INTO foreign_tables_schema;
Why do I get an error? When I had the password as plain text everything was fine, but when I use the MD5-encrypted password I cannot import the foreign tables.
Any thoughts?
John
(33 rep)
Feb 23, 2018, 01:00 PM
• Last activity: Feb 24, 2018, 05:54 PM
Showing page 1 of 20