Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
2 votes • 0 answers • 1106 views
Postgres: cache lookup failed for type 5308416
When inserting data into a Postgres (timescaledb hyper)table, the following error occurs approximately every 1-5 seconds.
```
ERROR: cache lookup failed for type 5308416
```
I'm attempting to find what this type is in order to debug further. I've tried looking in the following places but all return zero results:
```
select * from pg_class     where oid = 5308416;
select * from pg_collation where oid = 5308416;
select * from pg_ts_config where oid = 5308416;
select * from pg_ts_dict   where oid = 5308416;
select * from pg_namespace where oid = 5308416;
select * from pg_operator  where oid = 5308416;
select * from pg_proc      where oid = 5308416;
select * from pg_type      where oid = 5308416;
```
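(For completeness, the per-catalog probes can be folded into one sweep over every `pg_catalog` table that has an `oid` column; a sketch, assuming Postgres 12+ where `oid` is an ordinary column:)
```
DO $$
DECLARE
    cat regclass;
    n   bigint;
BEGIN
    -- every ordinary pg_catalog table exposing an oid column
    FOR cat IN
        SELECT c.oid::regclass
        FROM pg_class c
        JOIN pg_attribute a ON a.attrelid = c.oid AND a.attname = 'oid'
        WHERE c.relnamespace = 'pg_catalog'::regnamespace
          AND c.relkind = 'r'
    LOOP
        EXECUTE format('SELECT count(*) FROM %s WHERE oid = 5308416', cat) INTO n;
        IF n > 0 THEN
            RAISE NOTICE 'oid 5308416 found in %', cat;
        END IF;
    END LOOP;
END $$;
```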
The query I'm running looks as follows:
INSERT INTO "import" AS "T"
("a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "aa", "ab", "ac", "ad", "ae", "af", "ag", "ah", "ai", "aj", "ak", "al", "am", "an", "ao", "ap", "aq", "ar", "as", "at", "au", "av", "aw", "ax", "ay", "az", "ba", "bb")
VALUES
($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15, $16, $17, $18, $19, $20, $21, $22, $23, $24, $25, $26, $27, $28, $29, $30, $31, $32, $33, $34, $35, $36, $37, $38, $39, $40, $41, $42, $43, $44, $45, $46, $47, $48, $49, $50, $51, $52, $53, $54),
($55, $56, $57, $58, $59, $60, $61, $62, $63, $64, $65, $66, $67, $68, $69, $70, $71, $72, $73, $74, $75, $76, $77, $78, $79, $80, $81, $82, $83, $84, $85, $86, $87, $88, $89, $90, $91, $92, $93, $94, $95, $96, $97, $98, $99, $100, $101, $102, $103, $104, $105, $106, $107, $108),
-- omitted 22 statements
($1297, $1298, $1299, $1300, $1301, $1302, $1303, $1304, $1305, $1306, $1307, $1308, $1309, $1310, $1311, $1312, $1313, $1314, $1315, $1316, $1317, $1318, $1319, $1320, $1321, $1322, $1323, $1324, $1325, $1326, $1327, $1328, $1329, $1330, $1331, $1332, $1333, $1334, $1335, $1336, $1337, $1338, $1339, $1340, $1341, $1342, $1343, $1344, $1345, $1346, $1347, $1348, $1349, $1350)
ON CONFLICT ("a", "b", "c", "d") DO NOTHING
When retried, the statement inserts as expected.
Where should I look to find the referenced type 5308416? This type number has been consistent in the logs over several days.
For clarity, this is running from a .NET 8 host, using Npgsql.
Dan
(71 rep)
Dec 27, 2023, 12:18 PM
• Last activity: Dec 31, 2023, 12:46 PM
0 votes • 0 answers • 111 views
Postgres / Npgsql slow connection due to Hosts file organisation
I'm not really sure if this is the right place to post this but I happened across a peculiar behaviour yesterday.
Historically, when debugging locally, our application's first connection to a given database (`foo.local`) has taken quite a long time (i.e. ~5s).
I discovered yesterday that changing the Hosts file from this:
```
127.0.0.1 localhost
127.0.0.1 foo.local
```
to
```
127.0.0.1 localhost foo.local
```
has completely removed this delay: now our application initialises almost instantly.
The issue appears to have been some failed DNS lookup, but I can't really figure out exactly what was happening. I was hoping someone could shed some light on this.
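(A quick way to separate name resolution from Postgres itself; a minimal sketch using only the .NET base library:)
```csharp
using System;
using System.Diagnostics;
using System.Net;

class DnsProbe
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        // If this lookup alone takes ~5s, the delay is in DNS/mDNS
        // resolution of the name, not in Npgsql or Postgres.
        IPAddress[] addrs = Dns.GetHostAddresses("foo.local");
        sw.Stop();
        Console.WriteLine($"{sw.ElapsedMilliseconds} ms, {addrs.Length} address(es)");
    }
}
```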
I know that Postgres has `log_hostname`, which (I believe) tries to perform a reverse lookup to identify the hostname, and to my knowledge that is not supported by the Hosts file. However, this connection delay occurred regardless of whether `log_hostname` was on or off.
¯\_(ツ)_/¯
Not an urgent question since the issue has more or less been resolved, but my curiosity has the better of me. If anyone has any ideas I'd be very appreciative!
Extra info:
Postgres: PostgreSQL 14.6 on aarch64-apple-darwin20.6.0, compiled by Apple clang version 12.0.5 (clang-1205.0.22.9), 64-bit
Npgsql: 6.0.8 (but also reproduced in 7.0.6)
Platform: Mac Sonoma 14.1.1
Jmase
(21 rep)
Nov 14, 2023, 12:28 AM
• Last activity: Nov 14, 2023, 01:44 AM
1 vote • 1 answer • 209 views
NpgsqlParameter Array of smallints is receiving 0 instead of null value from postgres
I got a strange result with NpgsqlParameter in a C++/CLI project.
```
NpgsqlParameter^ status = wCMD->Parameters->Add(gcnew NpgsqlParameter("param1", NpgsqlDbType::Array | NpgsqlDbType::Smallint));
status->Direction = System::Data::ParameterDirection::InputOutput;
status->Value = DBNull::Value;
status->Size = 20;
```
The command text is:
```
CALL MySchema.MyProcedure(:param1);
```
In MyProcedure, I assign an array of NULL and 0 to param1:
```
CREATE OR REPLACE PROCEDURE MySchema.MyProcedure(
    INOUT param1 smallint[])
LANGUAGE 'plpgsql'
AS $BODY$
BEGIN
    param1 := array[null, 0];
    raise notice 'param1: %', param1;
END;
$BODY$;
```
Then I get the following strange result in the C++/CLI project: the value arrives as an array of shorts, `[0, 0]`.
Could somebody help me explain why NpgsqlParameter received an array of zeros instead of NULL and 0? Maybe NULL in C++ is 0? Or can an array not receive NULL?
How can I receive the correct result from Postgres, e.g. an array of shorts `[NULL, 0]`?
I am using Npgsql 4.1.12 and PostgreSQL 14.
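(One hedged guess, sketched in C# for brevity: a CLR array of non-nullable `short` has no representation for SQL NULL, so a NULL element can surface as `default(short)`, i.e. 0. Reading the value with a nullable element type may preserve it; whether Npgsql 4.1 supports that for an INOUT parameter is an assumption here, so the sketch reads the array via a plain SELECT instead:)
```csharp
using Npgsql;

// Hypothetical connection string; the cast makes the server return the
// same smallint[] shape the procedure produces.
using var conn = new NpgsqlConnection("Host=localhost;Database=test");
conn.Open();
using var cmd = new NpgsqlCommand("SELECT ARRAY[NULL, 0]::smallint[]", conn);
using var reader = cmd.ExecuteReader();
reader.Read();
short?[] values = reader.GetFieldValue<short?[]>(0); // expected: [null, 0]
```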
Thanks a lot.
Thanh Dong
(9 rep)
May 8, 2023, 11:17 AM
• Last activity: May 10, 2023, 04:22 AM
4 votes • 2 answers • 2381 views
Npgsql - How to pass a parameter into a DO block in a PostgreSQL query
I am trying to use a parameter passed into a PostgreSQL query inside a DO block within that query. It fails though, and seems to be parsing the statement as if the parameter was a column. Is it possible to reference a query parameter in this way?
Here is my C# code (using Npgsql and Dapper as well):
```
private static async Task ParameterInDoBlock(IDbConnection postgresConn)
{
    DynamicParameters queryParams = new DynamicParameters();
    queryParams.Add("some_variable", 123);

    // example of successfully using a parameter
    string sql = @"SELECT @some_variable;";
    var row = await postgresConn.QuerySingleOrDefaultAsync(sql, queryParams);
    Console.WriteLine($"Simple select: {row}");

    // this one unexpectedly fails
    sql = @"
DO
$$
BEGIN
    IF @some_variable = 1 THEN
        RAISE EXCEPTION 'blah';
    END IF;
END
$$;";
    await postgresConn.QuerySingleOrDefaultAsync(sql, queryParams);
    /*
    Exception data:
        Severity: ERROR
        SqlState: 42703
        MessageText: column "some_variable" does not exist
        InternalPosition: 2
        InternalQuery: @some_variable = 1
        Where: PL/pgSQL function inline_code_block line 3 at IF
        File: parse_relation.c
        Line: 3633
        Routine: errorMissingColumn
    */
}
```
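(The failure is expected: the body of `DO` is a dollar-quoted string constant executed without any parameter context, so `@some_variable` can never bind inside it. One hedged workaround is to move the logic into a temporary function that takes a real parameter, e.g.:)
```
-- Sketch: a pg_temp-scoped function receives the value as a genuine
-- parameter, so Dapper/Npgsql can bind @some_variable normally.
CREATE FUNCTION pg_temp.check_it(some_variable int) RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    IF some_variable = 1 THEN
        RAISE EXCEPTION 'blah';
    END IF;
END $$;

SELECT pg_temp.check_it(@some_variable);
```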
Anssssss
(248 rep)
Jan 23, 2023, 11:38 PM
• Last activity: Jan 24, 2023, 01:58 PM
1 vote • 1 answer • 1849 views
How to parameterize a Postgres JSON containment query with arrays of objects?
I have a JSON containment query as below. The query looks for matching objects inside arrays. This works.
The values of Name and DataType will be passed in to a .NET function, and the query built in Npgsql. I want to parameterize the values (city, string) to avoid SQL injection. How can this be achieved?
As per [this GitHub issue](https://github.com/lib/pq/issues/368), I've tried building the JSON using `jsonb_build_object`, but I need to build arrays of objects. I'm told aggregate functions can't be used in WHERE, so that doesn't work.
So, can I build an array of objects to add the parameter values to, or is there a better way of avoiding SQL injection in this query?
We are currently using Postgres 10.18, so ideally the solution should work in that. However, we will soon upgrade to 14, so better solutions for 14 would also be of interest.
```
SELECT name FROM (SELECT name, fields -> 'dimensions' AS dimensions
                  FROM data) x
WHERE dimensions @> '[{"Name": "city"}, {"DataType": "string"}]'
```
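(One hedged option: `jsonb_build_object` and `jsonb_build_array` are ordinary functions rather than aggregates, so they are allowed in WHERE and can consume bind parameters; both exist in Postgres 10. A sketch with positional parameters:)
```
SELECT name
FROM (SELECT name, fields -> 'dimensions' AS dimensions FROM data) x
WHERE dimensions @> jsonb_build_array(
          jsonb_build_object('Name', $1::text),
          jsonb_build_object('DataType', $2::text)
      );
```
Alternatively, the entire containment value can be sent from Npgsql as a single parameter of type `NpgsqlDbType.Jsonb`, keeping the SQL itself constant.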
Paul Guz
(13 rep)
Sep 8, 2022, 11:28 AM
• Last activity: Sep 8, 2022, 05:03 PM
0 votes • 0 answers • 413 views
How to efficiently do partial updates on huge dataset in Postgresql
**Disclaimer**: I am a total "newb" in regards to PgSql, but due to some unfortunate alignment of circumstances I am in charge of this project, which I have to rewrite/migrate away from SqlServer (alone), so there's that.
The process is fairly straightforward: I have multiple sources of unique data which I must check every day for updates, import the changes into table *Items* and aggregate the results into *Aggregates*.
The tables are as follows:
```
CREATE TABLE public."Items" (
    "Md5Hash" bytea NOT NULL,
    "SourceId" int4 NOT NULL,
    "LastModifiedAt" timestamp NULL,
    "PropA" int4 NOT NULL,
    "PropB" timestamp NULL,
    "PropC" bool NULL,
    ...
    CONSTRAINT "PK_Items" PRIMARY KEY ("Md5Hash", "SourceId"),
    CONSTRAINT "FK_Items_Sources_SourceId" FOREIGN KEY ("SourceId") REFERENCES "Sources"("Id") ON DELETE RESTRICT
);
CREATE INDEX "IX_Items_SourceId" ON public."Items" USING btree ("SourceId");
```
CREATE TABLE public."Aggregates" (
"Id" int4 NOT NULL GENERATED BY DEFAULT AS IDENTITY,
"UniqueIdentifier" varchar(150) NOT NULL,
"Md5Hash" bytea NOT NULL, Merge Join (cost=2.27..18501202.86 rows=1 width=409) (actual time=1323.481..625667.850 rows=262679 loops=1)
Merge Cond: ("Aggregates"."Md5Hash" = agg."Md5Hash")
Buffers: shared hit=189969175 read=2738925 dirtied=258646 written=237386
-> Sort (cost=1.70..1.70 rows=1 width=103) (actual time=1321.782..1444.692 rows=262679 loops=1)
Sort Key: "Aggregates"."Md5Hash"
Sort Method: quicksort Memory: 50901kB
Buffers: shared hit=121000 read=35
-> Index Scan using "IX_Aggregates_Aggregated" on "Aggregates" (cost=0.57..1.69 rows=1 width=103) (actual time=0.137..1080.727 rows=262679 loops=1)
Index Cond: ("Aggregated" = false)
Buffers: shared hit=121000 read=35
-> Subquery Scan on agg (cost=0.57..18432053.72 rows=27658963 width=362) (actual time=0.103..615067.274 rows=67537041 loops=1)
Buffers: shared hit=189848175 read=2738890 dirtied=258646 written=237386
-> GroupAggregate (cost=0.57..18155464.09 rows=27658963 width=169) (actual time=0.081..559315.620 rows=67537041 loops=1)
Group Key: "Items"."Md5Hash"
Buffers: shared hit=189848175 read=2738890 dirtied=258646 written=237386
-> Index Scan using "PK_Items" on "Items" (cost=0.57..7571149.04 rows=196337627 width=107) (actual time=0.052..331107.259 rows=191030895 loops=1)
Buffers: shared hit=189848175 read=2738890 dirtied=258646 written=237386
Planning Time: 0.370 ms
Execution Time: 656803.036 ms
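(One hedged observation on the plan: the index scan on `"Aggregated" = false` is estimated at rows=1 but returns 262,679 rows, which suggests badly stale statistics; refreshing them is a cheap first check:)
```
ANALYZE public."Aggregates";
ANALYZE public."Items";
```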
ConfusedUser
(1 rep)
Apr 20, 2020, 09:15 AM
• Last activity: Apr 20, 2020, 03:45 PM
0 votes • 0 answers • 151 views
SQL fast in PSQL, very slow using MS.NET driver
There's a thread with the exact same title and problem, but with no answer: the Npgsql driver takes a very long time to execute queries, while other language drivers or the terminal work fine.
From what I've found, it's related to parameter sniffing in other SQL databases, but I could not find much for Postgres.
I'm running an UPDATE statement, so it doesn't return many rows.
.NET Core 3.0, NPGSQL 4.1.1, PostgreSQL 10.9 (Ubuntu)
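(Postgres has no parameter sniffing as such, but a prepared statement can switch to a cached generic plan after several executions, and parameterised statements from Npgsql are candidates for that; a hedged way to compare plans from psql, using a hypothetical table:)
```
PREPARE upd(int) AS
    UPDATE my_table SET processed = true WHERE id = $1;  -- hypothetical table/columns

-- Repeat this; from roughly the sixth execution Postgres may choose a
-- cached generic plan, which is where timings can diverge from plain psql.
EXPLAIN ANALYZE EXECUTE upd(42);
```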
iNeedHealing255
(1 rep)
Oct 25, 2019, 06:49 AM
• Last activity: Oct 25, 2019, 09:33 AM
0 votes • 1 answer • 204 views
postgres npgsql DDL statement success or failure
I am using Npgsql with a Postgres DB, writing a DB updater that will need to execute a series of (usually) DDL statements like "ALTER TABLE ADD COLUMN blah..." and the like. The simple way to do this is with Command.ExecuteNonQuery; however, I need to know if each statement succeeded or not. ExecuteNonQuery returns "the number of rows affected", but it appears that this is always 0 for DDL regardless of success or failure, since the content of the table is not altered, only the structure.
What is the simplest way of determining the success or failure of arbitrary DDL (and update) commands read in from a text file? Can/Should/Must I wrap each statement in a procedure with an exception handler? Is there a simpler, more straightforward way?
One issue I see with wrapping in a procedure is that the procedure itself must then be part of the DB, and the update process is no longer as independent. I would like to avoid this. Of course, my updater could create the procedure if needed, and remove it after, but this seems cumbersome. Am I missing something obvious?
Edit for clarification:
@Laurenz Albe is of course completely correct: if there is a syntax error, key duplication, missing column referenced, etc., an exception will be thrown. This probably covers nearly all cases. However, I have encountered issues where the SQL is valid but may or may not actually do anything useful. For example, if there is a WHERE clause, I may be expecting some (particular) number of 'rows affected'. The absence of an exception merely tells me that the statement was valid. When can I rely on the return value of ExecuteNonQuery, when can I not, and is there a simple alternative?
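(A minimal sketch of the exception-based approach, assuming one statement per line of the input file, which is a simplification:)
```csharp
using System;
using System.IO;
using Npgsql;

static class DbUpdater
{
    public static void RunScript(NpgsqlConnection conn, string path)
    {
        // Simplification: real scripts need a proper statement splitter,
        // since DDL can span lines and contain semicolons in literals.
        foreach (string statement in File.ReadAllLines(path))
        {
            try
            {
                using var cmd = new NpgsqlCommand(statement, conn);
                int affected = cmd.ExecuteNonQuery(); // 0 or -1 for DDL; row count for DML
                Console.WriteLine($"OK ({affected}): {statement}");
            }
            catch (PostgresException ex) // server-side errors carry a SQLSTATE
            {
                Console.WriteLine($"FAILED ({ex.SqlState}): {statement}");
            }
        }
    }
}
```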
user4930
Oct 21, 2019, 07:49 PM
• Last activity: Oct 22, 2019, 11:31 AM
4 votes • 2 answers • 1465 views
Postgres 11 - Fetching results after a transaction
With the recent release of Postgres 11, it now supports procedures, and transactions are finally supported inside them. However, I have been trying to perform a transaction and retrieve a refcursor as the result.
```
CREATE OR REPLACE PROCEDURE testproc(pid text, pname text, INOUT cresults refcursor)
LANGUAGE plpgsql
AS $procedure$
BEGIN
    cresults := 'cur';
    begin
        update "testtable" set id = pid, name = pname where id = pid;
        commit;
    end;
    OPEN cresults for select * from oureadata.CMS_CATEGORY limit 1;
end;
$procedure$
```
I understand that fetching from a refcursor has to happen inside a transaction. This is how I execute the procedure:
```
BEGIN;
Call oureadata.testproc('1','2','');
fetch all in cur;
commit;
```
When I try to fetch the cursor, it throws an exception: "ERROR: invalid transaction termination".
But if I remove the COMMIT from the procedure, I can actually execute it and fetch the result of the refcursor. (The above is just an example, as I have other, more complex transactions which will return a refcursor afterwards.)
So my question is: can a procedure return an INOUT refcursor result after a transaction is done?
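(For context: Postgres documents that a procedure may only COMMIT or ROLLBACK when CALL was invoked outside an explicit transaction block, while FETCHing a refcursor requires exactly such a block, which is the tension here:)
```
-- Legal: top-level CALL, so the COMMIT inside the procedure is allowed,
-- but the refcursor dies with the statement's implicit transaction.
CALL oureadata.testproc('1','2','');

-- "invalid transaction termination": inside BEGIN ... COMMIT, which is
-- needed for FETCH, the procedure is no longer allowed to COMMIT.
BEGIN;
CALL oureadata.testproc('1','2','');
FETCH ALL IN cur;
COMMIT;
```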
deviantxdes
(141 rep)
Jul 1, 2018, 03:40 AM
• Last activity: Sep 23, 2019, 09:03 PM
2 votes • 1 answer • 1916 views
Concurrency / Transaction Concerns with PostgreSQL Copy (bulk insert)
We're building a CQRS system backed by Postgres. As part of this, we generate 'readmodels' which are precalculated projections of event sourced data. We regularly need to create new tables, query event sourced data, calculate a projection and insert thousands of rows (mostly containing jsonb data) into new tables.
These inserts need to be done inside a transaction which also includes reads and writes to other tables.
Our initial code used standard `... ON CONFLICT UPDATE` commands and achieved about 100 rows per second. However, when I changed it to use `... FROM STDIN (FORMAT BINARY)`, the throughput improved by a factor of about 100. I'm aware that I can't do 'upserts' in the same way with the second method, but was just planning to delete records before the COPY insertion (inside a transaction).
I'm thinking of using COPY in the normal flow of our application. I've received criticism, however, along the lines of "I doubt it will work. COPY is designed for bulk import. You'll have concurrency issues".
My gut feel is that with the MVCC and transactional DDL of Postgres, it's probably robust enough to handle something like this without breaking a sweat. I've had fear, uncertainty and doubt sown in me though!
My questions:
1/ Can I use COPY in normal transactions as I would other statements?
2/ Can mass COPY loads happen while the database is still live, with normal production inserts and selects still running against other tables?
3/ Can multiple connections concurrently perform COPY operations (against different tables)?
4/ Am I crazy for thinking to treat this like any other PG command? Is it designed to be used against live, production databases, or is it more of a housekeeping tool?
We're using Npgsql as the client library for Postgres.
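(Sketch of question 1/: a COPY participating in an ordinary transaction next to a DELETE, via Npgsql's binary import API; table, column, and method names here are hypothetical:)
```csharp
using System.Collections.Generic;
using Npgsql;
using NpgsqlTypes;

static class ReadmodelLoader
{
    public static void Reload(NpgsqlConnection conn, string projection, IEnumerable<string> payloads)
    {
        using var tx = conn.BeginTransaction();

        // delete-before-COPY stands in for the upsert
        using (var del = new NpgsqlCommand("DELETE FROM readmodel WHERE projection = @p", conn, tx))
        {
            del.Parameters.AddWithValue("p", projection);
            del.ExecuteNonQuery();
        }

        using (var writer = conn.BeginBinaryImport(
            "COPY readmodel (projection, payload) FROM STDIN (FORMAT BINARY)"))
        {
            foreach (var json in payloads)
            {
                writer.StartRow();
                writer.Write(projection);
                writer.Write(json, NpgsqlDbType.Jsonb);
            }
            writer.Complete(); // required on Npgsql 4+, otherwise the COPY is rolled back
        }

        tx.Commit();
    }
}
```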
Thanks!
Damien Sawyer
(225 rep)
Aug 12, 2018, 06:04 AM
• Last activity: Aug 12, 2018, 07:18 AM
0 votes • 1 answer • 1510 views
What is the relationship between ADO.NET and Npgsql?
There is a sentence in the Npgsql official page:
> Npgsql is an open source ADO.NET Data Provider for PostgreSQL.
I know that I can access a PostgreSQL database using Npgsql with the .NET framework.
Is Npgsql a subset of ADO.NET or is it derived from ADO.NET?
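(For illustration: Npgsql implements the abstract ADO.NET base classes from System.Data.Common, which is what "ADO.NET Data Provider" means in practice. A small sketch with a hypothetical connection string:)
```csharp
using System.Data.Common;
using Npgsql;

// NpgsqlConnection derives from DbConnection and NpgsqlCommand from
// DbCommand, so code written against the ADO.NET abstractions runs
// unchanged on top of Npgsql.
DbConnection conn = new NpgsqlConnection("Host=localhost;Database=test"); // hypothetical
conn.Open();
DbCommand cmd = conn.CreateCommand();
cmd.CommandText = "SELECT version()";
System.Console.WriteLine(cmd.ExecuteScalar());
```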
FortisFortuna
(1 rep)
Dec 17, 2016, 12:58 PM
• Last activity: May 6, 2018, 04:58 AM
4 votes • 4 answers • 20141 views
Trying to connect to PostgreSQL with Entity Framework via Npgsql and C#
So, I'm new to databases and only have done a little with PostgreSQL before. I have C# (visual studio 2012) and have downloaded the Entity framework (6). I have also downloaded the latest Npgsql data connection driver and used this tutorial:
Using Entity Framework 6 with Npgsql 2.1.0
Now I have of course installed PostgreSQL and referenced the two DLLs in my C# project: `npgsql.dll` and `npgsql.entityframework.dll`.
Now I need to know how to add, view, delete stuff via C# but I can't find any material on using C#, Npgsql and Entity Framework together.
With no material on using Npgsql and Entity Framework 6, I have instead found this tutorial for version 5:
Create Entity Data Model in Entity Framework 5.0
However, under "Choose Your Data Connection", when I add a new connection, there is no option to use Npgsql. I really am confused and have been trying to control PostgreSQL via C# for about a week now; I have tried almost every link I can find on Google and am close to giving up. A lot of tutorials show how to drive PostgreSQL directly via Npgsql without using Entity Framework, but I really wanted to use the Entity Framework because I've been told it makes manipulating the database much easier.
Joseph
(63 rep)
Aug 2, 2014, 03:46 PM
• Last activity: May 6, 2018, 04:56 AM
3 votes • 0 answers • 3972 views
Npgsql Timeout for simple query when run in parallel
I'm struggling to determine the cause of an exception when the code is run in parallel, but works fine when run one at a time.
As background information, I'm running a set of unit tests, where each test picks up a database from a pool of databases and clears the data from it. The pool is created just in time, one database at a time as needed. (I acquire a `pg_advisory_lock` around creation of the database to prevent threading issues seemingly inside of Postgres.) The Postgres server is hosted inside of a Vagrant machine running `ubuntu/xenial64` and Postgres 9.6.
When calculating the tables to clear data from, I run the following SQL:
```
SELECT target_table.table_schema || '.' || target_table.table_name AS TargetTable,
       source_table.table_schema || '.' || source_table.table_name AS SourceTable,
       source_column.column_name AS SourceColumn,
       source_column.is_nullable::boolean AS SourceColumnIsOptional
FROM information_schema.table_constraints AS source_table
JOIN information_schema.key_column_usage AS kcu ON source_table.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage AS target_table ON target_table.constraint_name = source_table.constraint_name
JOIN information_schema.columns AS source_column ON kcu.table_name = source_column.table_name AND kcu.table_schema = source_column.table_schema AND kcu.column_name = source_column.column_name
WHERE constraint_type = 'FOREIGN KEY';
```
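(If the cost of the query itself turns out to matter, a hedged equivalent straight against `pg_catalog`, which tends to be far cheaper than the information_schema views; a sketch, not a drop-in replacement, e.g. name formatting differs:)
```
SELECT confrelid::regclass AS target_table,
       conrelid::regclass  AS source_table,
       a.attname           AS source_column,
       NOT a.attnotnull    AS source_column_is_optional
FROM pg_constraint c
JOIN unnest(c.conkey) AS col(attnum) ON true   -- one row per FK column
JOIN pg_attribute a
  ON a.attrelid = c.conrelid AND a.attnum = col.attnum
WHERE c.contype = 'f';
```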
When run one test at a time, it works fine. When run inside the DataGrip console, it takes about 750ms to execute (which is surprisingly slow, but good enough). However, when I run several unit tests concurrently (e.g. 3 or more), the following exception is thrown pretty reliably:
```
Npgsql.NpgsqlException
  HResult=0x80004005
  Message=Exception while reading from stream
  Source=
  StackTrace:
   at Npgsql.ReadBuffer.d__27.MoveNext()
   --- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Npgsql.NpgsqlConnector.d__157.MoveNext()
   --- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.ValueTaskAwaiter`1.GetResult()
   at Npgsql.NpgsqlConnector.d__156.MoveNext()
   --- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.ValueTaskAwaiter`1.GetResult()
   at Npgsql.NpgsqlConnector.d__163`1.MoveNext()
   --- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.ValueTaskAwaiter`1.GetResult()
   at Npgsql.NpgsqlDataReader.d__32.MoveNext()
   --- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Npgsql.NpgsqlDataReader.NextResult()
   at Npgsql.NpgsqlCommand.d__71.MoveNext()
   --- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.ValueTaskAwaiter`1.GetResult()
   at Npgsql.NpgsqlCommand.d__92.MoveNext()
   --- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.ValueTaskAwaiter`1.GetResult()
   at Npgsql.NpgsqlCommand.ExecuteDbDataReader(CommandBehavior behavior)
   at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
   at PeregrineDb.Databases.Mapper.SqlMapper.d__10`1.MoveNext()
   at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
   at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
   at PeregrineDb.Databases.DefaultSqlConnection.Query[T](SqlCommand& command, Nullable`1 commandTimeout)
   at PeregrineDb.Databases.DefaultDatabase`1.PeregrineDb.ISqlConnection.Query[T](SqlCommand& command, Nullable`1 commandTimeout)
   at InteractiveReports.Infrastructure.Postgres.PgDatabaseWiper.GenerateCommands(String connectionString, String lockKey) in C:\Dev\Git\IReports\src\InteractiveReports.Infrastructure\Postgres\PgDatabaseWiper.cs:line 40

Inner Exception 1:
IOException: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

Inner Exception 2:
SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
```
I use the connection string `Server=10.10.3.202; Port=5432; User Id=postgres; Password=postgres123; Pooling=false;` and automatically insert a database name with a GUID in it. I've disabled pooling so that the database can be dropped reliably after disposing of all NpgsqlConnections.
I'm not sure where to start looking in diagnosing why I get a timeout when run in parallel.
---
**Update**
When I skip this particular query, everything works fine, so I'm pretty sure it's something about that query. I've added a `pg_advisory_lock` around it to try to make it serial, but it still errors!
The only thing I can think of is that it's querying the information_schema "too soon" after creating the database (which is made from scratch, with ~70 tables created). But I don't know why it only errors when done in parallel. Does querying the `information_schema` of one database get blocked by the creation of another database?
berkeleybross
(175 rep)
May 2, 2018, 12:47 PM
• Last activity: May 3, 2018, 04:34 AM
Showing page 1 • 13 total questions