Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answer
391
views
Optimizing TimescaleDB Setup for High-Volume Time-Series Data
I'm seeking advice on how to optimize my TimescaleDB setup, which handles a large volume of time-series data. I have around 20,000 time-series profiles with a one-year duration, using a quarterly time resolution (4 timestamps per hour). This amounts to approximately 700 million entries. My database is hosted on an Azure PostgreSQL server.
Here are the details of my setup:
**Hardware Specifications:**
4 vCores
16 GiB memory
512 GB storage
**Database Structure:**
I have two tables, one for the load profiles with the columns (id, time, value, sensor_id), and another table with the columns (id, sensor_id). There are two indexes on the load profile table, one on (sensor_id, time), and another on sensor_id.
**Sample Query:**
A typical query I use to aggregate data is:
SELECT AVG(value), time
FROM public.loadprofilepool
WHERE sensor_id IN (
SELECT id
FROM public.sensor_table
ORDER BY RANDOM()
LIMIT 500
)
GROUP BY time;
Please note that this is a sample query where the list of sensor_ids is generated on the fly. In a real situation, the list of ids would come from elsewhere.
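(Editorial sketch, hedged: when the id list is supplied by the application, one common query-formulation tactic is to pass it as an array parameter so the planner can drive the (sensor_id, time) index directly, instead of evaluating a RANDOM()-ordered subquery on every run. Table and column names are the question's own; the ids are illustrative.)

```sql
-- Hypothetical rewrite: the application supplies the sensor ids as an array,
-- letting the planner do one index range scan per sensor on (sensor_id, time).
SELECT time, AVG(value) AS avg_value
FROM public.loadprofilepool
WHERE sensor_id = ANY (ARRAY[101, 102, 103])  -- illustrative ids
GROUP BY time
ORDER BY time;
```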
**Data Distribution:**
For now, there are 24 * 4 * 365 rows (one year duration, quarterly) per sensor, and there are 20,000 sensors. In the future, there will also be live sensor data, whose distribution will depend on the specific sensor.
**Performance Metrics:**
When running these queries, the CPU usage does not exceed 20% and memory usage is constant at about 40%.
Given these details, I'm struggling with query speed. Extracting 10 to 1000 profiles and summing them up to generate a time series value for each timestamp currently takes about 5 to 20 seconds, whereas my target is less than 5 seconds.
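(Editorial sketch, hedged: for this aggregate-over-many-series pattern, the TimescaleDB feature usually suggested is a continuous aggregate, which precomputes per-bucket averages so later queries touch far fewer rows. A minimal sketch, assuming loadprofilepool is already a hypertable partitioned on time; the view name and bucket size are illustrative.)

```sql
-- Precompute hourly averages per sensor; queries against the view then
-- combine 1/4 as many rows as the raw 15-minute data.
CREATE MATERIALIZED VIEW loadprofile_hourly
WITH (timescaledb.continuous) AS
SELECT sensor_id,
       time_bucket('1 hour', time) AS bucket,
       AVG(value) AS avg_value
FROM public.loadprofilepool
GROUP BY sensor_id, time_bucket('1 hour', time);
```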
**My questions are as follows:**
1. Is my current setup the most efficient for handling and querying this volume and type of time-series data? If not, could you suggest alternative methods? I've considered NoSQL databases, cloud storage with Zarr or NetCDF files, but I'm not sure which, if any, would be more suitable.
2. How can I optimize my current setup to achieve faster query results? Are there specific TimescaleDB or PostgreSQL configurations or optimizations, indexing strategies, or query formulation tactics that would help improve performance?
Thank you in advance for your help. Any suggestions or guidance would be greatly appreciated.
Best regards,
Hannes
Hannes
(1 rep)
Jul 5, 2023, 03:20 PM
• Last activity: May 8, 2025, 03:08 PM
3
votes
2
answers
17096
views
psql (PostgreSQL client) tool install on RHEL
I would like to install just the PostgreSQL client tool (psql) on RHEL in a container, to connect to an Azure Database for PostgreSQL server. Could you please point me to step-by-step instructions on how to do this? I do not want to install the entire PostgreSQL server.
kevin
(133 rep)
Sep 1, 2021, 09:21 AM
• Last activity: May 22, 2024, 09:09 AM
3
votes
0
answers
318
views
Why is the admin user becoming a member of a newly created role in Postgres?
I am quite new to PostgreSQL. We have a database server on Azure Postgres Flexible Server. I log in to that server using the admin credentials, dmsadmin. Then I create a new role using the command `CREATE ROLE dev_user1`. Now running `\du` gives me the following output:
Role name | Attributes | Member of
-------------------+------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------
azure_pg_admin | Cannot login | {pg_read_all_data,pg_monitor,pg_read_all_settings,pg_read_all_stats,pg_stat_scan_tables,pg_checkpoint,pg_create_subscription}
azuresu | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
dev_user1 | | {}
dmsadmin | Create role, Create DB, Bypass RLS | {pg_read_all_settings,pg_read_all_stats,pg_stat_scan_tables,azure_pg_admin,dev_user1}
replication | Replication | {pg_use_reserved_connections}
Now, if I understand the output correctly, it says that `dmsadmin` is a member of `dev_user1`. So I tried to remove that membership using the command `REVOKE dev_user1 FROM dmsadmin`. Then I got the following warning:
WARNING: role "dmsadmin" has not been granted membership in role "dev_user1" by role "dmsadmin".
How is that possible?
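(Editorial sketch, hedged: the grantor of each role membership is recorded in the pg_auth_members catalog; inspecting it typically shows that on Azure the membership was created by an internal role such as azuresu rather than by dmsadmin, which is why dmsadmin's REVOKE is refused. The catalog and column names below are standard PostgreSQL.)

```sql
-- Show who granted dmsadmin each of its role memberships.
SELECT r.rolname AS member_of,
       g.rolname AS granted_by
FROM pg_auth_members am
JOIN pg_roles r ON r.oid = am.roleid
JOIN pg_roles m ON m.oid = am.member
JOIN pg_roles g ON g.oid = am.grantor
WHERE m.rolname = 'dmsadmin';
```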
Thank you in advance.
Amit Kumar
(131 rep)
May 3, 2024, 01:29 PM
1
vote
1
answer
3501
views
Azure Postgres Unable to create database - errors with must be able to SET_ROLE
We've set up an Azure Postgres database and are trying to run our database/schema/table creation scripts. These scripts have worked fine on our VM/server-based database hosts, where we connect to the database as:
sudo -u postgres psql
But for AZ Postgres we connect remotely using:
psql -h mypsql.postgres.database.azure.com -U admuser -d postgres
The connection works fine; then we start our scripts, which contain the following, and it fails on the `create database` step:
postgres=> create user rsm_user with login password 'test123';
CREATE ROLE
postgres=> create role rsm_role;
CREATE ROLE
postgres=> grant rsm_role to rsm_user;
GRANT ROLE
postgres=> create database rsm_db with owner = rsm_role;
ERROR: must be able to SET ROLE "rsm_role"
postgres=> create database rsm_db with owner = rsm_user;
ERROR: must be able to SET ROLE "rsm_user"
Looking at the differences between our local and AZ remote instances, we see that locally the superuser is `postgres`:
postgres=# \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
rsm_user | | {rsm_role}
rsm_role | | {}
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
On the AZ remote:
List of roles
Role name | Attributes | Member of
----------------+------------------------------------------------------------+----------------------------------------------------------------------------------------------------------
admuser | Create role, Create DB | {pg_read_all_settings,pg_read_all_stats,pg_stat_scan_tables,azure_pg_admin,rsm_user,rsm_role}
azure_pg_admin | Cannot login | {pg_monitor}
azuresu | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
replication | Replication | {pg_use_reserved_connections}
rsm_role | Cannot login | {}
rsm_user | | {rsm_role}
The local `postgres` user seems to be the equivalent of the remote `azuresu` user, and the remote `admuser` that we log in as does not have the same level of privileges as `postgres`.
Is there a way to connect as `azuresu` on the remote instance? Or is there another step we need to allow the `CREATE DATABASE` command to work?
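(Editorial sketch, hedged: on PostgreSQL 16, `CREATE DATABASE ... OWNER` requires the creating role to be able to `SET ROLE` to the owner, and a membership granted `WITH SET FALSE`, as Azure appears to do here, does not confer that. Re-granting the membership with the SET option, by a role holding ADMIN on rsm_role, should satisfy the check; role names are the question's own, and the exact behavior depends on the server version.)

```sql
-- Re-grant the membership so it carries the SET option (PostgreSQL 16 syntax),
-- then the CREATE DATABASE ... OWNER check can succeed.
GRANT rsm_role TO admuser WITH SET TRUE;
CREATE DATABASE rsm_db WITH OWNER = rsm_role;
```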
Pro West
(143 rep)
Mar 23, 2024, 01:58 PM
• Last activity: Mar 23, 2024, 02:51 PM
1
vote
1
answer
1281
views
Why is DISCARD ALL called many times in Azure PostgreSQL?
I am debugging a high CPU load issue with Azure PostgreSQL flexible server. In the time window where CPU usage spikes, I am seeing a lot of utility queries. They don't take long to process, but there are hundreds of thousands of them.
The queries are all `DISCARD ALL` queries. What are these queries for, and is it normal to see so many of them?
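(Editorial note, hedged: `DISCARD ALL` is almost always issued by a connection pooler, e.g. it is PgBouncer's default `server_reset_query`, each time a server connection is handed back to the pool, so a high rate usually just mirrors a high connection-churn rate. Per the PostgreSQL documentation, it is equivalent to this sequence:)

```sql
-- What DISCARD ALL expands to (PostgreSQL docs):
CLOSE ALL;
SET SESSION AUTHORIZATION DEFAULT;
RESET ALL;
DEALLOCATE ALL;
UNLISTEN *;
SELECT pg_advisory_unlock_all();
DISCARD PLANS;
DISCARD TEMP;
DISCARD SEQUENCES;
```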
DarkLeafyGreen
(113 rep)
Jan 26, 2024, 08:00 AM
• Last activity: Jan 26, 2024, 10:18 AM
0
votes
0
answers
365
views
Permission Denied on Select tables after giving Select permission to new user in Azure Postgres SQL single server database
I have an Azure PostgreSQL single server database on which I need to give read permission to a newly created user:
CREATE USER "test@hostName" PASSWORD 'XXXX';
GRANT CONNECT ON DATABASE "test" TO "test@hostName";
ALTER DEFAULT PRIVILEGES
FOR USER pgadmin
IN SCHEMA public
GRANT SELECT ON TABLES TO "test@hostName";
But when this user executes a SELECT query, a "Permission Denied" message appears and the select is not allowed. Any help is appreciated.
This is the result I got when running the query:
SELECT *
FROM information_schema.role_table_grants
WHERE grantee = 'test@hostName';
[screenshot: role_table_grants output]
Below is the result from pg_class for the selected table. It shows that pgadmin has full rights and the required user has read access:
SELECT relacl FROM pg_class WHERE relname = 'tablename';
result:
"{pgadmin=a*r*w*d*D*x*t*/pgadmin,""\""d_reader@adb-psql-tckt-history\""=r/pgadmin""}"
SOUser
(31 rep)
Jun 16, 2023, 12:45 PM
• Last activity: Jun 19, 2023, 06:40 AM
5
votes
2
answers
4119
views
Azure postgresql really slow compared to VM
I have worked with PostgreSQL (and PostGIS) for years on a VM with Docker containers, and I'm used to tweaking server parameters and optimizing queries, but I have never had the kind of problems I have with Azure PostgreSQL.
The problem is the following: writing is slow (usually around 2x slower compared to normal PG), but VACUUM and index creation are EXTREMELY slow. At one point we needed to create a PostGIS index on 500 million rows; on another server it takes around 30 minutes, but on the Azure instance it takes **MORE THAN A DAY**.
I tried to modify server parameters and checked the queries, then finally went back to basics to check the performance of the server, and I still don't understand if there is something I missed or if Azure Postgres has a big problem.
Here is what I did to compare:
- Servers:
  - PG TEST:
    - basic Docker postgres 11 hosted on a VM (SSD, enough RAM and cores, but it's not really relevant)
    - command to launch: `docker run --rm -e POSTGRES_PASSWORD=pass -d postgres:11.14-stretch`
  - PG AZURE:
    - Azure Database for PostgreSQL, single server, memory optimized, 8 cores, 1028 GB of storage
    - default server parameters
- Test from local PC (I don't really care about response time):
  - use pgbench and create a table with scale factor 50 (cmd: `pgbench -i -s 50`)
  - look at the time to create the table, vacuum, and index
This is the simplest and most reproducible test I could find. The exact result can vary a little, but it's always around the same idea:
[screenshot: result table]
Read performance seems to be pretty good, so it's really a write problem, and the most problematic part for us is index creation; I don't see what can explain that.
This is a big problem for us, and I doubt that this is the kind of difference that can change with a few tweaks to the parameters, unless there is something specific to Azure?
Did I miss something big? Am I the only one who has this kind of performance? Or is it something limited to the system itself? (I read that disk size impacts IOPS, but looking at the graphs there doesn't seem to be a problem there, and we tried adding disk and it didn't change much.) Maybe flexible servers don't have this issue?
**EDIT:**
We tested creating new Azure Postgres services with 3 different types of services (of the 4 possible), freshly created and unmodified, to be sure. I ran the same test as before, 2 times for each service, and took the mean. I added the reference (called above PG TEST) and the one we have in production (called above PG AZURE). There were a few changes from the previous results, and flexible server seems to be a bit better, but the problem is still the same:
[screenshot: new result table]
robin loche
(150 rep)
Jan 14, 2022, 04:14 PM
• Last activity: Jun 1, 2023, 04:08 AM
1
vote
0
answers
300
views
What is Azure doing to cause PostgreSQL Flexible Server to throw error 08P01?
I'm running a Ruby on Rails application against Postgres servers. For a long time, I've struggled with the background `delayed_job` process falling over at seemingly random times and needing to be restarted. The error messages in the `delayed_job` stack traces are ambiguous, but they seem to suggest that the database instance is being interrupted.
The other morning, `delayed_job` and three other long-running jobs were all killed. Thanks to networking problems at my company, I've finally started logging on my database instance, and it looks like these process-killing problems happen when the database logs an error code of "08P01". According to the PostgreSQL docs, that's a connection exception with a protocol violation. Nothing else should be connecting to the database right now, while the "housekeeping" is going on.
Maybe it could be the "critical" updates that it says will be automatically applied? But I can't imagine that these are happening around once a week, and I can't find a log of this activity. It seems like there should be a public list to check against. Maybe these are logged, and now that I'm logging, I'll see them?
This feels like some sort of housekeeping that Microsoft is doing on Azure, restarting the database or rerouting traffic to a different backhaul, but I don't know what that would be, let alone how to stop it (or at least schedule it). Does anyone know what could be interrupting my database connections every "several" days, and killing my jobs?
2022-12-09 11:16:15 UTC-639318ff.af6-LOG: invalid length of startup packet 08P01
2022-12-06 22:23:51 UTC-638fc0f7.5eb-LOG: invalid length of startup packet 08P01
I now see that I have 2 of these on the 8th. This is what I expected to get for "security" updates, but didn't know where to look. Unfortunately, it doesn't correlate with errors from the 6th and the 9th.
At Thursday, December 8, 2022 at 6:38:57 AM EST, the Azure monitoring system received the following information regarding your Azure Database for PostgreSQL Flexible Server:
Due to maintenance task being performed on your Azure Database for PostgreSQL - Flexible server, the server is temporarily unavailable.
At Thursday, December 8, 2022 at 6:38:49 AM EST, the Azure monitoring system received the following information regarding your Azure Database for PostgreSQL Flexible Server:
Due to maintenance task being performed on your Azure Database for PostgreSQL - Flexible server, the server is temporarily unavailable.
David Krider
(77 rep)
Dec 9, 2022, 06:35 PM
• Last activity: Dec 10, 2022, 05:33 PM
-1
votes
2
answers
2073
views
Postgres simple SQL select query: latency sometimes jumps to 900ms
I have an "insert only" table that store user information - id `bigint` -> Primary key, not null - user_id `bigint` -> foreign key, not null - first_name `character varying` null - middle_name `character varying` null - last_name `character varying` not null - email `character varying` not null - bi...
I have an "insert only" table that store user information
- id
- executing two queries with
bigint
-> Primary key, not null
- user_id bigint
-> foreign key, not null
- first_name character varying
null
- middle_name character varying
null
- last_name character varying
not null
- email character varying
not null
- birth_date date
not null
- phone_number character varying
not null
- mobile_number character varying
not null
- street_name character varying
null
and a user table (insert + update) (most of the times, I don't query this table)
- id
primary key, not null
- created_at
date, not null
- modified_at
date, null
I have an index on birth_date
column in user_info
table
and my query looks like this
SELECT * FROM user_info WHERE birth_date = @p1 and (first_name = @p2 or email = @p3) and (last_name = @p4 or email =@p3);
Most of the time, the query latency on the production environment is 7-26 ms, but sometimes, and it's actually happening frequently, the latency jumps up to 900 ms, so I see in the logs 3-4 queries with latencies of 180, 200, 700, 900 ms, and then it goes back to 7-26 ms.
- The table `user_info` is insert-only, so no updates or deletes
- I am using PostgreSQL 11
- The table `user_info` contains 5 million records; the data distribution looks good on production, most birth_date(s) have between 200-1000 records, but only one particular birth date has 110K records (would this be the issue?)
- Server configuration is the default, as specified for Azure Postgres single server
- Server specs are 4 vCPU, 20 GB memory, Gen 5
- Maximum concurrent connections to the production db is 20 per second
- Best-case scenario: only one SQL query is executed (the one attached in this question)
- Worst-case scenario: first, the SQL query attached in this question; second, an insert into the `user` table; third, an insert into the `user_info` table
- Another scenario: first, the SQL query attached in this question; second, an insert into the `user_info` table; third, an update of the modified time in the `user` table
- Querying `pg_stat_user_tables`, trying to understand how many sequential scans vs. index scans, yielded the following result:
[screenshot: pg_stat_user_tables output]
- Executing two queries with `explain analyze` from my local machine using pgAdmin against the Azure postgres server, one with the birth date that has the highest count and one with another birth date, the rest of the values completely random, yielded the following results:
[screenshot: explain analyze output]
My problem is:
- I need the latency to not exceed a certain threshold
My questions are:
- What is the reason behind the latency jump on the `user_info` table?
- Would breaking the relationship between the two tables solve the issue? Perhaps when doing updates on the `user` table, the `user_info` table needs to update the foreign key column, because an update is actually an insert plus a delete, which causes dead rows in the table?
- Is it the data distribution? And what can I do to improve the latency?
**UPDATED:**
I enabled auto_explain on production, with a condition to log only queries with latency > 40 ms, and ran an automated script that queries existing data, 4 of the queries using the birth date that has 100K rows. As expected, I saw logs only for that particular birth date value:
2022-09-19 {TIME REDACTED}-LOG: duration: 42.421 ms plan:
Query Text:
select * from unf.user_info unf
where (
unf.birth_date = $1
and ( unf.first_name = $2 or unf.email = $3 )
and ( unf.last_name = $4 or unf.email = $3 ))
Index Scan using idx_b_date on unf.user_info unf (cost=0.43..76483.96 rows=1 width=112) (actual time=0.044..42.411 rows=2 loops=1)
Output: id, user_id, birth_date, first_name, last_name, email, phone_number, mobile_number, street_name, created_date
Index Cond: (unf.birth_date = '{REDACTED} 00:00:00'::timestamp without time zone)
Filter: ((((unf.last_name)::text = '{REDACTED}'::text) OR ((unf.email)::text = '{REDACTED}'::text)) AND (((unf.first_name)::text = '{REDACTED}'::text) OR ((unf.email)::text = '{REDACTED}'::text)))
Rows Removed by Filter: 119228
Buffers: shared hit=11025
2022-09-19 {TIME REDACTED}--LOG: duration: 41.370 ms plan:
Query Text:
select * from unf.user_info unf
where (
unf.birth_date = $1
and ( unf.first_name = $2 or unf.email = $3 )
and ( unf.last_name = $4 or unf.email = $3 ))
Index Scan using idx_b_date on unf.user_info unf (cost=0.43..76483.96 rows=1 width=112) (actual time=0.087..41.359 rows=2 loops=1)
Output: id, user_id, birth_date, first_name, last_name, email, phone_number, mobile_number, street_name, created_date
Index Cond: (unf.birth_date = '{REDACTED} 00:00:00'::timestamp without time zone)
Filter: ((((unf.last_name)::text = '{REDACTED}'::text) OR ((unf.email)::text = '{REDACTED}'::text)) AND (((unf.first_name)::text = '{REDACTED}'::text) OR ((unf.email)::text = '{REDACTED}'::text)))
Rows Removed by Filter: 119228
Buffers: shared hit=11025
2022-09-19 {TIME REDACTED}--LOG: duration: 41.709 ms plan:
Query Text:
select * from unf.user_info unf
where (
unf.birth_date = $1
and ( unf.first_name = $2 or unf.email = $3 )
and ( unf.last_name = $4 or unf.email = $3 ))
Index Scan using idx_b_date on unf.user_info unf (cost=0.43..76483.96 rows=1 width=112) (actual time=0.079..41.682 rows=2 loops=1)
Output: id, user_id, birth_date, first_name, last_name, email, phone_number, mobile_number, street_name, created_date
Index Cond: (unf.birth_date = '{REDACTED} 00:00:00'::timestamp without time zone)
Filter: ((((unf.last_name)::text = '{REDACTED}'::text) OR ((unf.email)::text = '{REDACTED}'::text)) AND (((unf.first_name)::text = '{REDACTED}'::text) OR ((unf.email)::text = '{REDACTED}'::text)))
Rows Removed by Filter: 119228
Buffers: shared hit=11025
2022-09-19 {TIME REDACTED}--LOG: duration: 40.581 ms plan:
Query Text:
select * from unf.user_info unf
where (
unf.birth_date = $1
and ( unf.first_name = $2 or unf.email = $3 )
and ( unf.last_name = $4 or unf.email = $3 ))
Index Scan using idx_b_date on unf.user_info unf (cost=0.43..76483.96 rows=1 width=112) (actual time=0.057..40.568 rows=2 loops=1)
Output: id, user_id, birth_date, first_name, last_name, email, phone_number, mobile_number, street_name, created_date
Index Cond: (unf.birth_date = '{REDACTED} 00:00:00'::timestamp without time zone)
Filter: ((((unf.last_name)::text = '{REDACTED}'::text) OR ((unf.email)::text = '{REDACTED}'::text)) AND (((unf.first_name)::text = 'n'::text) OR ((unf.email)::text = '{REDACTED}'::text)))
Rows Removed by Filter: 119228
Buffers: shared hit=11025
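(Editorial sketch, hedged: each plan above walks the skewed birth date via `idx_b_date` and then discards 119,228 heap rows in the filter; a multicolumn index that lets the name and email predicates be checked inside the index is the standard remedy for this shape. Index names are illustrative, and both may be needed because of the OR on email.)

```sql
-- Narrow the skewed birth_date to matching rows inside the index instead of
-- filtering ~119K heap rows per execution.
CREATE INDEX idx_user_info_bdate_lname ON unf.user_info (birth_date, last_name);
CREATE INDEX idx_user_info_bdate_email ON unf.user_info (birth_date, email);
```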
Rickless
(101 rep)
Sep 18, 2022, 11:25 AM
• Last activity: Sep 19, 2022, 11:47 AM
0
votes
1
answer
317
views
Azure PostgreSQL archiver exited with exit code 1
I'm using an Azure PostgreSQL server to store data for my application, which is hosted on Azure App Service. Yesterday, at some point, the application started to throw exceptions about closed connections and broken pipes.
At first glance it looked like a problem with the `wait_time` property of the datasource being longer than the database's. However, when I looked at the database and went through its logs, I found the following entries. I'm mostly interested in the cause of this archiver issue, and whether it could be related to my connections being "dropped" so that the app stopped working:
2020-07-11 22:43:42 UTC-5f0866f0.f0-LOG: checkpoint complete (240): wrote 16 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=2.046 s, sync=0.031 s, total=2.390 s; sync files=13, longest=0.005 s, average=0.001 s; distance=77 kB, estimate=14627 kB
2020-07-11 22:47:06 UTC-5f0a416a.891c-LOG: connection received: host=127.0.0.1 port=49431 pid=35100
2020-07-11 22:47:06 UTC-5f0a416a.891c-LOG: connection authorized: user=azure_superuser database=azure_sys SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, compression=off)
2020-07-11 22:48:07 UTC-5f0a41a7.8924-LOG: connection received: host=127.0.0.1 port=49689 pid=35108
2020-07-11 22:48:07 UTC-5f0a41a7.8924-LOG: connection authorized: user=azure_superuser database=postgres SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, compression=off)
2020-07-11 22:48:09 UTC-5f0866f0.f0-LOG: checkpoint starting: immediate force wait
2020-07-11 22:48:10 UTC-5f0866f0.f0-LOG: checkpoint complete (240): wrote 30 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.046 s, sync=0.046 s, total=0.437 s; sync files=13, longest=0.009 s, average=0.003 s; distance=163 kB, estimate=13180 kB
2020-07-11 22:48:10 UTC-5f0866ee.68-LOG: received immediate shutdown request
2020-07-11 22:48:10 UTC-5f0a3f41.8914-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0a3f41.8914-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f0a3f41.8914-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0a3f35.8918-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0a3f35.8918-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f0a3f35.8918-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0a3f2b.8784-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0a3f2b.8784-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f0a3f2b.8784-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0a3f0d.8780-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0a3f0d.8780-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f0a3f0d.8780-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0a3ef1.8778-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0a3ef1.8778-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f0a3ef1.8778-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0a3ee7.8770-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0a3ee7.8770-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f0a3ee7.8770-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0a3eda.8774-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0a3eda.8774-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f0a3eda.8774-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0a3ece.8730-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0a3ece.8730-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f0a3ece.8730-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0a3ec6.872c-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0a3ec6.872c-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f0a3ec6.872c-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0a3ea9.8728-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0a3ea9.8728-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f0a3ea9.8728-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0a1412.7acc-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0a1412.7acc-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f0a1412.7acc-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f09f8a7.72a8-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f09f8a7.72a8-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f09f8a7.72a8-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0866f2.7c-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0866f2.7c-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f0866f2.7c-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0866ee.68-LOG: src/port/kill.c(84): Process (272) exited OOB of pgkill.
2020-07-11 22:48:10 UTC-5f0866f1.fc-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0866f1.fc-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f0866f1.fc-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0866ee.68-LOG: archiver process (PID 256) exited with exit code 1
2020-07-11 22:48:11 UTC-5f0866ee.68-LOG: database system is shut down
ashur
(103 rep)
Jul 12, 2020, 08:00 PM
• Last activity: Jul 13, 2020, 06:31 AM
1
vote
0
answers
270
views
pg_restore idling during second half of restore
We're migrating from PostgreSQL hosted on our VM to the Azure Database for PostgreSQL managed database. One of the first problems we hit has been the time it takes to restore a database dump. It took 20-30 min on our VM, but on the hosted solution it takes more than 4 hours.
We are aware that there are many differences between these two environments, but the thing confusing us the most is resource utilisation during pg_restore execution. Execution started at 10:23am and finished at 2:37pm, but in the period 12:46pm-2:37pm CPU and IO are both practically idling, and we're having trouble understanding why.
Here's the graph of resource utilisation over that time:
[screenshot: resource utilisation percentage graph]
Additional info:
- Our VM we are moving from is: Standard F4s (4 vCPUs, 8 GB memory) with 230 GB Premium SSD storage.
- Our hosted PostgreSQL environment is: Gen 5 General Purpose (2 vCores, 10 GB memory) with 122 GB Azure Premium Storage.
- Database is around 20 GB in size.
- We tried playing with the number of jobs (2 and 4) and increasing maintenance_work_mem via the PGOPTIONS environment variable to 640 MB, with no effect.
Does anyone have any ideas what might be causing this idling? For the first phase, it's clear that we need to increase the IOPS limit (e.g. by purchasing more GB, as the IOPS limit is set by Azure to 3 IOPS/GB) to speed it up, but the idling is puzzling us.
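(Editorial sketch, hedged: during the idle-looking phase, a catalog query showing what each restore backend is doing, and what it is waiting on, usually identifies the bottleneck, e.g. a single serial CREATE INDEX or post-data constraint validation.)

```sql
-- Run while pg_restore is "idling": shows each backend's state and wait event.
SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS running_for,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE datname = current_database()
ORDER BY query_start;
```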

mario
(11 rep)
Jan 8, 2020, 03:41 PM
-1
votes
2
answers
244
views
What are the SSL options to connect my new users to my PostgreSQL database
I want to create a new user in my PostgreSQL database hosted on Azure.
So, following the documentation from Microsoft for pgAdmin, I went to pgAdmin and ran the following commands:
CREATE ROLE rikki WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD 'Fete321@';
GRANT azure_pg_admin TO rikki;
FLUSH PRIVILEGES;
It worked well. Yet, when trying to create a new server connection with this new login, I had some issues:
- I disabled the SSL connection, so I no longer get that error
- For the password error, I just copied and pasted the password I had just created, but it still gives back the error.
So, can I create new users that are able to connect remotely to my PostgreSQL database?
I tried to connect through the command line and got:
C:\Program Files\PostgreSQL\12\bin>psql --host=######.postgres.database.azure.com --port=5432 --username=rikki@###### --dbname=capi
Password for user rikki@######:
psql: error: could not connect to server: FATAL: password authentication failed for user "rikki"
FATAL: password authentication failed for user "rikki"

Revolucion for Monica
(677 rep)
Nov 22, 2019, 09:59 AM
• Last activity: Nov 22, 2019, 11:13 AM
1
vote
2
answers
1774
views
How to automate the detection of the column type of a csv file for a table creation script?
I am new to pgAdmin and Azure databases. I have a huge csv file with around 220 columns, and I want to create a table out of it in pgAdmin4 to push it to Azure. However, I don't know how to automatically detect the types of the columns.
Here is an example:
IDI GKID S01Q01 S02Q01_Gender ...
100093 enq030059569748fc89091fdd91cc337cac44eca90 Yes, I agree Female ...
Then, after transposing the csv header, I get:
IDI
GKID
S01Q01
S02Q01_Gender
...
I'm writing the script to create the table. However, given the number of columns, I would like, in the best case, to automate the detection of each column's type so I can write it in the right place, or in the worst case, to use a generic type such as TEXT.
So far, I've tried
output = ""
file_name = "columns.txt"
string_to_add = " TINYTEXT,"
with open(file_name, 'r') as f:
file_lines = [''.join([x.strip(), string_to_add, '\n']) for x in f.readlines()]
with open(file_name, 'w') as f:
f.writelines(file_lines)
It gives me back:
IDI TINYTEXT,
GKID TINYTEXT,
S01Q01 TINYTEXT,
S02Q01_Gender TINYTEXT,
...
And, then, I can do:
CREATE TABLE my_table (
IDI TINYTEXT,
GKID TINYTEXT,
S01Q01 TINYTEXT,
S02Q01_Gender TINYTEXT,
...
But I'm not sure this is enough to make a table able to receive my csv file.
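(Editorial note, hedged: `TINYTEXT` is a MySQL type; PostgreSQL has no such type, and `TEXT` is the usual generic fallback, after which psql's `\copy` can load the file. A minimal sketch using the question's column names; the file name is illustrative and the remaining ~216 columns follow the same pattern.)

```sql
-- Generic-typed PostgreSQL table (TEXT, not MySQL's TINYTEXT), then a
-- client-side CSV load that skips the header row.
CREATE TABLE my_table (
    IDI           TEXT,
    GKID          TEXT,
    S01Q01        TEXT,
    S02Q01_Gender TEXT
    -- ... remaining columns, same pattern
);
\copy my_table FROM 'data.csv' WITH (FORMAT csv, HEADER true)
```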
Revolucion for Monica
(677 rep)
Nov 14, 2019, 02:52 PM
• Last activity: Nov 15, 2019, 10:25 AM