Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
0
answers
66
views
Can't grant privileges to regular user for pg_cron
I'm using PostgreSQL Flexible server in Azure. My objective is to refresh a materialized view in database "mytest". So i installed pg_cron from Azure server properties, it's installed on postgres database as default. I can schedule jobs with my Flexible Server Admin user (**myadmin**) but can't do t...
I'm using PostgreSQL Flexible Server in Azure. My objective is to refresh a materialized view in database "mytest". So I installed pg_cron from the Azure server properties; by default it is installed in the postgres database.
I can schedule jobs with my Flexible Server admin user (**myadmin**) but can't do the same with my regular user (let's say '**bob**'). I tried to give bob USAGE privilege on the cron schema and tried to grant all on the tables under cron, but neither worked. This is the output when trying to grant on the functions:
```
WARNING: no privileges were granted for "schedule"
WARNING: no privileges were granted for "schedule"
WARNING: no privileges were granted for "job_cache_invalidate"
WARNING: no privileges were granted for "schedule_in_database"
WARNING: no privileges were granted for "unschedule"
WARNING: no privileges were granted for "unschedule"
WARNING: no privileges were granted for "alter_job"
GRANT
Query returned successfully in 144 msec.
```
The cron schema seems to be owned by **azuresu**, which I can't access due to restrictions on Azure.
How can I use **myadmin** to grant **bob** the necessary privileges for pg_cron?
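For comparison, this is the approach the pg_cron README documents for delegating scheduling rights; whether Azure Flexible Server imposes extra restrictions on top of it is not confirmed here, and the job name, schedule, and view name below are made up:
```sql
-- Run as myadmin while connected to the postgres database (where pg_cron lives).
GRANT USAGE ON SCHEMA cron TO bob;

-- bob should then be able to schedule a job that runs in another database:
SELECT cron.schedule_in_database(
    'refresh-mytest-mv',                  -- job name (placeholder)
    '*/30 * * * *',                       -- every 30 minutes (placeholder)
    'REFRESH MATERIALIZED VIEW my_view',  -- command (placeholder view name)
    'mytest'                              -- target database
);
```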
postgresnewbie
(127 rep)
Jan 20, 2025, 02:15 PM
• Last activity: Jan 28, 2025, 04:59 AM
0
votes
2
answers
138
views
Using pg_dumpall and cron
I have a daily cron job to dump my database. Part of the crontab includes the following:
```
01 01 * * * root /etc/cron.d/backupDaily.sh
```
and part of the backup script contains the following:
```sh
cd /data/pgsql/
sudo -u postgres /usr/pgsql-12/bin/pg_dumpall > /data/pgsql/pg.sql
```
My old notes refer to putting the credentials in a file such as /.pgpass. However, I have upgraded my server a few times since the beginning, and I don't appear to have this file anymore.
Can anybody tell me how I get away with this? Does this suggest that my postgres user doesn't have a password?
Here is what's in my pg_hba.conf file:
```
# TYPE  DATABASE     USER  ADDRESS          METHOD
local   replication  all                    peer
host    replication  all   127.0.0.1/32     ident
host    replication  all   ::1/128          ident
local   all          all                    trust
# IPv4 local connections:
host    all          all   127.0.0.1/32     trust
host    all          all   192.168.0.0/24   trust
host    all          all   192.168.1.0/24   trust
host    all          all   192.168.77.0/24  trust
```
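A likely explanation, offered as a sketch rather than a diagnosis: the `local all all trust` line lets any local connection in without a password at all, so pg_dumpall run as the postgres OS user never needs a ~/.pgpass. The file only comes back into play under password-based auth, in which case its format is:
```
# ~/.pgpass, one entry per line (libpq requires chmod 600):
# hostname:port:database:username:password
localhost:5432:*:postgres:secret    # "secret" is a placeholder, not a real value
```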
Manngo
(3145 rep)
May 6, 2024, 06:41 AM
• Last activity: May 6, 2024, 02:34 PM
3
votes
2
answers
9982
views
"connection failed" error for pg_cron extension
I installed pg_cron and scheduled a job, but there is a connection error: the job is not run, and the cron.job_run_details table shows a connection failed error message.
As the doc says:
> Important: Internally, pg_cron uses libpq to open a new connection to
> the local database. It may be necessary to enable trust authentication
> for connections coming from localhost in pg_hba.conf for the user
> running the cron job. Alternatively, you can add the password to a
> .pgpass file, which libpq will use when opening a connection.
I created a .pgpass file with the needed parameters, but I'm not sure how to "tell" libpq to read this .pgpass file for this user, host, and DB.
Any help appreciated.
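For reference, libpq looks for .pgpass in the home directory of the operating-system user that opens the connection; since pg_cron connects from inside the server, that is the OS user running PostgreSQL (typically postgres). A sketch, with placeholder connection values:
```sh
# Write the entry into the server OS user's home (assumed to be "postgres");
# libpq silently ignores the file unless its mode is 0600.
sudo -u postgres sh -c 'echo "localhost:5432:mydb:myuser:mypassword" >> ~/.pgpass'
sudo -u postgres chmod 600 ~/.pgpass
# Alternatively, the PGPASSFILE environment variable can point libpq at
# a file in another location.
```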
Oto Shavadze
(575 rep)
Aug 11, 2021, 09:06 AM
• Last activity: Dec 9, 2022, 12:19 PM
0
votes
2
answers
2963
views
PostgreSQL Schedule Jobs
I am looking for a way to schedule jobs on PostgreSQL, for example to execute some procedures or statements at an interval or on a specific schedule.
I have already tried pgAgent, but that is a service running on a local PC which enables a schema in PostgreSQL. As a result it cannot be used for remote instances or managed instances.
I am looking for a way for jobs to be triggered from the database context and not from another runtime.
Any ideas?
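One option that matches the "from the database context" requirement is pg_cron, which runs as a background worker inside the server itself. A sketch, assuming the extension is available and preloaded (managed services vary in whether they allow this), with a placeholder procedure name:
```sql
-- Requires pg_cron in shared_preload_libraries, then:
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Run a stored procedure every day at 15:00:
SELECT cron.schedule('daily-task', '0 15 * * *', 'CALL my_procedure()');
```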
Stavros Koureas
(170 rep)
Nov 24, 2022, 08:58 PM
• Last activity: Nov 28, 2022, 06:38 AM
0
votes
1
answers
1119
views
Supabase Postgres cron job not working (special case)
I'm having a problem inserting a row using rpc through the nodejs library with the user **service_role**.
I gave permission to all roles, without exception. And for **service_role** I even granted all privileges on the whole database, just for the sake of testing.
Launching the cron from the dashboard editor, after granting the permission to **postgres** as above and as per this thread, worked: the cron ran just right. But when I go through **service_role**, from a nodejs rpc insert, which fires a trigger procedure that runs fine (I added a command to check, which inserts into a log table I created), the job is added to the jobs table and is active, yet no command runs. (I gave the mentioned user the privileges for cron, and then even access to everything; nothing works.)
The schedule expression is temporary; I copied it from the one that worked on the dashboard to be sure.
How can this be explained? Any ideas or pointers? Is anything special about Supabase? Maybe the **service_role** user has some restriction? I'm just out of clues.
(An official ref: https://supabase.io/blog/2021/03/05/postgres-as-a-cron-server )
Here is my trigger function:

Mohamed Allal
(111 rep)
Aug 3, 2021, 10:08 PM
• Last activity: Jul 6, 2022, 08:01 AM
1
votes
0
answers
655
views
Refresh materialized view on Postgresql 11 on RDS
We are currently on Postgres 11.13 on AWS RDS. I am trying to create a materialized view that takes about 6-7 minutes to refresh. What is the best way to keep this MV mostly up to date? I was thinking of pg_cron, but I believe that is only available on PG 12.5+ on RDS. A trigger on the underlying tables could be an option, but these tables either do not get updated at all, or have very many inserts occurring in a short period of time, and I don't want to trigger excessive refreshes.
Any suggestions for this scenario?
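Since pg_cron is unavailable on this version, one common workaround is scheduling the refresh from outside the database, e.g. cron on an EC2 host. A sketch with placeholder endpoint and names; CONCURRENTLY requires a unique index on the view but avoids blocking readers:
```sh
# crontab entry: refresh hourly; credentials assumed to come from ~/.pgpass
0 * * * * psql "host=mydb.xxxx.rds.amazonaws.com dbname=mydb user=refresher" \
  -c 'REFRESH MATERIALIZED VIEW CONCURRENTLY my_mv'
```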
Asif
(111 rep)
Apr 6, 2022, 08:55 PM
3
votes
2
answers
319
views
MySQL backup password without root home
What's the best way to provide the MySQL root password (~/.my.cnf) to a cron job when root does not have a home folder?
The server I'm using is CentOS.
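One approach, sketched with a made-up path: the option file does not have to live in a home directory at all, since the MySQL client tools accept an explicit path (the flag must be the first option on the command line).
```sh
# /etc/mysql-backup.cnf (chmod 600, owned by the backup user):
#   [client]
#   user=root
#   password="secret"        # placeholder
#
# crontab entry using the explicit file:
0 1 * * * mysqldump --defaults-extra-file=/etc/mysql-backup.cnf --all-databases > /backup/all.sql
```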
Rait
(47 rep)
Apr 5, 2022, 01:28 PM
• Last activity: Apr 6, 2022, 04:59 PM
1
votes
0
answers
52
views
Replicating/Inserting 300M MYSQL records from 6 Remote Servers on Local using Unique Identifier
I have 6 remote servers that dial phone numbers daily. I have a total record count of over 300 million combined, and now I want to process them. The table data looks like this, using this query:
```sql
SELECT lead_id,
       entry_date,
       modify_date,
       `status`,
       list_id,
       phone_number,
       called_count
FROM vicidial_list
WHERE list_id = '202202183'
LIMIT 10;
```
```
lead_id entry_date          modify_date         status list_id   phone_number called_count
------- ------------------- ------------------- ------ --------- ------------ ------------
8455687 2022-02-18 12:08:51 2022-02-18 14:08:17 PU     202202183 XXXXXX1775    3
8455688 2022-02-18 12:08:51 2022-02-18 13:57:20 PU     202202183 XXXXXX0485    2
8455689 2022-02-18 12:08:51 2022-02-18 13:57:20 NA     202202183 XXXXXX2277    2
8455690 2022-02-18 12:08:51 2022-02-18 14:53:55 PU     202202183 XXXXXX8044   13
8455691 2022-02-18 12:08:51 2022-02-18 13:57:20 NA     202202183 XXXXXX1802    2
8455692 2022-02-18 12:08:51 2022-02-18 13:15:55 NA     202202183 XXXXXX9687    1
8455693 2022-02-18 12:08:51 2022-02-18 13:15:55 NA     202202183 XXXXXX0603    1
8455694 2022-02-18 12:08:51 2022-02-18 13:15:55 NA     202202183 XXXXXX1667    1
8455695 2022-02-18 12:08:51 2022-02-18 14:53:02 AA     202202183 XXXXXX8783   13
8455696 2022-02-18 12:08:51 2022-02-18 13:15:55 NA     202202183 XXXXXX2902    1
```
I need to perform queries on these, but the problem is:
1. I don't want to manipulate data on the 6 remote master DBs, because this slows down the performance of the dialing solution during production hours.
2. I want to be able to perform queries (add columns to the default table and update it) on one combined server and not individually on each server.
I want to replicate/insert them into a local DB where I add two more columns on top of the existing ones:
1. One new column is "suffix of DB IP + lead_id", e.g. for server W.X.Y.Z, "Z8455687" as the first entry.
2. Another column, count_exported, with a default of 0.
As I have no experience of DB administration, I am looking for ways to solve this. I want to replicate these onto one local system daily during non-production hours. Please help me how to do this: do I use the MySQL scheduler or a cron job, and how do I go about it? The following is how I want my new table to look, if my remote DB IP suffixes are A, B, C, D, E and F:
```
srv_id  server_  lead_id entry_date          modify_date         status list_id   phone_number called_count count_exported
------- -------- ------- ------------------- ------------------- ------ --------- ------------ ------------ --------------
X.Y.Z.A A8455687 8455687 2022-02-18 12:08:51 2022-02-18 14:08:17 PU     202202183 XXXXXX1775    3           0
X.Y.Z.A A8455688 8455688 2022-02-18 12:08:51 2022-02-18 13:57:20 PU     202202183 XXXXXX0485    2           0
X.Y.Z.B B8455689 8455689 2022-02-18 12:08:51 2022-02-18 13:57:20 NA     202202183 XXXXXX2277    2           0
X.Y.Z.B B8455690 8455690 2022-02-18 12:08:51 2022-02-18 14:53:55 PU     202202183 XXXXXX8044   13           0
X.Y.Z.C C8455691 8455691 2022-02-18 12:08:51 2022-02-18 13:57:20 NA     202202183 XXXXXX1802    2           0
X.Y.Z.D D8455692 8455692 2022-02-18 12:08:51 2022-02-18 13:15:55 NA     202202183 XXXXXX9687    1           0
X.Y.Z.E E8455693 8455693 2022-02-18 12:08:51 2022-02-18 13:15:55 NA     202202183 XXXXXX0603    1           0
X.Y.Z.F F8455694 8455694 2022-02-18 12:08:51 2022-02-18 13:15:55 NA     202202183 XXXXXX1667    1           0
X.Y.Z.F F8455695 8455695 2022-02-18 12:08:51 2022-02-18 14:53:02 AA     202202183 XXXXXX8783   13           0
X.Y.Z.D D8455696 8455696 2022-02-18 12:08:51 2022-02-18 13:15:55 NA     202202183 XXXXXX8783    1           0
```
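A sketch of what the nightly load could look like for one source server (suffix 'A'), assuming the rows are first copied into a local staging table during off-hours (via mysqldump, SELECT ... INTO OUTFILE, or similar); all table and column names here are placeholders:
```sql
INSERT INTO combined_list
       (srv_id, server_lead_id, lead_id, entry_date, modify_date,
        `status`, list_id, phone_number, called_count, count_exported)
SELECT 'X.Y.Z.A', CONCAT('A', lead_id), lead_id, entry_date, modify_date,
       `status`, list_id, phone_number, called_count, 0
FROM   staging_vicidial_list
ON DUPLICATE KEY UPDATE
       modify_date  = VALUES(modify_date),
       `status`     = VALUES(`status`),
       called_count = VALUES(called_count);
```
With server_lead_id as the primary key, re-running the job updates changed rows instead of duplicating them.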
Aun Zaidi
(111 rep)
Feb 19, 2022, 12:39 AM
• Last activity: Feb 20, 2022, 11:21 PM
-1
votes
1
answers
404
views
How to create a cronjob backup for mysql slave?
There are a few other threads about this question using mysqldump; however, I tend not to use mysqldump but instead back up the whole set of files in /mysql.
I would like to create a cron job on a MySQL slave which only involves START SLAVE and STOP SLAVE.
I only know how to use crontab with starting and stopping mysql (as below). I have no idea how to put start and stop slave in crontab. Any help?
```
19 * * * service mysql stop;rsync -avzr -e 'ssh -p 22' /home/mysql /home/DAILY-BACKUP-SQL;service mysql start
```
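A sketch of the replication-only variant: pause the slave around the copy instead of stopping mysqld. Note that copying live InnoDB files is only truly safe if nothing else writes to the slave while rsync runs; STOP SLAVE stops replication but does not freeze the datadir.
```sh
# crontab entry (MySQL credentials assumed to come from ~/.my.cnf):
0 19 * * * mysql -e 'STOP SLAVE' && rsync -avzr -e 'ssh -p 22' /home/mysql /home/DAILY-BACKUP-SQL && mysql -e 'START SLAVE'
```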
Daniel
Oct 17, 2021, 02:57 PM
• Last activity: Oct 26, 2021, 04:18 PM
0
votes
0
answers
962
views
Schedule Postgres pg_cron jobs to trigger by something other than Cron expression
I have some pg_cron jobs that run at night. They basically run some plpgsql procedures and are triggered by cron expressions.
The problem is that some of my jobs are dependent on the jobs that run before them. I have been dealing with that by inserting a record into a utility table I have created, but it's annoying that even after the 1st job runs and inserts this record, the other job keeps re-triggering because of its cron expression. I do have a conditional statement in the 2nd and subsequent jobs so they don't run if the 1st job didn't insert its record into the utility table, but I can't find any way to outright prevent the other jobs from running.
Also, I don't know when the 1st job will end, so if I set a specific cron time, the 1st job might not be done and the 2nd will never run (which I don't want).
Ideally, there would be some way for me to link the jobs together (if the 1st job runs and completes, run the 2nd), but I seem to only be able to set expressions like (* * * * *), where the job basically has to keep checking all day.
Is this the best approach? Is there an alternative to setting cron expressions for the pg_cron jobs, or a way to link jobs?
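One workaround sketch, since pg_cron itself has no job-dependency feature: schedule the chain as a single job. The command string is passed through as-is, so several semicolon-separated statements run sequentially in one implicit transaction, and a later step only takes effect if the earlier ones succeed. Procedure names here are placeholders:
```sql
SELECT cron.schedule('nightly-chain', '0 2 * * *',
                     'CALL job_one(); CALL job_two(); CALL job_three();');
```
The trade-off is that the whole chain shares one transaction, so steps that must commit independently would still need another mechanism.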
PainIsAMaster
(131 rep)
Oct 9, 2021, 06:03 AM
0
votes
1
answers
1744
views
SQL Server create cron job without SQL Server Agent
I'm creating a cron job on SQL Server. To my knowledge, the only way to create a job is through SQL Server Agent. However, the services (listed in services.msc) of the VM assigned to me do not contain SQL Server Agent. Also, I don't have sysadmin access to the database. So, can anyone please help me understand how I can create a cron job without SQL Server Agent?
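A common fallback when Agent is missing (Express edition, for example, doesn't ship it) is the OS scheduler. A sketch using Windows Task Scheduler and sqlcmd; the instance, database, and procedure names are placeholders, and the Windows account running the task still needs the relevant database permissions:
```
schtasks /create /tn "NightlyDbJob" /sc daily /st 01:00 ^
  /tr "sqlcmd -S .\SQLEXPRESS -d mydb -Q \"EXEC dbo.my_nightly_proc\""
```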
Yash
(13 rep)
Oct 8, 2021, 01:02 PM
• Last activity: Oct 8, 2021, 02:06 PM
1
votes
1
answers
966
views
Set crontab backup job for Mariadb Databases from 1 remote server to another
I'm connecting remotely over SSH to an Ubuntu server which hosts a number of MariaDB databases.
Now I want to set up a crontab job to automatically create backups of all of them on another server, where backups from different servers are stored.
How can I do this?
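A sketch of one way to do it, assuming key-based SSH auth to the backup host and credentials in ~/.my.cnf; the host and path names are made up, and note that % must be escaped as \% inside a crontab:
```sh
# Dump all databases nightly, compress, and stream to the backup server:
0 2 * * * mysqldump --all-databases --single-transaction | gzip | ssh backup@backup-host "cat > /backups/ubuntu-db-$(date +\%F).sql.gz"
```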
dan morcov
(25 rep)
Jun 10, 2021, 12:25 PM
• Last activity: Jun 15, 2021, 03:49 PM
2
votes
1
answers
1946
views
Query to delete all rows which are not referenced in any other tables
Let's say I have a table called **images**. The **images** table is referenced in many other tables (**movies**, **books**, **tv_series**) through a foreign key.
My question is whether it is possible to write a query which will delete all orphaned rows without specifying the exact tables (e.g. JOIN movies ON ...). This is important because if a new table referencing **images** is introduced, it would make the query outdated; if the query were then executed without changes, it could delete images which are used in the new table.
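One approach, sketched for PostgreSQL (the question doesn't name the RDBMS) and assuming images has a single-column primary key id: generate the DELETE from the catalog, so every foreign key that points at images is discovered automatically.
```sql
-- Emits a DELETE with one NOT EXISTS clause per referencing table/column:
SELECT 'DELETE FROM images i WHERE '
       || string_agg(
            format('NOT EXISTS (SELECT 1 FROM %s t WHERE t.%I = i.id)',
                   c.conrelid::regclass, a.attname),
            ' AND ')
FROM pg_constraint c
JOIN pg_attribute a ON a.attrelid = c.conrelid AND a.attnum = c.conkey[1]
WHERE c.contype = 'f'
  AND c.confrelid = 'images'::regclass;
```
The generated statement can then be run (or EXECUTEd from a DO block), and it stays current as referencing tables come and go.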
Daniel G
(23 rep)
Feb 3, 2019, 04:16 PM
• Last activity: Feb 20, 2021, 11:06 AM
0
votes
0
answers
1298
views
pg_cron connection refused with postgres login
I am trying to run a function periodically with pg_cron, but every time it is due to execute it throws "connection refused". The cron job is created using the postgres user.
pg_hba.conf:
```
local   all          postgres                md5
# TYPE  DATABASE     USER      ADDRESS       METHOD
# "local" is for Unix domain socket connections only
local   all          all                     trust
# IPv4 local connections:
host    all          all       0.0.0.0/0     md5
host    all          all       127.0.0.1/32  md5
host    postgres     postgres  localhost     trust
# IPv6 local connections:
host    all          all       ::1/128       trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication  all                     peer
host    replication  all       127.0.0.1/32  md5
host    replication  all       ::1/128       md5
```
I've tried adding a .pgpass file, but I don't know where to put it.
This is my machine version: Debian GNU/Linux 10 (buster).
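A sketch of where the file goes on Debian, assuming the cluster runs as the postgres OS user (whose home on Debian is /var/lib/postgresql): libpq reads .pgpass from the home directory of the process that opens the connection, and for pg_cron that process is the server itself. The password below is a placeholder.
```sh
sudo -u postgres sh -c 'echo "localhost:5432:*:postgres:YOUR_PASSWORD" >> /var/lib/postgresql/.pgpass'
sudo -u postgres chmod 600 /var/lib/postgresql/.pgpass   # libpq ignores the file otherwise
```
Separately, "connection refused" (as opposed to an authentication failure) can also mean the server isn't listening on the host/port pg_cron dials, which is worth checking in postgresql.conf (listen_addresses, port).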
rafaelBackx
Feb 18, 2021, 04:49 PM
• Last activity: Feb 18, 2021, 08:24 PM
1
votes
4
answers
446
views
Integration: Keep two systems in sync
I have a GIS system with 40 tables, ranging from 1,000 to 60,000 rows per table. The tables are the system of record for **assets** in a municipality.
The GIS assets in the tables get integrated to a Workorder Management System (WMS) on a weekly basis. The integration is based on web services that serve up the GIS tables to the WMS.
**Constraint #1:**
The integration to the Workorder Management System is **multi-purpose**.
1. There is a single asset table in the WMS that gets updated, via **cron tasks**, with any edits that have been made to the GIS assets (new assets, changed assets, and decommissioned assets). Only assets that have been edited are updated in the WMS.
2. The integration is also used to dynamically serve up the assets to a **web map** in the WMS (all of the GIS assets are used in the map--not just the assets that have been edited). The map in the WMS connects directly to the GIS web services--it does not use the records in the asset table or the cron tasks.
**Constraint #2:**
The WMS cron tasks are **notoriously slow**. Given my organization's infrastructure, my vendor says that the WMS cron tasks will only be able to sync **150 records per minute**.
- Testing is ongoing, but we have been told to only sync the records that actually need to be synced (edits) due to the significant performance concerns. In other words, we can't just integrate or copy *all* records, *all the time*.
- *To give you an idea, this is what the cron task process looks like: REST GIS web service >> JSON object >> Parse the JSON into individual records >> Generate XML for each record >> Process the XML records with Java classes >> Insert the records into the database.*
**Constraint #3:**
GIS data is **notoriously messy**.
In constraint #2, I mentioned that the records get processed with Java classes. The Java classes check for errors (parent/child, field rules, etc.) and flag any records that fail.
- These records do not get integrated into the WMS.
- It is up to the GIS teams to correct the errors in GIS tables, then we'll try again in the next integration instance (next week) to sync the GIS records to the WMS.
---------------
**Question:**
Given the constraints above, I think I need to figure out a way to integrate all the GIS assets to the WMS (constraint 1.2), but also flag the records that need to be synced due to edits (constraint 1.1).
- For edited assets that failed to sync--I need to **retry** them in future syncs until they are successful.
- And I need to avoid syncing records unnecessarily--due to the performance concerns.
**How can I do this?**
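A sketch of one common pattern that fits these constraints: track per-record sync state alongside the GIS tables, so the cron tasks pull only rows that need work while the web map keeps reading the full tables. Table and column names are illustrative, and the SQL is generic since the question doesn't name the RDBMS.
```sql
-- State column: rows start 'pending', move to 'synced' or 'failed'.
ALTER TABLE gis_assets ADD COLUMN sync_status text NOT NULL DEFAULT 'pending';

-- Editors (or an update trigger) reset the flag whenever a row changes:
-- UPDATE gis_assets SET sync_status = 'pending' WHERE id = ...;

-- The weekly cron task pulls only edited or previously failed rows,
-- which also gives failed records automatic retries on the next run:
SELECT * FROM gis_assets WHERE sync_status IN ('pending', 'failed');
```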
User1974
(1527 rep)
Sep 22, 2019, 11:08 PM
• Last activity: Nov 9, 2020, 04:36 AM
5
votes
1
answers
2121
views
Passwordless mysqldump via shell script in /etc/cron.daily
I'm aware that there are dozens of questions similar to this, but it seems none has a definitive answer to my problem, so that's why I'm posting this... I hope in the right place.
**The problem:**
I have a script placed in /etc/cron.daily that performs a daily database backup among other things. It works fine as long as there is a password hardcoded into the script for the mysqldump command.
```sh
#!/bin/sh
mysqldump -u [uname] -p[pass] db_name > db_backup.sql
```
However, not wanting to have the password in the script, I've set up a ~/.my.cnf file (chmod 600) with my user's password stored there, so the mysqldump command in the script would be passwordless.
~/.my.cnf:
```
[mysqldump]
password="pass"
```
The script then becomes:
```sh
#!/bin/sh
mysqldump -u [uname] db_name > db_backup.sql
```
When I run this new script manually from the command line as root, it works like a charm:
```
sudo sh /etc/cron.daily/daily-backup-script
```
But when cron runs it, it's unable to dump the database, giving the following error:
```
mysqldump: Got error: 1045: Access denied for user 'user'@'localhost' (using password: NO) when trying to connect.
```
So, I assume cron doesn't have the appropriate privilege to perform the passwordless mysqldump command in the script with the password placed in ~/.my.cnf, even though the script and the passwordless mysqldump command IN IT work flawlessly from the command line with sudo.
**Effort so far:**
1. I've tried sudo in front of the mysqldump command in the script.
2. I've tried sudo -u user in front of the mysqldump command in the script.
3. I've chown-ed the ~/.my.cnf file as root:root.
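A sketch of the likely cause rather than a confirmed diagnosis: cron runs the script with root's environment, where ~ is /root (or HOME may not be set at all), so ~/.my.cnf resolves to a different file than in an interactive sudo shell. Pointing the client at the file explicitly removes the ambiguity; the path below is a placeholder.
```sh
#!/bin/sh
# --defaults-extra-file must be the first option on the command line.
mysqldump --defaults-extra-file=/home/user/.my.cnf -u [uname] db_name > db_backup.sql
```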
Vila
(223 rep)
Aug 20, 2017, 11:27 AM
• Last activity: Aug 4, 2020, 10:24 AM
2
votes
1
answers
146
views
Programming query execution Postgres
I need to execute update queries routinely over a Postgres table. Is it possible to program an automatic execution of the query, let's say, every day at 15:00? The Postgres version is 9.5, and I'm working on Windows 7.
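Since PostgreSQL 9.5 has no built-in scheduler, one sketch is to drive psql from the Windows Task Scheduler; the paths, database, and script names below are placeholders.
```
schtasks /create /tn "pg_daily_update" /sc daily /st 15:00 ^
  /tr "\"C:\Program Files\PostgreSQL\9.5\bin\psql.exe\" -U postgres -d mydb -f C:\scripts\daily_update.sql"
```
A .pgpass file (%APPDATA%\postgresql\pgpass.conf on Windows) keeps the password out of the task definition.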
guillermo_dangelo
(175 rep)
Oct 31, 2019, 02:06 PM
• Last activity: Oct 31, 2019, 03:38 PM
-3
votes
2
answers
3649
views
Need a shell script that can run a oracle query and produce output in .csv format
Can you please help? I need a shell script that has an Oracle query inside it and produces output in .csv format in a specific location; the script should be scheduled as a cron job that runs on a daily basis.
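A sketch using SQL*Plus; the connection details, query, and paths are placeholders, and SET MARKUP CSV ON needs SQL*Plus 12.2 or newer (older versions can approximate it with SET COLSEP ',').
```sh
#!/bin/sh
# export_csv.sh - spool a query result to CSV
sqlplus -s user/password@ORCL <<'EOF' > /dev/null
SET MARKUP CSV ON
SET FEEDBACK OFF
SPOOL /data/exports/report.csv
SELECT * FROM my_table;
SPOOL OFF
EOF
```
Scheduled daily at 06:00 with a crontab entry such as: 0 6 * * * /usr/local/bin/export_csv.sh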
Kilaru Krishna Chaitanya
(13 rep)
Sep 3, 2019, 06:45 AM
• Last activity: Sep 5, 2019, 03:35 AM
1
votes
3
answers
255
views
How to avoid server crash during backup of database using a crontab
I'm on shared hosting, so I want to avoid using too many system resources.
I have a crontab that schedules a MySQL database backup. The database is about 2 GB before compression. When that crontab runs, the system seems to crash: the HTTP server stops for a minute or two.
Is there anything that I can add or edit in my command that would run the DB backup job gradually or in smaller steps to avoid server crashes?
Here is the command that causes the problem:
```sh
mysqldump --add-drop-table --user=xxxxxxxxx --password='xxxxxxxxxxxx' db_name | gzip > /path/to/my/backup_directory/db_backup.dmp.gz
```
Any tips please? Or should I ask my host to change any settings on my account?
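Two low-impact tweaks, offered as a sketch: lower the job's CPU and disk priority with nice/ionice, and add --single-transaction (for InnoDB tables) so the dump doesn't hold table locks while it runs.
```sh
nice -n 19 ionice -c3 mysqldump --single-transaction --add-drop-table \
  --user=xxxxxxxxx --password='xxxxxxxxxxxx' db_name | \
  gzip > /path/to/my/backup_directory/db_backup.dmp.gz
```
On shared hosting, ionice may not be available or permitted, in which case nice alone still helps.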
Muhammad
(11 rep)
Oct 26, 2016, 10:32 AM
• Last activity: Aug 2, 2019, 08:02 PM
0
votes
1
answers
493
views
cron command running postgres at high cpu
I can see that a cron command is running under a postgres user and killing the CPU, but I have no idea how it's being executed or how to stop it!

Storm
(121 rep)
Jul 17, 2019, 12:23 PM
• Last activity: Jul 17, 2019, 01:22 PM
Showing page 1 of 20 total questions