
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
6740 views
Running pg_cron jobs on a different database and non-public schema says "No procedure matches the given name and argument types"
I'm running pg_cron jobs, which of course run from the default **postgres** db. I have some functions/procedures that I created in *another database* called **test**, and have assigned these pg_cron jobs to the correct database via (example):
select cron.schedule ('some_random_function',
                      '* * * * *',
                      'call test.my_random_funct()'
           );

UPDATE cron.job SET database = 'test';
I know this works because when I had the function/procedure in the public schema of the test database, everything worked. However, there is another schema in the **test** database I want to use, called **poop**, but when I schedule the pg_cron job on this database and schema, all I get are error messages saying the function/procedure doesn't exist (even though it does exist). Do I have to grant some permissions on the schema? I'm running under a sysadmin account, so it should have all needed privileges... Sample error message that shows up in cron.job_run_details:
ERROR: procedure my_rand_funct() does not exist
HINT: No procedure matches the given name and argument types. You might need to add explicit type casts.
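A hedged sketch of two things worth checking, assuming the scheduled command currently calls the procedure without a schema prefix: schema-qualify the call in the job command, and make sure the role the cron connection uses can see the schema. The job, schema, and role names mirror the question's example.
```
-- Schema-qualify the call so it no longer depends on search_path:
SELECT cron.schedule('some_random_function',
                     '* * * * *',
                     $$CALL poop.my_random_funct()$$);

-- If the connecting role doesn't own the schema, grant it access:
GRANT USAGE ON SCHEMA poop TO postgres;
GRANT EXECUTE ON ALL PROCEDURES IN SCHEMA poop TO postgres;
```
The "does not exist" wording (rather than "permission denied") usually points to name resolution, so the schema-qualified call is the first thing to try.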
PainIsAMaster (131 rep)
Oct 5, 2021, 12:03 AM • Last activity: Aug 1, 2025, 12:04 AM
0 votes
0 answers
25 views
How to send a mail notification if a cron.job run fails?
How can I configure AWS services to notify me of failed pg_cron job runs in the cron.job_run_details table using only the AWS Management Console or PostgreSQL settings, without writing any code?
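This doesn't address the no-code AWS side, but for reference, a minimal sketch of the query any notification mechanism would ultimately need, using columns documented for cron.job_run_details:
```
SELECT jobid, status, return_message, start_time, end_time
FROM cron.job_run_details
WHERE status = 'failed'
ORDER BY start_time DESC;
```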
Balaji Ogiboina (1 rep)
Jul 11, 2025, 05:46 AM • Last activity: Jul 11, 2025, 07:15 AM
0 votes
0 answers
27 views
pg_cron Installed by Superuser Not Showing Records to Developers
We have pg_cron installed on our PostgreSQL 13 database. The cron schema and job table are visible to all users. When the admin user runs SELECT * FROM cron.job, it returns rows as expected. However, when any developer runs the same query, it returns 0 rows, despite the query executing successfully. Why might developers be seeing an empty result set, and how can we resolve this issue to allow developers to see the job records?
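One likely explanation is pg_cron's row-level security on cron.job, whose policy only shows rows where username matches current_user. A minimal sketch to confirm the policy, plus a possible workaround under the assumption that a developers role exists and that exposing the admin's jobs to it is acceptable:
```
-- Confirm the row-level security policy on cron.job:
SELECT schemaname, tablename, policyname, qual
FROM pg_policies
WHERE schemaname = 'cron';

-- Hypothetical workaround: a SECURITY DEFINER function owned by the admin
-- runs as the admin, so the policy returns the admin's job rows.
CREATE OR REPLACE FUNCTION list_cron_jobs()
RETURNS SETOF cron.job
LANGUAGE sql
SECURITY DEFINER
AS $$ SELECT * FROM cron.job $$;

GRANT EXECUTE ON FUNCTION list_cron_jobs() TO developers;  -- role name is an assumption
```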
Muhammad sami (1 rep)
Aug 6, 2024, 11:08 AM
0 votes
1 answer
116 views
Are pg_cron and pgaudit database-specific extensions, or is one installation per server enough?
I'm quite versed in PostgreSQL, but new to any extension other than PostGIS. For example, one of the first things I do when setting up an instance is to install PostGIS in template1, so it's available in every db I create (most of them handle geospatial data). I'd like something similar for pg_cron and pgaudit, but I'm not sure it's required. Are these extensions "one per cluster", or do they need installing in each separate database? I believe pgaudit is per server, as it records everything, but confirmation would be helpful. pg_cron I'm not so sure about. Thanks all!
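For pg_cron, a hedged sketch of the usual arrangement: the extension is created once, in the single database named by cron.database_name (typically postgres), and jobs targeting other databases are scheduled from there with cron.schedule_in_database, available in recent pg_cron releases. The job and database names below are illustrative.
```
-- Run in the database where pg_cron is installed (cron.database_name):
SELECT cron.schedule_in_database(
         'nightly_vacuum_geo',   -- illustrative job name
         '0 3 * * *',
         'VACUUM ANALYZE',
         'geospatial_db');       -- illustrative target database
```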
Chris (185 rep)
Jun 17, 2024, 05:45 PM • Last activity: Jun 17, 2024, 06:13 PM
3 votes
2 answers
678 views
pg_cron: "policy ... for table ... already exists" error while restore postgres database from dump
I'm trying to restore a PostgreSQL database from a dump on a fresh new server. The source database uses the pg_cron extension, and the target server has the pg_cron extension installed. This is how I create dumps on the "source" server:

pg_dumpall -h hostname -p 5435 -U myuser --roles-only | bzip2 -c -z > dump-role.sql.bz2
pg_dump -C -h hostname -p 5435 -U myuser mydatabase | bzip2 -c -z > dump-data.sql.bz2

This is how I restore the dumps on the empty "target" server:

grep -Eiv '(CREATE ROLE postgres|ALTER ROLE postgres .*PASSWORD)' dump-role.sql | psql -Upostgres > /dev/null
psql -Upostgres -f dump-data.sql > /dev/null

*grep on the first command is used to leave the "postgres" superuser untouched on the target server.* When restoring from the dump, I get 2 errors:

psql:/path/to/dump-data.sql:18831791: ERROR: policy "cron_job_policy" for table "job" already exists
psql:/path/to/dump-data.sql:18831798: ERROR: policy "cron_job_run_details_policy" for table "job_run_details" already exists

They refer to the following lines of the dump:

-- Name: job cron_job_policy; Type: POLICY; Schema: cron; Owner: some_user
CREATE POLICY cron_job_policy ON cron.job USING ((username = CURRENT_USER));
-- Name: job_run_details cron_job_run_details_policy; Type: POLICY; Schema: cron; Owner: some_user
CREATE POLICY cron_job_run_details_policy ON cron.job_run_details USING ((username = CURRENT_USER));

I guess this is some pg_cron "self-made" structure; I never created those POLICY statements by hand on the "source" database. Is there a way to fix these errors? Or should I just ignore them?
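Those policies are part of what CREATE EXTENSION pg_cron sets up, so on a target where the extension is already installed, the dump's CREATE POLICY statements collide with existing objects. A small sketch to confirm they are already present before the restore:
```
SELECT schemaname, tablename, policyname
FROM pg_policies
WHERE schemaname = 'cron';
```
If they show up there, the two errors are duplicate-object collisions rather than missing structures, which suggests they can be ignored for this restore.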
AntonioK (133 rep)
Sep 13, 2023, 10:13 AM • Last activity: Feb 19, 2024, 09:06 PM
-1 votes
1 answer
461 views
Query timeout inside a cron job
I'm running a cron job using pg_cron. The job runs some cleanup queries that occasionally take longer to complete. When that happens, the job fails on the statement timeout of 2 min, which is the default:

ERROR: canceling statement due to statement timeout

I'd like to increase the statement_timeout for this job.
set statement_timeout='1200s'
select do_cleanup_tasks()
What would be the scope of this statement_timeout change? Will it affect only the execution of this transaction, or does it have a wider effect?
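A hedged note on scope: a plain SET lasts for the rest of the session, and since pg_cron runs each job in its own connection or worker, that is typically just this job's run. A sketch that packages the same two statements into the scheduled command so the setting never outlives the run (the job name is illustrative; do_cleanup_tasks() is from the question):
```
SELECT cron.schedule(
         'cleanup_job',     -- illustrative name
         '0 2 * * *',
         $$ SET statement_timeout = '1200s'; SELECT do_cleanup_tasks(); $$);
```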
arik alon (1 rep)
Dec 27, 2023, 11:42 AM • Last activity: Dec 27, 2023, 06:35 PM
1 vote
1 answer
1325 views
pg_cron timezone issue, unable to run at specified time
The jobs are unable to run in the proper timezone. I want to run two things: 1. bash scripts, 2. a stored procedure. In both cases I found they run at the wrong time. Instead of 1 AM and 3 AM, pg_cron runs them at a different time. Why could that be? The PostgreSQL timezone is +6, and the pg_cron log also shows that it runs at the +6 timezone, but it actually runs at 9 PM (21:00). How does pg_cron actually get that time?

SELECT cron.schedule('00 15 * * * ','/main/backup/archive.sh');
SELECT cron.schedule('00 15 * * *', $$ call archiver( 10, 'tbl_main','tbl_archiver')$$);

**Note:** I did find a syntax error for running the script, but that is not the main issue for me.
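A hedged explanation that matches the symptom: pg_cron evaluates cron expressions in GMT rather than in the server's timezone setting, so 15:00 in the schedule fires at 21:00 at UTC+6. A sketch of the converted schedules, reusing the question's commands (and noting that pg_cron only executes SQL, so the shell-script job needs a different mechanism such as OS cron):
```
-- 1 AM at UTC+6 is 19:00 GMT of the previous day; 3 AM is 21:00 GMT:
SELECT cron.schedule('0 19 * * *', '/main/backup/archive.sh');  -- pg_cron cannot run shell scripts
SELECT cron.schedule('0 21 * * *', $$ call archiver(10, 'tbl_main', 'tbl_archiver') $$);
```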
Sheikh Wasiu Al Hasib (283 rep)
Sep 18, 2022, 03:57 AM • Last activity: Dec 5, 2023, 01:01 PM
0 votes
0 answers
1652 views
pg_cron jobs are not being executed
We have installed the **pg_cron** extension on a PostgreSQL 13 database. postgresql.conf file:
shared_preload_libraries = 'pg_cron'
cron.database_name = 'pg_cron_db_name'
Also we enabled trust authentication for connections coming from localhost in pg_hba.conf:
#IPv4 local connections:
host   db_name    postgres,pg_cron_user    127.0.0.1/32         trust
host   all        all                      0.0.0.0/0            md5
We have run this query as the postgres user to schedule a simple job for a database db_name (other than the one **pg_cron** was installed in):
SELECT cron.schedule_in_database('job1', '5,10,20,30 17 * * *', $$SELECT 1$$, 'db_name');
Everything works fine on our staging database. But on our production DB the job is never executed. The cron.job_run_details table is empty and there are no entries in the logfile (not even error ones). Versions of the production and staging DBs are the same, and we performed identical steps to set up **pg_cron**. The only difference we can think of is that the staging DB is running on an on-premises server while the production DB is running on an Azure virtual machine. We tried to switch to a Unix domain socket to allow connections from localhost by running:
UPDATE cron.job SET nodename = '' WHERE jobid = ;
This helped, but only temporarily. The job was successfully executed three times and after that stopped running again. Any ideas on how to solve this problem are appreciated.
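A small diagnostic sketch for this kind of silent failure: inspect how the scheduler will connect for each job, since an empty nodename makes pg_cron connect over the Unix socket while 'localhost' goes through the host rules in pg_hba.conf.
```
SELECT jobid, jobname, nodename, nodeport, database, username, active
FROM cron.job;
```
Comparing this output between staging and production (and checking cron.job_run_details after the next scheduled run) should narrow down whether the job is never launched or launched but failing to connect.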
Lesya Klachko (1 rep)
Sep 11, 2023, 03:13 PM
3 votes
2 answers
5625 views
Use pg_cron to run VACUUM on multiple tables
Reading the pg_cron documentation, I saw that the examples only execute a single command when scheduling a task. In a StackOverflow post I saw that a user tried to run multiple VACUUM commands when scheduling a task, but an error occurred. Is there a way to run VACUUM on multiple tables in sequence using pg_cron? There are about 112 selected tables that must be vacuumed, out of a total of 155, so scheduling a task for each one is not very practical. Or, for example, is there a way to delete old records from a table and, immediately at the end of that process, run VACUUM on selected tables?
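A minimal sketch of one workaround, assuming one job per table is acceptable: generate a schedule call per table. The usual reason the multi-statement attempt fails is that a multi-statement job command runs as a single transaction, and VACUUM cannot run inside a transaction block. The table list and schedule are illustrative; the same SELECT could pull names from pg_tables instead.
```
SELECT cron.schedule(
         format('vacuum_%s', tbl),   -- one job per table
         '0 4 * * *',
         format('VACUUM %I', tbl))
FROM unnest(ARRAY['table_a', 'table_b', 'table_c']) AS tbl;  -- illustrative names
```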
Tom (438 rep)
Nov 24, 2021, 02:07 AM • Last activity: Jul 18, 2023, 09:27 PM
0 votes
0 answers
565 views
pg_cron is dramatically faster than running the same query manually
I have a simple table with 75 million rows (and roughly 47 million unique ref_ids)
CREATE TABLE tbl (
	id serial NOT NULL PRIMARY KEY,
	ref_id text NOT NULL,
	ts timestamp NOT NULL
);
And I am trying to run the following query on the table (essentially, delete rows where there is an existing row with the same ref_id and a newer ts):
DELETE FROM tbl t 
WHERE EXISTS (
	SELECT 1 FROM tbl t2 WHERE t.id < t2.id AND t.ref_id = t2.ref_id AND t.ts <= t2.ts
)
If I try to run this query manually (i.e. via PGAdmin), it will run for several hours before I give up on it. However, if I put this query in a procedure and schedule it to run 5 minutes later via pg_cron, it will complete in ~70 seconds. There is no traffic to this table during either the manual or cron runs. To confirm, a count of the rows after the pg_cron run shows 47 million rows, whereas before the run it had 75 million. I have also experienced similar behavior when running other queries, and I am trying to figure out what could be causing such a massive performance difference between pg_cron and manually running queries. I am running Postgres 14.4.
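A hedged diagnostic sketch rather than an answer: comparing the effective settings of the two sessions and checking what the slow manual session is waiting on are the usual first steps when the same statement behaves this differently.
```
-- In the PGAdmin session, capture settings that commonly differ per role or session:
SELECT name, setting, source
FROM pg_settings
WHERE name IN ('work_mem', 'maintenance_work_mem', 'statement_timeout',
               'max_parallel_workers_per_gather');

-- While the manual DELETE is running, check whether it is waiting on anything:
SELECT pid, state, wait_event_type, wait_event,
       pg_blocking_pids(pid) AS blocked_by, query
FROM pg_stat_activity
WHERE state <> 'idle';
```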
perennial_ (101 rep)
Jun 6, 2023, 03:32 AM
3 votes
2 answers
9971 views
"connection failed" error for pg_cron extension
I installed pg_cron and scheduled a job, but there is a connection error: the job is not executed and the cron.job_run_details table shows a "connection failed" error message. As the documentation says: > Important: Internally, pg_cron uses libpq to open a new connection to > the local database. It may be necessary to enable trust authentication > for connections coming from localhost in pg_hba.conf for the user > running the cron job. Alternatively, you can add the password to a > .pgpass file, which libpq will use when opening a connection. I created a .pgpass file with the needed parameters, but I'm not sure how to "tell" libpq to read this .pgpass file for this user, host and DB. Any help appreciated.
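Two hedged notes: libpq looks for .pgpass in the home directory of the OS user that runs the PostgreSQL server process (that is who the pg_cron launcher connects as), so the file usually belongs in that user's home with 0600 permissions; and newer pg_cron versions can sidestep the libpq connection entirely. A sketch of the latter, assuming a pg_cron build with background-worker support (1.3+):
```
-- Avoids the libpq/localhost authentication path altogether:
ALTER SYSTEM SET cron.use_background_workers = 'on';
-- (takes effect after a server restart)
```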
Oto Shavadze (575 rep)
Aug 11, 2021, 09:06 AM • Last activity: Dec 9, 2022, 12:19 PM
2 votes
1 answer
1929 views
Restart master in physical replication
I have 2 servers involved in master-slave (physical) replication. I can see the replication using select * from pg_replication_slots; there is a record saying:

user : rep_user
application : 12/main
state : streaming

I have to change postgresql.conf on the master to enable the pg_cron extension, so I need to restart the master. Is it safe to just restart it using sudo service postgresql stop followed by sudo service postgresql start? Do I have to drop the replication first before restarting? Thanks
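For context, a hedged sketch of the change that forces the restart. A physical standby should not need to be dropped: it simply reconnects once the primary is back, and the replication slot retains the WAL it needs in the meantime.
```
-- On the master; note this overwrites the list, so keep any existing
-- entries, e.g. 'existing_lib,pg_cron':
ALTER SYSTEM SET shared_preload_libraries = 'pg_cron';
-- restart the service, then in the database named by cron.database_name:
CREATE EXTENSION pg_cron;
```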
padjee (337 rep)
Dec 5, 2022, 06:02 AM • Last activity: Dec 9, 2022, 03:21 AM
1 vote
1 answer
2816 views
How to deal with 'failed' pg_cron jobs (postgres)
I have some jobs (Postgres plpgsql procedures) that I am scheduling using the pg_cron extension and cron expressions. Some scripts have failed and thus logged a 'failed' status in the cron.job_run_details table. The problem is that the job will retry, even though it will fail again, which causes unnecessary load on my DB. My main database tables are of course on a different database than the default **postgres** db that pg_cron uses, thus I cannot query the cron.job_run_details table (cross-db). I wanted to query this table to check whether an entry for my job name existed with a 'failed' status. What is the best way to deal with these failures? Since the whole point of pg_cron is to automate, I need a way to have these scripts running and, if they fail, to somehow resolve themselves without manual intervention.
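A minimal sketch under the assumption that a small "watchdog" job scheduled in the postgres database (where the cron metadata lives) is acceptable: it deactivates any job whose recent runs have all failed, so a broken job stops retrying and adding load. The threshold and interval are illustrative.
```
UPDATE cron.job AS j
SET active = false
WHERE (SELECT count(*)
       FROM cron.job_run_details AS d
       WHERE d.jobid = j.jobid
         AND d.status = 'failed'
         AND d.start_time > now() - interval '1 day') >= 3;
```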
PainIsAMaster (131 rep)
Oct 5, 2021, 02:41 PM • Last activity: Oct 20, 2021, 12:00 PM
0 votes
0 answers
962 views
Schedule Postgres pg_cron jobs to trigger by something other than Cron expression
I have some pg_cron jobs that run at night. They basically run some plpgsql procedures and trigger based on cron expressions. The problem is that some of my jobs depend on the jobs that run before them. I have been dealing with that by inserting a record into a utility table I have created, but it's annoying to me that even after the 1st job runs and inserts this record, the other job is constantly re-triggering because of the cron expression. I do have a conditional statement in the 2nd and subsequent jobs so they don't run if the 1st job didn't insert its record into the utility table, but I can't seem to find any way to completely prevent the other jobs from running. Also, I don't know when the 1st job will end, so if I set a specific cron time, the 1st job might not be done, and the 2nd will never run (which I don't want). Ideally, there would be some way for me to link the jobs together (if the 1st job runs and completes, run the 2nd), but I seem to only be able to set expressions like (* * * * *), where basically the job has to repeatedly check all day. Is this the best approach? Is there an alternative to setting cron expressions for the pg_cron jobs, or a way to link jobs?
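A minimal sketch of one way to link them, assuming the jobs are ordinary procedures: call them in sequence from a single wrapper, so only the wrapper needs a cron expression and the 2nd step starts exactly when the 1st finishes. The procedure names are illustrative.
```
CREATE OR REPLACE PROCEDURE nightly_pipeline()
LANGUAGE plpgsql
AS $$
BEGIN
  CALL first_job();    -- illustrative name for the existing 1st procedure
  CALL second_job();   -- illustrative name for the dependent 2nd procedure
END;
$$;

SELECT cron.schedule('nightly_pipeline', '0 1 * * *', 'CALL nightly_pipeline()');
```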
PainIsAMaster (131 rep)
Oct 9, 2021, 06:03 AM
0 votes
1 answer
316 views
Deleting records from two child tables in parallel (concurrently)
I have a Postgres database where I am using the pg_cron extension to automatically perform some database "maintenance" at night time - specifically, my database has some "offers" in it with timestamps, so I move those records to archive tables if they are "expired", and then delete the original records from my primary tables. Anyway, I have two child tables that both have foreign keys referencing the primary key of the parent table. [Diagram: one parent table referenced by two child tables via foreign keys]

pg_cron lets you use these "background_workers", which, from my understanding as a Java developer, are like threads and basically let you run SQL scripts (i.e. functions/procedures) in parallel/concurrently. Thus, I was trying to leverage this and perform as many of my INSERT/DELETE scripts concurrently as possible. The problem I'm having now is that when my cron job triggers some of my DELETE scripts ("script" == Postgres plpgsql procedure) to run for these three tables, I have it coded such that: 1. the child tables delete BEFORE the parent (but they can run in any order AND concurrently), 2. the parent table deletes last (ALWAYS). As far as I know, this avoids any weird FK-exception errors (like trying to delete from the parent table first).

Anyway, because I am using the background workers, both child-table delete scripts CAN AND WILL RUN CONCURRENTLY (as I stated above). I thought this was OK, but I see now that one of the scripts will run a delete on one of the child tables just fine - but whichever child-table script runs slightly after ends up never completing (the job just shows "running" in the pg_cron.job_run_details table). ***Am I not able to delete concurrently*** from the child tables that contain a FK reference to the PK in the same parent table?

As depicted in the diagram, the FK properties for both tables are: MATCH FULL ON DELETE NO ACTION ON UPDATE NO ACTION. Also FYI, I did test altering the FK constraints and setting ON DELETE to "CASCADE" so I would only need 1 plpgsql procedure, but that delete was far too slow (if anyone has any tips on this maybe I can go back...). Any help greatly appreciated!
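A hedged diagnostic sketch, since "one delete finishes and the other stays 'running'" usually indicates a lock wait: while the second worker appears stuck, these queries show which session it is blocked behind and which relation the ungranted lock request is for.
```
-- Sessions and the PIDs blocking them:
SELECT pid, state, wait_event_type, wait_event,
       pg_blocking_pids(pid) AS blocked_by, query
FROM pg_stat_activity
WHERE state <> 'idle';

-- Ungranted lock requests and the relations involved:
SELECT locktype, relation::regclass AS relation, mode, granted, pid
FROM pg_locks
WHERE NOT granted;
```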
PainIsAMaster (131 rep)
Sep 1, 2021, 12:37 AM • Last activity: Sep 1, 2021, 03:23 AM