
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
6740 views
Running pg_cron jobs on a different database and non-public schema says "No procedure matches the given name and argument types"
I'm running pg_cron jobs, which of course run from the default **postgres** db. I have some functions/procedures that I created in *another database* called **test**, and have assigned these pg_cron jobs to that new database via (example):
select cron.schedule ('some_random_function',
                      '* * * * *',
                      'call test.my_random_funct()'
           );

UPDATE cron.job SET database = 'test';
I know this works because when I had the function/procedure in the public schema of the test database, everything worked. However, there is another schema in the **test** database that I want to use, called **poop**, but when I schedule the pg_cron job against this database and schema, all I get are error messages saying the function/procedure doesn't exist (even though it does). Do I have to grant some permission on the schema? I'm running under a sysadmin account, so it should have all needed privileges. Here is a sample error message that shows up in cron.job_run_details:
ERROR: procedure my_rand_funct() does not exist
HINT: No procedure matches the given name and argument types. You might need to add explicit type casts.
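
For what it's worth, one likely cause, sketched below under the assumption that the procedure really lives in the **poop** schema: the scheduled command runs in a fresh session whose default search_path does not include **poop**, so the call has to be schema-qualified, or the search_path has to be set for the role the job runs as (assumed here to be postgres):

select cron.schedule ('some_random_function',
                      '* * * * *',
                      'call poop.my_random_funct()'
           );

UPDATE cron.job SET database = 'test';

-- alternative: make poop visible by default for the job's role in that database
ALTER ROLE postgres IN DATABASE test SET search_path = poop, public;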
PainIsAMaster (131 rep)
Oct 5, 2021, 12:03 AM • Last activity: Aug 1, 2025, 12:04 AM
1 vote
1 answer
171 views
Filter a SELECT query with predicates passed as a single hstore value
For an API I am building, I am supposed to write a function with a single hstore argument containing colname => value pairs, so that queries may be filtered with a WHERE clause containing an arbitrary number of sub-clauses. I currently have the following function definition (not an MWE):
CREATE OR REPLACE FUNCTION api.func( conds HSTORE )
RETURNS TABLE (LIKE api.tab) AS 
$$
BEGIN
	RETURN QUERY
		SELECT * 
		FROM api.tab 
		WHERE -- conds here
        LIMIT 25 ;
END;
$$ LANGUAGE plpgsql;
I am not really sure how to continue. Could someone please give me a hint?
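
A minimal sketch of one way to finish this, assuming every key in conds is a trusted column name of api.tab (and using RETURNS SETOF api.tab for the signature): aggregate the pairs from each(conds) with string_agg(), quote identifiers with %I and values with %L, and run the result with RETURN QUERY EXECUTE:

CREATE OR REPLACE FUNCTION api.func(conds HSTORE)
RETURNS SETOF api.tab AS
$$
DECLARE
    where_clause text;
BEGIN
    -- build "col1 = 'v1' AND col2 = 'v2' ..." from the hstore pairs
    SELECT string_agg(format('%I = %L', key, value), ' AND ')
    INTO where_clause
    FROM each(conds);

    RETURN QUERY EXECUTE
        'SELECT * FROM api.tab'
        || COALESCE(' WHERE ' || where_clause, '')
        || ' LIMIT 25';
END;
$$ LANGUAGE plpgsql;

With an empty hstore the WHERE clause is simply omitted, thanks to the COALESCE.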
eslukas (111 rep)
Aug 8, 2022, 11:45 AM • Last activity: Jul 18, 2025, 09:06 PM
2 votes
1 answer
1811 views
How to specify, with a parameter, which field to extract from a RECORD variable
Here's a table:

CREATE TABLE t_heights (id INT, height REAL);
INSERT INTO t_heights VALUES (1,53.63), (2,45.19), (3,47.06);

Reading through the table rows with a FOR...LOOP and a RECORD type variable like this works:

CREATE OR REPLACE FUNCTION fnct_row_by_row (input_table regclass)
RETURNS VOID
LANGUAGE plpgsql
AS $$
DECLARE
    current_row RECORD;
BEGIN
    FOR current_row IN EXECUTE 'SELECT * FROM '||input_table
    LOOP
        RAISE NOTICE 'field value: %',current_row.height;  -- <<< hardcoded field name
    END LOOP;
END;
$$ ;

Results:

SELECT fnct_row_by_row ('t_heights') ;
NOTICE: field value: 53.63
NOTICE: field value: 45.19
NOTICE: field value: 47.06

However, the field extracted from the RECORD variable needs to have its name hardcoded. **How to specify dynamically the field to be extracted?** The following generates an error:

CREATE OR REPLACE FUNCTION fnct_row_by_row2 (input_table regclass, input_field_name TEXT)
RETURNS VOID
LANGUAGE plpgsql
AS $$
DECLARE
    current_row RECORD;
BEGIN
    FOR current_row IN EXECUTE 'SELECT * FROM '||input_table
    LOOP
        RAISE NOTICE 'specified field: %',current_row.input_field_name;  -- <<< field name from parameter
    END LOOP;
END;
$$ ;

Execution:

SELECT fnct_row_by_row2 ('t_heights','height') ;
ERROR: record "current_row" has no field "input_field_name"
CONTEXT: SQL statement "SELECT current_row.input_field_name"
PL/pgSQL function fnct_row_by_row2(regclass,text) line 10 at RAISE
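
One workaround, sketched here under the assumption that input_field_name always names an existing column, is to convert the record to jsonb and pick the field by name:

CREATE OR REPLACE FUNCTION fnct_row_by_row2 (input_table regclass, input_field_name TEXT)
RETURNS VOID
LANGUAGE plpgsql
AS $$
DECLARE
    current_row RECORD;
BEGIN
    FOR current_row IN EXECUTE 'SELECT * FROM '||input_table
    LOOP
        -- to_jsonb() turns the row into a jsonb object; ->> extracts the field as text
        RAISE NOTICE 'specified field: %', to_jsonb(current_row) ->> input_field_name;
    END LOOP;
END;
$$ ;

If the original data type has to be preserved, dynamic SQL of the form EXECUTE format('SELECT ($1).%I', input_field_name) USING current_row INTO some_variable is another option.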
Sébastien Clément (1825 rep)
Jan 24, 2018, 09:58 PM • Last activity: Jul 7, 2025, 07:05 PM
0 votes
1 answer
176 views
Using PERFORM to insert the result of a SELECT statement string into a temp table
I am trying to insert data into a temp_table and then truncate the table after analyzing the result. Here is my code:

CREATE OR REPLACE FUNCTION validation() RETURNS text AS $$
DECLARE counter INTEGER;
DECLARE minsid INTEGER;
DECLARE maxsid INTEGER;
DECLARE rec RECORD;
DECLARE stmt varchar;
BEGIN
  SELECT MIN(sid) INTO minsid FROM staging.validation;
  SELECT MAX(sid) INTO maxsid FROM staging.validation;
  CREATE TEMPORARY TABLE temp_table (col1 TEXT, col2 INTEGER, col3 BOOLEAN) ON COMMIT DROP;
  FOR counter IN minsid..maxsid LOOP
    RAISE NOTICE 'Counter: %', counter;
    SELECT sql INTO stmt FROM staging.validation WHERE sid = counter;
    RAISE NOTICE 'sql: %', stmt;
    PERFORM 'INSERT INTO temp_table (col1, col2, col3) ' || stmt;
    IF temp_table.col3 = false THEN
      RAISE NOTICE 'there is a false value';
    END IF;
  END LOOP;
END;
$$ LANGUAGE plpgsql;

Whenever I run this function with

SELECT * FROM validation();

I get an error:

ERROR: missing FROM-clause entry for table "temp_table"
Where: PL/pgSQL function validation() line 21 at IF
Here is how my staging.validation table looks - https://docs.google.com/spreadsheets/d/1bXO9gqFS-GtcC1qJtgNbFkR6ygOuPtR_RZoU7VNhgrI/edit?usp=sharing
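
A sketch of the two changes that appear to be needed (assuming each statement stored in staging.validation.sql is a SELECT returning three matching columns): a dynamic SQL string has to be run with EXECUTE, since PERFORM only evaluates the string expression and discards it, and a table column can only be inspected through a query, not referenced like a variable:

EXECUTE 'INSERT INTO temp_table (col1, col2, col3) ' || stmt;

IF EXISTS (SELECT 1 FROM temp_table WHERE col3 = false) THEN
    RAISE NOTICE 'there is a false value';
END IF;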
hky404 (313 rep)
Sep 15, 2017, 06:28 PM • Last activity: Jun 29, 2025, 04:08 AM
0 votes
1 answer
3603 views
Stored Procedure for Vacuum in PostgreSQL
I want to create a stored procedure to trigger vacuum. I know there are many ways to automate vacuum, but I would like to use a stored procedure (PostgreSQL 9.6). The problem I'm facing is that I can't run VACUUM inside BEGIN/END; a suggested workaround is set autocommit=on, but when I add that autocommit setting inside the stored procedure, it throws an error.

### Sample SP:
create procedure test(table_list varchar(1000))
LANGUAGE plpgsql 
AS 
$$ 
DECLARE 
*** some values ***
BEGIN 
EXECUTE 'set autocommit=on';
*** Rest of the command to get the table names one by one and vacuum in a loop***
END;
$$
Is this possible? Or any alternate way?
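
For context, autocommit is a client-side (psql) setting rather than a server parameter, and VACUUM cannot run inside a function because a function always executes inside a transaction. One alternative that stays in plain SQL is to generate the VACUUM statements and let psql execute them with \gexec; a sketch, where vacuum_table_list is a hypothetical table holding the table names:

-- run from psql; \gexec executes every statement the query returns
SELECT format('VACUUM (VERBOSE, ANALYZE) %I.%I', schemaname, tablename)
FROM pg_tables
WHERE tablename IN (SELECT table_name FROM vacuum_table_list) \gexec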
TheDataGuy (1986 rep)
Apr 11, 2020, 08:27 PM • Last activity: Jun 9, 2025, 04:04 PM
1 vote
1 answer
481 views
Loop Block with execution doesn't persist
I need to iterate over a result set and update records from the data. I am using an anonymous block, but the changes do not persist. Like this...

do $$
declare dados Record;
begin
	for dados in
		select vd."Id"
		from "Vendas" vd
		inner join "ItensVendas" iv on iv."VendaId" = vd."Id"
		where vd."Especie" = 'NFE'
			and coalesce(vd."ClienteId", 0) = 0
			and iv."CodCFOP" in (5949, 5927)
		group by vd."ChaveNFE", vd."Id", vd."DataCadastro"
	loop
		execute format('update "Vendas" set "EhConsumoProprio" = true where "Id" = %L', dados."Id");
	end loop;
end $$;

And again...

do $$
declare dados Record;
declare cmd varchar(300);
begin
	for dados in
		select vd."ChaveNFE", vd."Id", vd."DataCadastro"
		from "Vendas" vd
		inner join "ItensVendas" iv on iv."VendaId" = vd."Id"
		where vd."Especie" = 'NFE'
			and coalesce(vd."ClienteId",0)= 0
			and iv."CodCFOP" in (5949, 5927)
		group by vd."ChaveNFE", vd."Id", vd."DataCadastro"
	loop
		raise notice '%',dados."Id";
		update "Vendas" set "EhConsumoProprio"=true where "Id"=dados."Id"; -- It doesn't persist
		cmd := format('update "Vendas" set "EhConsumoProprio"=true where "Id"=%L', dados."Id");
		raise notice '%', cmd;
		execute cmd; -- It doesn't persist
	end loop;
end $$;
Jones Romão (11 rep)
Mar 15, 2019, 01:10 PM • Last activity: May 29, 2025, 04:04 AM
1 vote
1 answer
2452 views
Use variable as column name
I'm trying to create a trigger in PostgreSQL 14:
create or replace function add_stats_to_player() returns trigger as $add_stats_to_player$
    declare
        equipment_id int := new.player_equipment_armor['armor_id'];
        equipment_stats varchar[] := array(select jsonb_object_keys(armor_stats['stats']) from equipments_armor where armor_id = equipment_id);
        equipment_stat varchar;

    begin
        raise notice '%', equipment_id;
        foreach equipment_stat in array equipment_stats loop
                if old.player_equipment_armor['stats'][equipment_stat] is not null then
                     update players set equipment_stat = equipment_stat - old.player_equipment_armor['stats'][equipment_stat] + new.player_equipment_armor['stats'][equipment_stat] where player_id = new.player_id;
                end if;
        end loop;
        return new;
    END;
$add_stats_to_player$ language plpgsql;

create trigger add_stats_to_player after insert or update of player_equipment_armor on players
    for each row WHEN (pg_trigger_depth() < 1) execute function add_stats_to_player();
Can I make a column name from the variable equipment_stat, or is that a bad idea? **Update:** here's how I did it:
create or replace function add_stats_to_player() returns trigger as $add_stats_to_player$
    declare
        equipment_id int := new.player_equipment_armor['armor_id'];
        equipment_stats varchar[] := array(select jsonb_object_keys(armor_stats['stats']) from equipments_armor where armor_id = equipment_id);
        equipment_stat varchar;
    begin
        if (TG_OP = 'UPDATE') then
            foreach equipment_stat in array equipment_stats loop
                    if old.player_equipment_armor['stats'][equipment_stat] is not null then
                       execute 'update players set ' || equipment_stat || ' = ' || equipment_stat || ' - ' || old.player_equipment_armor['stats'][equipment_stat] || '+' || new.player_equipment_armor['stats'][equipment_stat] || ' where player_id = ' || new.player_id;
                    else
                       execute 'update players set ' || equipment_stat || ' = ' || new.player_equipment_armor['stats'][equipment_stat] || ' where player_id = ' || new.player_id;
                    end if;
            end loop;
        elseif  (TG_OP = 'INSERT') then
            foreach equipment_stat in array equipment_stats loop
                execute 'update players set ' || equipment_stat || ' = ' || new.player_equipment_armor['stats'][equipment_stat] || ' where player_id = ' || new.player_id;
            end loop;
        end if;
        return new;
    end;
$add_stats_to_player$ language plpgsql;

create trigger add_stats_to_player after insert or update of player_equipment_armor on players
    for each row WHEN (pg_trigger_depth() < 1) execute procedure add_stats_to_player();
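
Using the variable as a column name is only possible through dynamic SQL, exactly as in the update above; format() makes the same statement a bit safer and easier to read by quoting the identifier with %I and the values with %L. A sketch of the UPDATE branch, assuming equipment_stat always matches a real column of players:

execute format(
    'update players set %I = %I - %L + %L where player_id = %L',
    equipment_stat, equipment_stat,
    old.player_equipment_armor['stats'][equipment_stat],
    new.player_equipment_armor['stats'][equipment_stat],
    new.player_id);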
Sevas (11 rep)
May 7, 2022, 01:34 PM • Last activity: May 9, 2025, 07:04 PM
1 vote
1 answer
3433 views
Bulk Inserts in Postgres
This is with respect to a data migration activity where the historical data from the client database is migrated to the vendor Postgres database. There will be millions of transactions that need to be migrated as a big-bang approach. In the Oracle database, I used to use the below template of code for migration -

create or replace PROCEDURE PRC_TEST AS
DECLARE
  CURSOR CUR IS SELECT ID,NAME FROM test;
  TYPE test_typ IS TABLE OF CUR%ROWTYPE INDEX BY PLS_INTEGER;
  test_tbl test_typ;
BEGIN
  OPEN CUR;
  LOOP
    FETCH cur BULK COLLECT INTO test_tbl LIMIT 1000;
    DBMS_OUTPUT.PUT_LINE(test_tbl.COUNT);
    FORALL I IN 1..test_tbl.COUNT --SAVE EXCEPTIONS
      INSERT INTO test1(ID,NAME)
      VALUES ( test_tbl(I).id, test_tbl(I).name );
    FORALL I IN 1..test_tbl.COUNT
      UPDATE test1 SET name = name||test_tbl(I).NAME WHERE id =test_tbl(I).id;
    DBMS_OUTPUT.PUT_LINE('AFTER'|| test_tbl.COUNT);
    EXIT WHEN CUR%NOTFOUND ;
  END LOOP;
  commit;
  close cur;
EXCEPTION
  WHEN OTHERS THEN
    FOR I IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      dbms_output.put_line('error'||sqlerrm);
    END LOOP;
END;
End PRC_TEST;

Is there equivalent plpgsql code available for the same? What approach should be used in Postgres when handling such a migration activity? Please provide some best practices to follow for better performance and for handling/storing the error records in Postgres. Thanks..!
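
PL/pgSQL has no BULK COLLECT or FORALL; the usual equivalent is a single set-based statement, optionally split into batches for very large tables. A minimal sketch, assuming test and test1 both have id and name columns:

-- one set-based statement replaces the cursor + FORALL loop
INSERT INTO test1 (id, name)
SELECT id, name
FROM   test;

UPDATE test1 t1
SET    name = t1.name || t.name
FROM   test t
WHERE  t1.id = t.id;

For per-batch error handling, a plpgsql loop over id ranges with a BEGIN ... EXCEPTION WHEN OTHERS block that writes failed keys to an error table is the common pattern, at the cost of being slower than the pure set-based form.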
cpb (11 rep)
Jul 28, 2020, 09:07 AM • Last activity: May 5, 2025, 07:00 AM
1 vote
2 answers
5724 views
Trigger function does not exist, but I am pretty sure it does
I have a procedure:
create or replace procedure pro_update_last_read(in input_sensor_id integer, in read_time timestamp)
as
$$
begin
    update sensor
    set last_read = read_time
    where sensor_id = input_sensor_id;
end;
$$ language plpgsql;
And a trigger that calls it:
create trigger tri_last_read
    after insert or update of report_time
    on report
execute procedure pro_update_last_read(sensor_id, report_time);
However, when creating the trigger, I get the error message:

> ERROR: function pro_update_last_read() does not exist

Why is this error happening?
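
For context, a trigger cannot call a regular procedure and cannot pass column values as arguments; it needs a function declared RETURNS trigger, which reads the row from NEW. A sketch of what that could look like here, assuming report has sensor_id and report_time columns:

create or replace function trg_update_last_read()
returns trigger
language plpgsql
as $$
begin
    update sensor
    set last_read = NEW.report_time
    where sensor_id = NEW.sensor_id;
    return NEW;
end;
$$;

create trigger tri_last_read
    after insert or update of report_time
    on report
    for each row
execute function trg_update_last_read();  -- on PostgreSQL 10 and older: execute procedure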
Cygnini - ShadowRi5ing (13 rep)
Oct 16, 2019, 02:38 AM • Last activity: Apr 30, 2025, 04:44 AM
0 votes
1 answer
3553 views
How to create multiple temp tables using records from a CTE that I need to call multiple times in Postgres plpgsql Procedure?
I am already using a CTE expression within a plpgsql procedure to grab some foreign keys from (1) specific table, we can call it master_table. I created a brand new table in my DDL statements, we can call it table_with_fks, so this table holds the FKs I am fetching and saving. I later take these FKs from my table_with_fks and JOIN on my other tables in the database to get the entire original record (the full record with all columns from its corresponding table) and insert it into an archive table. I have a Lucidchart diagram I drew that might make what I say down below make much more sense (image not included here). My CTE example:
LOOP
   EXIT WHEN some_condition;

WITH fk_list_cte AS (
  SELECT mt.fk1, mt.fk2, mt.fk3, mt.fk4
  FROM master_table mt
  WHERE mt.created_date = 1 year old
  LIMIT 10000
)
INSERT INTO table_with_fks (SELECT * FROM fk_list_cte);

commit;
END LOOP;
Now, I have (4) other Procedures that JOIN on each FK in this table_with_fks with its **parent** table that it references. I do this because as I said, I only got the FK at first, and I don't have all the original columns for the record. So I will do something like
LOOP
   EXIT WHEN some_condition;

WITH full_record_cte AS (
  SELECT * 
  FROM table_with_fks fks
  JOIN parent_table1 pt1
  ON fks.fk1 = pt1.id
  LIMIT 10000),
  INSERT INTO (select * from full_record_cte);

commit;
END LOOP;
***NOW***, what I want to do, is instead of having to RE-JOIN 4 times later on these FK's that are found in my table_with_fks, I want to use the first CTE fk_list_cte to JOIN on the parent tables right away and grab the full record from each (4) tables and put it in some TEMP postgres table. I think I will need (4) unique TEMP tables, as I don't know how it would work if I combine all their data into one BIG table, because each table has different data/different columns. Is there a way to use the original CTE fk_list_cte and call it multiple times in succession and CREATE 4 TEMP tables right after, that all use the original CTE? example:
LOOP
   EXIT WHEN some_condition;

WITH fk_list_cte AS (
  SELECT mt.fk1, mt.fk2, mt.fk3, mt.fk4
  FROM master_table mt
  WHERE mt.created_date = 1 year old
  LIMIT 10000
),
WITH fetch_fk1_original_record_from_parent AS (
  SELECT * 
  FROM fk_list_cte cte
  JOIN parent_table1 pt1
  ON  cte.fk1 = pt1.id
),
WITH fetch_fk2_original_record_from_parent AS (
  SELECT * 
  FROM fk_list_cte cte
  JOIN parent_table2 pt2
  ON  cte.fk2 = pt2.id
),
WITH fetch_fk3_original_record_from_parent AS (
  SELECT * 
  FROM fk_list_cte cte
  JOIN parent_table3 pt3
  ON  cte.fk3 = pt3.id
),
WITH fetch_fk4_original_record_from_parent AS (
  SELECT * 
  FROM fk_list_cte cte
  JOIN parent_table4 pt4
  ON  cte.fk4 = pt4.id
),
CREATE TEMPORARY TABLE fk1_tmp_tbl AS (
  SELECT * 
  FROM fetch_fk1_original_record_from_parent
)
CREATE TEMPORARY TABLE fk2_tmp_tbl AS (
  SELECT * 
  FROM fetch_fk2_original_record_from_parent
)
CREATE TEMPORARY TABLE fk3_tmp_tbl AS (
  SELECT * 
  FROM fetch_fk3_original_record_from_parent
)
CREATE TEMPORARY TABLE fk4_tmp_tbl AS (
  SELECT * 
  FROM fetch_fk4_original_record_from_parent
);

END LOOP;
I know the 4 CREATE TEMPORARY TABLE statements definitely won't work (can I even create 4 temp tables at once?). Does anyone see the logic of what I am trying to do here and can help me?
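
One detail worth keeping in mind: a WITH clause only exists for the single statement it is attached to, so one CTE cannot feed four separate CREATE TEMPORARY TABLE ... AS statements. A sketch of an alternative that materializes the FK batch once and reuses it (the "1 year old" filter is written here as created_date < now() - interval '1 year', which is an assumption about the real predicate):

CREATE TEMPORARY TABLE fk_batch AS
SELECT mt.fk1, mt.fk2, mt.fk3, mt.fk4
FROM master_table mt
WHERE mt.created_date < now() - interval '1 year'
LIMIT 10000;

CREATE TEMPORARY TABLE fk1_tmp_tbl AS
SELECT pt1.* FROM fk_batch b JOIN parent_table1 pt1 ON b.fk1 = pt1.id;

CREATE TEMPORARY TABLE fk2_tmp_tbl AS
SELECT pt2.* FROM fk_batch b JOIN parent_table2 pt2 ON b.fk2 = pt2.id;

-- ... and likewise for fk3 / fk4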
PainIsAMaster (131 rep)
Oct 9, 2021, 04:01 AM • Last activity: Apr 7, 2025, 12:15 AM
1 vote
1 answer
65 views
Do I need to CREATE LANGUAGE plpgsql in PostgreSQL 9+
I have a PHP web application using a PostgreSQL database. Ideally, installing this web application should be easy and only require a database name and user. Then, the web application loads an SQL file and imports the tables, indices, and functions. Functions use the plpgsql language.

### PostgreSQL 8.4+

At first, the minimum required PostgreSQL version was 8.4. In order to avoid the #102 error language "plpgsql" does not exist, I was using this at the top of the SQL file:
--
-- Name: create_language_plpgsql(); Type: FUNCTION; Schema: public; Owner: postgres
--

CREATE FUNCTION create_language_plpgsql()
RETURNS BOOLEAN AS $$
    CREATE LANGUAGE plpgsql;
    SELECT TRUE;
$$ LANGUAGE SQL;

SELECT CASE WHEN NOT (
    SELECT TRUE AS exists FROM pg_language
    WHERE lanname='plpgsql'
    UNION
    SELECT FALSE AS exists
    ORDER BY exists DESC
    LIMIT 1
) THEN
    create_language_plpgsql()
ELSE
    FALSE
END AS plpgsql_created;

DROP FUNCTION create_language_plpgsql();
This will create the plpgsql language if it does not exist.

### PostgreSQL 9+

Now, I want to drop PostgreSQL 8.4 support. Judging from the wiki: https://wiki.postgresql.org/wiki/CREATE_OR_REPLACE_LANGUAGE

> For PostgreSQL 9.0 and newer [...] "CREATE OR REPLACE LANGUAGE" is the native syntax for installing a procedural language with no error if it's already installed.

So I thought I could replace the above SQL code with
CREATE OR REPLACE LANGUAGE plpgsql;
But when I run this SQL query on a database created on cPanel, I get the following error: ERROR: must be owner of language plpgsql. Indeed, on cPanel, plpgsql exists, but the user is not the owner of the plpgsql language.

### Question

Should I roll back to the old SQL code, which was working fine on both PostgreSQL 8.4 and 9+? Judging from this link, https://www.postgresql.org/docs/9.0/plpgsql-overview.html

> In PostgreSQL 9.0 and later, PL/pgSQL is installed by default. However it is still a loadable module, so especially security-conscious administrators could choose to remove it.

Or can I be safe without checking if the plpgsql language exists at all? And assume plpgsql will _always_ be available?
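
If the installer only needs to verify that plpgsql is there (rather than create it), a simple catalog check avoids the ownership problem entirely; a minimal sketch:

SELECT EXISTS (
    SELECT 1
    FROM pg_catalog.pg_language
    WHERE lanname = 'plpgsql'
) AS plpgsql_installed;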
François J. (21 rep)
Mar 26, 2025, 05:42 PM • Last activity: Mar 26, 2025, 07:15 PM
2 votes
0 answers
23 views
Greenplum/PostgreSQL: Queries run individually, but won't run in a plpgsql function
We are running Greenplum 6.28.1, which is built on PostgreSQL 9.4.26. We have a master node and 6 server nodes, all running Oracle Linux release 8.10. We have 5 queries that progressively build temp tables that are used to generate a final result set. Those queries, in the order they are run, are:
drop table if exists hourly_time_windows;
create temp table hourly_time_windows as (
	WITH
        time_windows as (
            select 
                (generate_series(0, 23, 1)::text || ' Hours'::text)::interval as window_start_interval,
                (generate_series(1, 24, 1)::text || ' Hours'::text)::interval as window_end_interval
            ),
        dates_and_users as (
			select distinct
					activity_date,
					career_account
			from network_logs_processed.hourly_internet_activity_building log
			where activity_date >= '2024-02-01'::date and activity_date  0
			THEN 1
			ELSE 0 
			END) as device_moved_buildings,
		sum(megabytes_transferred) as megabytes_transferred
	from 
		ip.hourly_activity_with_building_lag data
	left outer join 
		(  
            select schedule.*, banner.career_account from utility_features.student_class_schedule_detail schedule
            left outer join banner_oltp.banner_lookup banner 
                on schedule.person_uid = banner.person_uid::numeric
		) class
		on 
			data.activity_date = class.calendar_date
			and data.career_account = class.career_account
			and data.building = class.building 
			and (
					(data.session_start_in_hour between class.begin_time and class.end_time)
					OR
					(data.session_end_in_hour between class.begin_time and class.end_time)
					OR 
					(data.session_end_in_hour > class.end_time and data.session_start_in_hour  0 and
	            campus in ('PWL','CEC') and
	            schedule_type not in ('IND','RES','EX') and
	            substring(course_identification from '[a-zA-Z]+#"[0-9]#"%' for '#')::int <= 4 and   --undergrad courses only
	            sub_academic_period in ('1','F8','FHS','M1','M2','M3','M12','M23','S8','SHS') and
	            registration_status like 'R%'
            ),
        housed_students as (
    		select distinct academic_period, person_uid
		    from utility_features.resident_student_room_details
            ),
        full_student_list as (
    		select academic_period, person_uid::numeric from registered_students
    		UNION
    		select academic_period, person_uid::numeric from housed_students
            )
    select 
    	full_student_list.academic_period, 
    	full_student_list.person_uid, 
    	banner.career_account 
    from 
    	full_student_list
    left outer join banner_oltp.banner_lookup banner
    	on full_student_list.person_uid = banner.person_uid::numeric
) distributed by (career_account);
drop table if exists aggregated_hourly_data;
create temp table aggregated_hourly_data as (
	select
		hourly_time_windows.career_account,
		banner.puid,
		banner.person_uid,
		ac.academic_period,
		hourly_time_windows.activity_date,
		hourly_time_windows.window_start,
		hourly_time_windows.window_end,
		sum(not_in_residential) as not_in_residential,
		sum(in_class) as in_class,
		sum(device_moved_buildings) as device_moved_buildings,
		sum(megabytes_transferred) as megabytes_transferred
	from 
		hourly_time_windows 
	left outer join hourly_activity_with_movement_flag
	on 	
		hourly_time_windows.career_account = hourly_activity_with_movement_flag.career_account
		and hourly_time_windows.window_start = hourly_activity_with_movement_flag.window_start
	left outer join banner_oltp.banner_lookup banner 
	on 
		hourly_time_windows.career_account = banner.career_account
	left outer join utility.academic_calendar ac 
	on 
		hourly_time_windows.window_start between ac.last_term_end_date + INTERVAL '1 day' and ac.end_date
	inner join included_students
		on hourly_time_windows.career_account = included_students.career_account
			and ac.academic_period = included_students.academic_period
	group by 1,2,3,4,5,6,7
) distributed by (career_account);
Here is my problem: If I run each query directly and count the rows in the temp table after it is created, all five queries complete in a minute or less. (That's 60 seconds to run all 5, not 60 seconds for each of them.)

I created a plpgsql function to run those five queries. The first four completed in about the same time it takes when I run them directly, and the final row counts for all four tables were exactly the same. But I've let the function run for 30+ minutes and the fifth query still never completes.

I also tried creating a plpgsql function to run just the first four queries and then running the fifth query directly. Again, the function completes very quickly, and the temp tables it creates have the same row counts as when I run the queries directly, but the fifth query still does not complete.

I know PostgreSQL optimizes things differently when run in a function rather than individually, but I really thought that running just the first four in a function and the fifth directly would give different results. I am kind of at my wit's end here. Has anyone run into anything like this before?
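
Not an answer, but one pattern that is sometimes relevant in this situation: PL/pgSQL caches plans for static SQL, and temp tables created inside the same function have no statistics until they are ANALYZEd, so the plan chosen for the last statement can differ a lot from what you see interactively. A sketch of the usual workaround, where v_final_select is a hypothetical text variable holding the fifth query's SELECT:

-- give the planner statistics for the freshly built temp tables
ANALYZE hourly_time_windows;
ANALYZE hourly_activity_with_movement_flag;

-- run the final statement as dynamic SQL so it is planned now, with those statistics
EXECUTE 'create temp table aggregated_hourly_data as ('
        || v_final_select
        || ') distributed by (career_account)';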
lpscott (33 rep)
Mar 25, 2025, 02:59 PM
0 votes
1 answer
113 views
IF NOT EXISTS - seems to be applied at random in postgres
There is a mechanism to CREATE or DROP objects only depending on whether they already exist (IF NOT EXISTS / IF EXISTS). To me it seems that Postgres applies this randomly.
CREATE EXTENSION > true
CREATE SCHEMA > true
CREATE DATABASE > false
CREATE ROLE > false
I am aware that you can build something in pl/pgsql or create a function mimicking this behavior, which requires fairly advanced skills. Just for my understanding: is there any explanation why this is set up in this 'random' manner?
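
Whatever the historical reasons, the usual workarounds are short; a sketch with placeholder names app_user and app_db (roles can be guarded inside a DO block, while CREATE DATABASE cannot run inside a transaction or function and is therefore generated and executed client-side, e.g. with psql's \gexec):

DO $$
BEGIN
   IF NOT EXISTS (SELECT FROM pg_roles WHERE rolname = 'app_user') THEN
      CREATE ROLE app_user LOGIN;
   END IF;
END
$$;

SELECT 'CREATE DATABASE app_db'
WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'app_db') \gexec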
vrms (269 rep)
Feb 24, 2025, 10:46 AM • Last activity: Feb 24, 2025, 12:16 PM
0 votes
1 answer
2159 views
PostgreSQL - Write loop that creates new tables based on select query
In concept this is simple. I'm constructing a new database. I have all of the tables I want to create in tb.table_build and I want to write a statement that creates new tables for each of the items in the table_build select query.
DO $$
DECLARE counterVariable integer;
		CounterMax Varchar(150);
begin
  for Countermax in (select table_name 
                     from table_build 
                     where build_type = 'Number')
  loop
	CREATE TABLE Countermax (
	LIKE template_text_input);
  end loop;
end;
$$;
I cannot pass the variable CounterMax into the Create Table statement.
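
A table name cannot be a plain variable in DDL; it has to go through dynamic SQL with EXECUTE. A minimal sketch, assuming table_name only contains trusted, valid names:

DO $$
DECLARE
    countermax text;
BEGIN
    FOR countermax IN
        SELECT table_name FROM table_build WHERE build_type = 'Number'
    LOOP
        -- %I quotes the value as an identifier
        EXECUTE format('CREATE TABLE %I (LIKE template_text_input)', countermax);
    END LOOP;
END;
$$;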
J Taylor (1 rep)
Apr 29, 2022, 04:55 AM • Last activity: Feb 14, 2025, 04:02 PM
11 votes
2 answers
22215 views
Call multiple functions from trigger?
I run **PostgreSQL-9.2.4**. Is it possible to call 2 functions from a trigger? Let's say I have two functions for two different tables to be executed if the following trigger fires:

**Trigger:**

CREATE TRIGGER start
    ON system_status
    FOR EACH ROW
    WHEN ((new.event = start_task))
    EXECUTE PROCEDURE ...()

**Function 1:** *(When a task starts, remove any previously assigned next task for this system)*

CREATE FUNCTION void_next_task() RETURNS trigger AS $$
BEGIN
    DELETE FROM tasks_status ts
    WHERE ts.system = NEW.system
      AND ts.event = 'next_task';
    RETURN NEW;
END;
$$ LANGUAGE plpgsql

**Function 2:** *(If the inserted combination of task and system is already present in the table, then mark any earlier record with this combination as deleted)*

CREATE FUNCTION void_dup_task() RETURNS trigger AS $$
BEGIN
    UPDATE system_status ss
    SET deleted = 'TRUE'
    WHERE ss.system = NEW.system
      AND ss.task = NEW.task
      AND ss.deleted IS FALSE;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql

So I ended up with the following ways to resolve it:
1. Have a trigger which calls two functions;
2. Have a function which performs the update on one table and the delete on the other;
3. Have two identical triggers and two different functions.

Before I go ahead and implement solution 3, could you advise me whether solutions 1 or 2 are possible at all?
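
As far as I know, a single trigger can execute only one function, so option 1 is out, but option 2 works: one trigger function can contain both statements. A sketch built from the two functions above:

CREATE FUNCTION void_next_and_dup_task() RETURNS trigger AS $$
BEGIN
    DELETE FROM tasks_status ts
    WHERE ts.system = NEW.system
      AND ts.event = 'next_task';

    UPDATE system_status ss
    SET deleted = TRUE
    WHERE ss.system = NEW.system
      AND ss.task = NEW.task
      AND ss.deleted IS FALSE;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;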
VL-80 (213 rep)
Nov 30, 2013, 06:42 PM • Last activity: Feb 13, 2025, 05:22 PM
0 votes
1 answer
1082 views
"ERROR: Cursor does not exist" in PL/pgSQL procedure
I am trying to transfer a specific film, in a specific quantity, from one store_id to another. For this I have tried writing a procedure:

CREATE OR REPLACE PROCEDURE transfer_inventory(
    p_film_id INTEGER,
    p_from_store_id INTEGER,
    p_to_store_id INTEGER,
    p_quantity INTEGER
)
LANGUAGE plpgsql
AS $$
DECLARE
    v_inventory_id INTEGER;
    v_current_quantity INTEGER;
    v_transferred_quantity INTEGER := 0;
    v_remaining_quantity INTEGER;
    v_total_available_quantity INTEGER := 0;
    v_cursor CURSOR FOR
        SELECT inventory_id, COUNT(*) AS film_count
        FROM inventory
        WHERE film_id = p_film_id AND store_id = p_from_store_id
        GROUP BY inventory_id;
BEGIN
    SELECT COALESCE(SUM(film_count), 0)
    INTO v_total_available_quantity
    FROM (
        SELECT COUNT(*) AS film_count
        FROM inventory
        WHERE film_id = p_film_id AND store_id = p_from_store_id
        GROUP BY inventory_id
    ) AS temp;

    RAISE NOTICE 'Se desean transferir % unidades de la película %.', p_quantity, p_film_id;
    RAISE NOTICE 'Hay % unidades disponibles para transferir.', v_total_available_quantity;

    IF v_total_available_quantity = v_remaining_quantity THEN
        RAISE NOTICE 'Transferencia de % unidades de la película % a la tienda %.', v_remaining_quantity, p_film_id, p_to_store_id;
        EXIT WHEN v_transferred_quantity = p_quantity;
    ELSE
        -- No enough units to transfer
        RAISE NOTICE 'No hay suficientes películas para completar la transferencia.';
        ROLLBACK;
        EXIT;
    END IF;
    END LOOP;

    -- Cursor closure
    CLOSE v_cursor;

    -- Verify if all units where transfered
    IF v_transferred_quantity

SQL Error:

> ERROR: Cursor "" does not exist
> Where: PL/pgSQL function transfer_inventory(integer,integer,integer,integer) on line 72 in CLOSE

Apparently it's a closing issue with my cursor, but it is being closed after the loop ends. I am new to plpgsql, so my solutions are limited.
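
For context, FETCH and CLOSE only work on a cursor that is currently open; if OPEN never ran (or a FOR loop already closed the cursor automatically), CLOSE fails with an error like this one. A minimal sketch of the explicit pattern, using the v_cursor declared above:

OPEN v_cursor;
LOOP
    FETCH v_cursor INTO v_inventory_id, v_current_quantity;
    EXIT WHEN NOT FOUND;
    -- ... per-row transfer logic goes here ...
END LOOP;
CLOSE v_cursor;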
matuco1998 (1 rep)
May 6, 2024, 04:33 PM • Last activity: Jan 5, 2025, 05:04 PM
0 votes
1 answer
2290 views
Pass 'interval' value to date_trunc function in PostgreSQL procedure
I am fetching the **age** of **two timestamp columns** (**End** and **Start**) and formatting it with **to_char** as **'MM'**, so I have the difference in months between the two timestamp columns. The syntax goes like this:

i = (select to_char(age(End,Start),'MM'));
interval_value = (select i || ' month');

**Also tried:**

interval_value = i || ' Month'

Now I am passing the value of 'i' into another PostgreSQL function, **date_trunc**:

xyz = (select date_trunc('month',Start::date) + interval_value::text::interval);

The data types for the above variables are:

i numeric := 0;
xyz date;
interval_value varchar;

But it doesn't seem to work. Is there an alternate approach I could use here? The main idea is to get the difference between two timestamps in months and then pass that difference into the date_trunc function.
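
A sketch of a simpler route that avoids the text round-trip, assuming i really is a whole number of months: build the interval with make_interval() (available since PostgreSQL 9.4), or multiply interval '1 month' by the number:

xyz := date_trunc('month', Start::date) + make_interval(months => i::int);

-- equivalent without make_interval()
xyz := date_trunc('month', Start::date) + i * interval '1 month';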
Pranjal Kaushik (1 rep)
Mar 18, 2019, 08:28 AM • Last activity: Dec 27, 2024, 07:02 AM
9 votes
4 answers
12093 views
Copy indexes from one table to another
I have a series of ETL jobs in which I create a swap table using CREATE TABLE table1_swap LIKE table1. In order to make the populating of table1_swap faster I do not include the indexes. When I'm finished loading however I need to re-apply those indexes to the newly populated table. These indexes are created outside of the scope of these ETL jobs so I would rather not have to hardcode the CREATE INDEX calls if I don't have to. Is it possible to "transfer" or "copy" a set of indexes from one table to another?
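
Two sketches of how this is commonly handled, assuming table1_swap should end up with exactly the same indexes as table1: either copy them at creation time, or pull the definitions out of pg_indexes after loading and replay them (adjusting the index and table names first, since index names must be unique per schema):

-- option 1: copy index definitions together with the structure
CREATE TABLE table1_swap (LIKE table1 INCLUDING INDEXES);

-- option 2: after loading, list the definitions to replay against table1_swap
SELECT indexdef
FROM pg_indexes
WHERE schemaname = 'public'
  AND tablename  = 'table1';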
Kyle Decot (359 rep)
Nov 2, 2015, 07:38 PM • Last activity: Dec 11, 2024, 05:02 AM
2 votes
0 answers
58 views
Postgres - Extreme query planning times on simple query
On a 200 MB table, fetching entries takes forever due to high query planning times:
EXPLAIN (ANALYZE, BUFFERS, TIMING) 
SELECT id FROM tour_bookings WHERE id = 6311251;
| QUERY PLAN                                                                                                                            |
| ------------------------------------------------------------------------------------------------------------------------------------- |
| Index Only Scan using tour_bookings_pkey on tour_bookings  (cost=0.42..2.64 rows=1 width=8) (actual time=1.841..1.843 rows=1 loops=1) |
|   Index Cond: (id = 6311251)                                                                                                          |
|   Heap Fetches: 1                                                                                                                     |
|   Buffers: shared hit=4 read=3                                                                                                        |
| Planning:                                                                                                                             |
|   Buffers: shared hit=6088 read=17450                                                                                                 | 
| Planning Time: 4429.900 ms                                                                                                            |
| Execution Time: 2.973 ms                                                                                                              |
The following did not help:
- vacuuming
- disabling triggers
- disabling complex constraints
- clustering on the primary key
- analyzing related tables

A duplicated version of the table doesn't have this issue. Buffer access during planning seems excessive. How can I find out what the query planner is actually doing, and why?
devrob (21 rep)
Nov 28, 2024, 11:36 PM • Last activity: Nov 28, 2024, 11:37 PM
3 votes
1 answer
2302 views
Column name as argument for a trigger function
In my Postgres 14 database, I have a function that updates the updated_at field in a table for which the trigger was fired:
CREATE TEMPORARY TABLE types(
    id SMALLINT GENERATED ALWAYS AS IDENTITY,
    type TEXT UNIQUE,
    updated_at TIMESTAMPTZ
);

CREATE OR REPLACE FUNCTION update_column()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = now();
    RETURN NEW;
END;
$$ language 'plpgsql';

CREATE TRIGGER column_was_updated BEFORE INSERT OR UPDATE OF type
ON types FOR EACH ROW EXECUTE PROCEDURE
update_column();

INSERT INTO types (type)
VALUES ('type1'), ('type2');

SELECT * FROM types;
I want to do the following: when calling the function from the trigger, pass a column name as an argument to the function to replace the literal column name updated_at, something like this:
CREATE OR REPLACE FUNCTION update_column()
RETURNS TRIGGER AS $$
BEGIN
    NEW.column_name = now();
    RETURN NEW;
END;
$$ language 'plpgsql';


CREATE TRIGGER column_was_updated BEFORE INSERT OR UPDATE OF type
ON types FOR EACH ROW EXECUTE PROCEDURE
update_column('column_name');
How can I achieve this?
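
Arguments given in CREATE TRIGGER arrive in the trigger function as the text array TG_ARGV; the column itself then has to be set dynamically, for example by round-tripping NEW through jsonb. A sketch, assuming the named column accepts a timestamptz value:

CREATE OR REPLACE FUNCTION update_column()
RETURNS TRIGGER AS $$
BEGIN
    -- TG_ARGV[0] holds the column name passed in CREATE TRIGGER
    NEW := jsonb_populate_record(NEW, jsonb_build_object(TG_ARGV[0], now()));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER column_was_updated BEFORE INSERT OR UPDATE OF type
ON types FOR EACH ROW EXECUTE PROCEDURE
update_column('updated_at');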
fallincode (133 rep)
Oct 26, 2021, 03:59 PM • Last activity: Nov 20, 2024, 01:03 PM