
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
171 views
Filter a SELECT query with predicates passed as single hstore value
For an API I am building, I am supposed to write a function with a single `hstore` argument containing `colname => value` pairs, so that queries may be filtered with a `WHERE` clause containing an arbitrary number of sub-clauses. I currently have the following function definition (not a MWE):
CREATE OR REPLACE FUNCTION api.func( conds HSTORE )
RETURNS TABLE (LIKE api.tab) AS 
$$
BEGIN
	RETURN QUERY
		SELECT * 
		FROM api.tab 
		WHERE -- conds here
        LIMIT 25 ;
END;
$$ LANGUAGE plpgsql;
I am not really sure how to continue. Could someone please give me a hint?
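A minimal sketch of one possible approach (not from the original post or its answers), assuming the hstore keys are exactly the column names of api.tab and that comparing the values as text literals is acceptable; it expands each key/value pair with format(), quoting identifiers with %I and values with %L, and declares the return type as SETOF api.tab rather than the RETURNS TABLE (LIKE ...) syntax above:

CREATE OR REPLACE FUNCTION api.func(conds HSTORE)
RETURNS SETOF api.tab AS
$$
DECLARE
    where_clause text;
BEGIN
    -- each(conds) yields one (key, value) row per hstore pair
    SELECT string_agg(format('%I = %L', key, value), ' AND ')
    INTO   where_clause
    FROM   each(conds);

    RETURN QUERY EXECUTE
        'SELECT * FROM api.tab'
        || COALESCE(' WHERE ' || where_clause, '')   -- no predicates if conds is empty
        || ' LIMIT 25';
END;
$$ LANGUAGE plpgsql;

Because %L renders every value as a quoted literal, columns of non-text types rely on implicit casts from the literal; stricter typing would need the column types looked up from the catalog.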
eslukas (111 rep)
Aug 8, 2022, 11:45 AM • Last activity: Jul 18, 2025, 09:06 PM
0 votes
1 answer
52 views
sp_executesql with ISNUMERIC(@ColumnName) not working with column name passed as parameter
**Question**: Why does example 1 not work while example 2 works?

**SQL Table to check for non-numeric values in varchar column**:

CREATE TABLE #t(Mycol varchar(15))
INSERT #t VALUES ('123.86'),('4a5.67'),('45.68'),('45r8.78')

Query to check the NON-NUMERIC values in the table:

SELECT Mycol from #t WHERE ISNUMERIC(Mycol) = 0

**Output**:

| Mycol   |
|---------|
| 4a5.67  |
| 45r8.78 |

**GOAL**: Achieve the same by using dynamic SQL.

**Example 1**: Did not work. Why? How can we improve this code without declaring variables outside the EXECUTE sp_executesql statement, or is it even possible?

EXECUTE sp_executesql N'SELECT @colName as Mycol from #t WHERE ISNUMERIC(@colName) = 0',
    N'@colName varchar(15)', @colName = N'Mycol';

**Output**:

| Mycol |
|-------|
| Mycol |
| Mycol |
| Mycol |
| Mycol |

**Example 2**: Works, but required more variable declarations.

DECLARE @Qry nvarchar(150), @colName varchar(15) = 'Mycol'
SET @Qry = N'SELECT ' + @colName + ' FROM #t WHERE ISNUMERIC(' + @colName + ') = 0'
EXECUTE sp_executesql @Qry, N'@colName varchar(15)', @colName = N'Mycol';

**Output**:

| Mycol   |
|---------|
| 4a5.67  |
| 45r8.78 |
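For context (an assumption based on how sp_executesql treats parameters, not part of the original post): parameters passed to sp_executesql are always data values, so in example 1 ISNUMERIC(@colName) tests the literal string 'Mycol' rather than the column. An identifier has to be spliced into the statement text, ideally through QUOTENAME(); a minimal sketch:

DECLARE @colName sysname = N'Mycol';
DECLARE @Qry nvarchar(200) =
    N'SELECT ' + QUOTENAME(@colName) +
    N' FROM #t WHERE ISNUMERIC(' + QUOTENAME(@colName) + N') = 0;';
-- no parameter list needed: the column name is part of the statement text itself
EXECUTE sp_executesql @Qry;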
nam (515 rep)
Jul 14, 2025, 08:00 PM • Last activity: Jul 16, 2025, 10:16 PM
2 votes
1 answer
1811 views
How to specify with a parameter a field to extract from a RECORD variable
Here's a table:

CREATE TABLE t_heights (id INT, height REAL);
INSERT INTO t_heights VALUES (1,53.63), (2,45.19), (3,47.06);

Reading through the table rows with a FOR...LOOP and a RECORD type variable like this works:

CREATE OR REPLACE FUNCTION fnct_row_by_row (input_table regclass)
RETURNS VOID
LANGUAGE plpgsql
AS $$
DECLARE
    current_row RECORD;
BEGIN
    FOR current_row IN EXECUTE 'SELECT * FROM '||input_table
    LOOP
        RAISE NOTICE 'field value: %', current_row.height;  -- <<< hardcoded field name
    END LOOP;
END;
$$ ;

Results:

SELECT fnct_row_by_row ('t_heights') ;
NOTICE:  field value: 53.63
NOTICE:  field value: 45.19
NOTICE:  field value: 47.06

However, the field extracted from the RECORD variable needs to have its name hardcoded. **How to specify dynamically the field to be extracted?** The following generates an error:

CREATE OR REPLACE FUNCTION fnct_row_by_row2 (input_table regclass, input_field_name TEXT)
RETURNS VOID
LANGUAGE plpgsql
AS $$
DECLARE
    current_row RECORD;
BEGIN
    FOR current_row IN EXECUTE 'SELECT * FROM '||input_table
    LOOP
        RAISE NOTICE 'specified field: %', current_row.input_field_name;  -- <<< field name from parameter
    END LOOP;
END;
$$ ;

Execution:

SELECT fnct_row_by_row2 ('t_heights','height') ;
ERROR:  record "current_row" has no field "input_field_name"
CONTEXT:  SQL statement "SELECT current_row.input_field_name"
PL/pgSQL function fnct_row_by_row2(regclass,text) line 10 at RAISE
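A minimal sketch of one common workaround (an assumption, not taken from the accepted answer): convert each record to jsonb and look the field up by name, so no per-field dynamic SQL is needed (requires Postgres 9.5+ for to_jsonb()):

CREATE OR REPLACE FUNCTION fnct_row_by_row2 (input_table regclass, input_field_name TEXT)
RETURNS VOID
LANGUAGE plpgsql
AS $$
DECLARE
    current_row RECORD;
BEGIN
    FOR current_row IN EXECUTE format('SELECT * FROM %s', input_table)
    LOOP
        -- to_jsonb(record) ->> 'key' resolves the field name at run time
        RAISE NOTICE 'specified field: %', to_jsonb(current_row) ->> input_field_name;
    END LOOP;
END;
$$ ;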
Sébastien Clément (1825 rep)
Jan 24, 2018, 09:58 PM • Last activity: Jul 7, 2025, 07:05 PM
1 vote
1 answer
163 views
How can I execute Openrowset in the named instance I am currently in?
I am running this openrowset query about jobs on my current server and I see that the result set is not correct. I was expecting a bunch of jobs but I only get one. when I run the following piece of code I can see why:
select [my current server] = @@servername

select *
from openrowset('SQLNCLI','Server=(local);Trusted_Connection=yes;',
'select [my current server] = @@servername') AS a
that gives me the following result (screenshot not shown): `MATHURA` is my server name (machine name), and `MATHURA\SQL2016` is the server name\instance name where I am executing this query. Obviously, when I pass Server=(local); to OPENROWSET it takes that to be my server name, but I am running on the MATHURA\SQL2016 instance. The question is: how can I pass the current named instance to OPENROWSET?
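A minimal sketch of one approach (an assumption, not the accepted answer): OPENROWSET only accepts literal strings, so build the whole statement dynamically and splice in @@SERVERNAME, which already contains the instance name:

DECLARE @sql nvarchar(max) =
    N'SELECT a.* FROM OPENROWSET(''SQLNCLI'',
        ''Server=' + @@SERVERNAME + N';Trusted_Connection=yes;'',
        ''SELECT [my current server] = @@SERVERNAME'') AS a;';
EXEC sp_executesql @sql;   -- requires 'Ad Hoc Distributed Queries' to be enabled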
Marcello Miorelli (17274 rep)
Jun 29, 2025, 04:29 PM • Last activity: Jun 30, 2025, 01:06 AM
1 vote
4 answers
2050 views
T-SQL How to retrieve metadata from SQL proc containing temp tables
I've a T-SQL stored proc running in a Microsoft SQL Server 2019 environment. The proc contains temp tables, dynamic SQL and is 2000+ lines. The proc's output has been verified and tested for completeness and accuracy; re-writing the proc is not an option. Attempts using sp_describe_first_result_set and OPENROWSET to retrieve metadata have failed due to the use of temp tables and dynamic SQL. Research links used: and this. Are there any other approaches to retrieve metadata? Thoughts?
TPV (11 rep)
Mar 26, 2023, 01:24 PM • Last activity: Jun 29, 2025, 04:06 PM
0 votes
1 answer
948 views
Problem calling Postgres function
I wrote this function:
create or replace function field_summaries(gap interval , _tbl anyelement, _col text)                                
returns table(hour text, average numeric, minimum numeric, maximum numeric) as
$func$   
begin
 RETURN QUERY 
  EXECUTE format('select time_bucket($1, time)::text as hour,                                                                                      
                         avg(%I), 
                         min(%I),
                         max(%I)                                                                                                         
                  from %I d                                                                                                         
                  where d in (select device_id from sensors)                                                                                           
                  group by hour', _col, _col, _col, _tbl)
     USING gap; --<< this passes the parameter to the EXECUTE command
END;
$func$ 
language plpgsql
stable;
to which I pass a table name and a column name. I call it like this: select field_summaries('5minutes', NULL::m_09, 'current'); which works fine. I need to call this function from my node.js code, so I need to pass a string as an argument. When I pass "Null::m_09" as a string I get an error. Is there a way to alter the function so that I can call it like this: select field_summaries('5minutes', m_09, 'current'); Right now I get this when I do it:
field_summaries('5minutes', m_09, 'current');
ERROR:  column "m_09" does not exist
LINE 1: select field_summaries('5minutes', m_09, 'current');
                                           ^
Any help is much appreciated.
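A minimal sketch of one way to accept the table name as a plain string (an assumption, not taken from the accepted answer): declare the table parameter as regclass instead of anyelement, so the Node.js driver can simply pass 'm_09'; the WHERE clause below is written against d.device_id, which is an assumption about the schema:

create or replace function field_summaries(gap interval, _tbl regclass, _col text)
returns table(hour text, average numeric, minimum numeric, maximum numeric) as
$func$
begin
 RETURN QUERY
  EXECUTE format('select time_bucket($1, time)::text as hour,
                         avg(%1$I),
                         min(%1$I),
                         max(%1$I)
                  from %2$s d
                  where d.device_id in (select device_id from sensors)
                  group by hour', _col, _tbl)
     USING gap;
END;
$func$
language plpgsql
stable;

-- called with the table name as an ordinary string:
-- select * from field_summaries('5minutes', 'm_09', 'current');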
Hedi Hadhri (3 rep)
Mar 14, 2021, 04:51 PM • Last activity: Jun 23, 2025, 06:09 AM
0 votes
1 answer
228 views
SQL stored procedure to produce custom XML text
I have around 30 tables from which I need to generate a specific XML format, which is consistent across the tables. With my current solution I have a stored procedure for each of the 30 tables where I do: select '' + '' + '' + '' + '' + '' + '' + from dbo.table where the parentheses values are hardcoded for each table. So for example in one table (id column) is CUSTOMER_ID, while in a different table (id column) is STORE_ID, and (fieldname) is for example CUSTOMER_NAME. So they are all hardcoded column names in the brackets and the primary key in . So my stored procedure with all 30 tables is around 1000 lines long, as every column and primary key is hardcoded for each table. I'm certain there is a dynamic way to achieve this functionality where columns/rows are somehow iterated over, but I can't figure it out.
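A minimal sketch of the dynamic direction (the element names and the exact XML shape here are placeholders, since the original format markup is not visible above): read the primary-key column and the column list from the catalog views and build one SELECT ... FOR XML statement per table (STRING_AGG assumes SQL Server 2017+):

CREATE PROCEDURE dbo.ExportTableXml @TableName sysname
AS
BEGIN
    DECLARE @key sysname, @cols nvarchar(max), @sql nvarchar(max);

    -- first column of the primary key, used as the id element
    SELECT @key = c.name
    FROM sys.indexes i
    JOIN sys.index_columns ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
    JOIN sys.columns c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
    WHERE i.object_id = OBJECT_ID(@TableName) AND i.is_primary_key = 1 AND ic.key_ordinal = 1;

    -- every column of the table
    SELECT @cols = STRING_AGG(CONVERT(nvarchar(max), QUOTENAME(name)), ', ')
    FROM sys.columns
    WHERE object_id = OBJECT_ID(@TableName);

    SET @sql = N'SELECT ' + QUOTENAME(@key) + N' AS [id], ' + @cols +
               N' FROM ' + QUOTENAME(@TableName) +
               N' FOR XML PATH(''row''), ROOT(''rows'');';
    EXEC sp_executesql @sql;
END;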
Anton (101 rep)
Jan 21, 2020, 01:15 PM • Last activity: Jun 11, 2025, 06:04 PM
2 votes
1 answer
3533 views
Postgresql return results of dynamic SQL
I'm using Postgresql, version 12. My goal is to create a function, or stored procedure, which generates a dynamic select which is returned by the function or stored procedure. The difficulty is that the form of the table which is returned is not known in advance. This is a follow-up question to a previous thread, in which I learned how to dynamically create a table within a function with a lot of help from the StackExchange community. But now I realize it would be much nicer, instead of creating a new table inside the function, to just return the result of the select. The actual problem I need to solve is as follows (this is repeated from the previous thread): Say I have a table with the following information: Table nominal_dollars:
year       GDP        Dividends
2000      100              20
2001      110              30
2002      120              40
In the above table, the numbers are not adjusted for inflation. A common procedure I want to do is create a new table, in which the numbers are adjusted for inflation. This will mean I need to join the above table with a new table, the consumer price inflation (CPI), which has the following form Table cpi:
year       cpi_adjustment
2000      1            
2001      2             
2002      3
This will allow the creation of a new table, in which all of the numbers are adjusted for inflation, i.e. the original numbers are multiplied by the cpi adjustment: Table nominal_dollars_adj:
year       GDP        Dividends
-----------------------------------------
2000      100              20
2001      220              60
2002      360              120
Where GDP for the new table equals GDP from the original table * cpi_adjustment, and the same for dividends. Now, I want to do this CPI adjustment procedure for many different tables, which may have different numbers of columns. The answer which was given by Erwin is really nice (see below), and generates a new table dynamically. But my question remains: instead of creating the new table, can I return the results in a select?
CREATE OR REPLACE FUNCTION f_results_inflation_adjusted(_tbl text)
  RETURNS void LANGUAGE plpgsql AS
$func$
BEGIN
   -- RAISE NOTICE '%', (  -- use instead of EXECUTE for debugging
   EXECUTE (
   SELECT format('DROP TABLE IF EXISTS public.%1$I;
                  CREATE TABLE public.%1$I AS SELECT %3$s
                  FROM public.%2$I t JOIN public.cpi c USING (year)'
               , _tbl || '_adj'     -- %1
               , _tbl               -- %2
               , string_agg(        -- %3
                  CASE a.attname
                     WHEN 'id'   THEN 't.id'    -- case sensitive!
                     WHEN 'year' THEN 't.year'
                     ELSE format('t.%1$I * c.cpi_adjustment AS %1$I', a.attname)
                  END, ', ' ORDER BY a.attnum
                 )
          )
   FROM   pg_catalog.pg_attribute a
   JOIN   pg_catalog.pg_class c ON c.oid = a.attrelid
   WHERE  c.relname = _tbl
   AND    c.relnamespace = 'public'::regnamespace
   AND    NOT a.attisdropped
   AND    a.attnum > 0
   );
END
$func$;
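A minimal sketch of one way to hand back a result set whose shape is not known in advance (an assumption, not the accepted answer): open a refcursor on the same dynamically built SELECT and let the caller FETCH it within the same transaction:

CREATE OR REPLACE FUNCTION f_results_inflation_adjusted_cur(_tbl text)
  RETURNS refcursor LANGUAGE plpgsql AS
$func$
DECLARE
   _cur refcursor := 'adjusted_cur';   -- cursor name handed back to the caller
BEGIN
   OPEN _cur FOR EXECUTE (
   SELECT format('SELECT %s FROM public.%I t JOIN public.cpi c USING (year)'
               , string_agg(
                  CASE a.attname
                     WHEN 'id'   THEN 't.id'
                     WHEN 'year' THEN 't.year'
                     ELSE format('t.%1$I * c.cpi_adjustment AS %1$I', a.attname)
                  END, ', ' ORDER BY a.attnum)
               , _tbl)
   FROM   pg_catalog.pg_attribute a
   JOIN   pg_catalog.pg_class c ON c.oid = a.attrelid
   WHERE  c.relname = _tbl
   AND    c.relnamespace = 'public'::regnamespace
   AND    NOT a.attisdropped
   AND    a.attnum > 0
   );
   RETURN _cur;
END
$func$;

Usage, inside one transaction:

BEGIN;
SELECT f_results_inflation_adjusted_cur('nominal_dollars');
FETCH ALL IN adjusted_cur;
COMMIT;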
Jacob (35 rep)
Nov 5, 2019, 03:18 AM • Last activity: Jun 6, 2025, 06:04 PM
1 vote
1 answer
2452 views
Use variable as column name
I'm trying to create a trigger in PostgreSQL 14:
create or replace function add_stats_to_player() returns trigger as $add_stats_to_player$
    declare
        equipment_id int := new.player_equipment_armor['armor_id'];
        equipment_stats varchar[] := array(select jsonb_object_keys(armor_stats['stats']) from equipments_armor where armor_id = equipment_id);
        equipment_stat varchar;

    begin
        raise notice '%', equipment_id;
        foreach equipment_stat in array equipment_stats loop
                if old.player_equipment_armor['stats'][equipment_stat] is not null then
                     update players set equipment_stat = equipment_stat - old.player_equipment_armor['stats'][equipment_stat] + new.player_equipment_armor['stats'][equipment_stat] where player_id = new.player_id;
                end if;
        end loop;
        return new;
    END;
$add_stats_to_player$ language plpgsql;

create trigger add_stats_to_player after insert or update of player_equipment_armor on players
    for each row WHEN (pg_trigger_depth() < 1) execute function add_stats_to_player();
Can I make a column name from the variable equipment_stat, or is that a bad idea? **Update**: Here's how I did it:
create or replace function add_stats_to_player() returns trigger as $add_stats_to_player$
    declare
        equipment_id int := new.player_equipment_armor['armor_id'];
        equipment_stats varchar[] := array(select jsonb_object_keys(armor_stats['stats']) from equipments_armor where armor_id = equipment_id);
        equipment_stat varchar;
    begin
        if (TG_OP = 'UPDATE') then
            foreach equipment_stat in array equipment_stats loop
                    if old.player_equipment_armor['stats'][equipment_stat] is not null then
                       execute 'update players set ' || equipment_stat || ' = ' || equipment_stat || ' - ' || old.player_equipment_armor['stats'][equipment_stat] || '+' || new.player_equipment_armor['stats'][equipment_stat] || ' where player_id = ' || new.player_id;
                    else
                       execute 'update players set ' || equipment_stat || ' = ' || new.player_equipment_armor['stats'][equipment_stat] || ' where player_id = ' || new.player_id;
                    end if;
            end loop;
        elseif  (TG_OP = 'INSERT') then
            foreach equipment_stat in array equipment_stats loop
                execute 'update players set ' || equipment_stat || ' = ' || new.player_equipment_armor['stats'][equipment_stat] || ' where player_id = ' || new.player_id;
            end loop;
        end if;
        return new;
    end;
$add_stats_to_player$ language plpgsql;

create trigger add_stats_to_player after insert or update of player_equipment_armor on players
    for each row WHEN (pg_trigger_depth() < 1) execute procedure add_stats_to_player();
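A small variation on the same idea (an assumption, not from the original post): building the statement with format() instead of string concatenation quotes the column name with %I and passes the values as parameters, assuming the stats values are plain JSON numbers:

execute format('update players set %I = %I - $1 + $2 where player_id = $3',
               equipment_stat, equipment_stat)
  using (old.player_equipment_armor['stats'][equipment_stat])::text::numeric,
        (new.player_equipment_armor['stats'][equipment_stat])::text::numeric,
        new.player_id;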
Sevas (11 rep)
May 7, 2022, 01:34 PM • Last activity: May 9, 2025, 07:04 PM
1 vote
1 answer
1855 views
Executing an oracle variable string as I do in SQL Server
In SQL Server I can do something like:

DECLARE @VAR VARCHAR(100);
SELECT @VAR = 'SELECT * FROM SYS.DATABASES'
EXECUTE(@VAR)

How can I execute a string that I've created inside a variable in Oracle? This is the query:

DECLARE
    BUSCACOLUNA VARCHAR(2000);
BEGIN
    BUSCACOLUNA :=
       'SELECT '' SELECT '' ||
               LTRIM(listagg( '' , '' || ''"'' || T1.COLUMN_NAME || ''"'' || '' AS '' || ''"'' || T2.DESCRICAO || ''"'' )
                     within group ( order by t1.column_name ), '' , '') ||
               '' FROM TABELA_ENTRADA ''
        FROM ALL_TAB_COLUMNS T1
        INNER JOIN TABBASE T2 ON T1.COLUMN_NAME = T2.NO_COL_TABBASE
        WHERE T1.TABLE_NAME = ''TABELA_ENTRADA''
        AND T2.CD_CONTEUDO_ARQUIVO = ''X''';
END;

The result of that dynamic SQL is something like:

SELECT C1 AS B1, C2 AS B2 ... FROM TABLE

And I would like to execute this SELECT by executing the variable. I tried with a loop and with EXECUTE IMMEDIATE, but the query only returns:

> anonymous block completed

So I can understand it's working. If it were an UPDATE or INSERT it would work, but I would like the result of that SELECT.
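A minimal sketch of one way to get rows back (an assumption about the intent, not the accepted answer): run the generator with EXECUTE IMMEDIATE ... INTO to capture the generated SELECT text, then open a SYS_REFCURSOR on it; DBMS_SQL.RETURN_RESULT (Oracle 12c+) streams the rows back to the client:

DECLARE
    BUSCACOLUNA  VARCHAR2(2000);
    FINAL_SELECT VARCHAR2(4000);
    RESULT_CUR   SYS_REFCURSOR;
BEGIN
    BUSCACOLUNA := '... the generator query shown above ...';

    -- the generator returns a single row: the SELECT we actually want to run
    EXECUTE IMMEDIATE BUSCACOLUNA INTO FINAL_SELECT;

    OPEN RESULT_CUR FOR FINAL_SELECT;
    DBMS_SQL.RETURN_RESULT(RESULT_CUR);
END;
/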
Racer SQL (7546 rep)
Jul 1, 2019, 09:13 PM • Last activity: May 7, 2025, 07:08 PM
0 votes
1 answer
386 views
Run SSIS Package but the package path will be decided runtime
I have to import 16 Excel files into 16 tables through an Agent job. The job will process the data further. I have created an SSIS package for each file. I don't want to create 16 jobs that do the same steps. My idea:

1. Pass the SSIS package to the job (loop over the package paths in a procedure and call the job).
2. Once the data is imported, on success/failure different steps will be processed.
3. On success, the next package will be processed (step 1).

Can this be done without BULK INSERT?
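A minimal sketch of one way to drive this from a single procedure (assumes the packages are deployed to the SSISDB catalog; the folder, project and package names are placeholders): a loop over a package list can call one helper that starts whichever package the current row names:

CREATE PROCEDURE dbo.RunCataloguePackage
    @FolderName  sysname,
    @ProjectName sysname,
    @PackageName sysname        -- e.g. N'ImportFile01.dtsx'
AS
BEGIN
    DECLARE @execution_id bigint;

    EXEC SSISDB.catalog.create_execution
         @folder_name     = @FolderName,
         @project_name    = @ProjectName,
         @package_name    = @PackageName,
         @use32bitruntime = 0,
         @execution_id    = @execution_id OUTPUT;

    -- SYNCHRONIZED = 1 makes the call wait, so the caller can branch on success/failure
    EXEC SSISDB.catalog.set_execution_parameter_value
         @execution_id,
         @object_type     = 50,
         @parameter_name  = N'SYNCHRONIZED',
         @parameter_value = 1;

    EXEC SSISDB.catalog.start_execution @execution_id;
END;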
Lakshmi R (119 rep)
Feb 6, 2023, 10:30 AM • Last activity: Apr 23, 2025, 02:05 AM
1 vote
3 answers
834 views
Copy table structure excluding nullability into local temporary table
How can I copy the structure of a table into a temp table (or table variable) **excluding the nullability** of each column? Mainly I need to copy the column names and data types, but not nullability, keys, indexes, etc. I'll also note that this must be done programmatically within a procedure, so I can't just generate a create table script and modify it. The best I've come up with has been:

select top 0 * into #MyTempTable from dbo.MyTable

which copies the nullability of each field, thus defeating the purpose for me. I've also played around with dynamic SQL, extracting the column data from INFORMATION_SCHEMA.COLUMNS to build the create table statement, but the issue there is that the temp table goes out of scope after the dynamic SQL statement executes and control is returned to the "main" procedure. (And I'd rather not jam the rest of the procedure into the dynamic SQL statement.) If there were a way to return the temp table from the exec (@Sql) statement or keep it in scope somehow, it might work, but I don't know that there's a way to do that. Related questions:

- Copy complete structure of a table
- Create table structure from existing table
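A minimal sketch of one workaround (an assumption, not the accepted answer): create the temp table in the outer scope with SELECT TOP 0 ... INTO, so it stays in scope, then generate ALTER TABLE ... ALTER COLUMN ... NULL statements from tempdb's catalog to drop the NOT NULL constraints (STRING_AGG assumes SQL Server 2017+; type options beyond length/precision are ignored):

SELECT TOP 0 * INTO #MyTempTable FROM dbo.MyTable;   -- created in the outer scope

DECLARE @Sql nvarchar(max);

SELECT @Sql = STRING_AGG(CONVERT(nvarchar(max),
         N'ALTER TABLE #MyTempTable ALTER COLUMN ' + QUOTENAME(c.name) + N' ' + t.name
         + CASE WHEN t.name IN (N'varchar', N'char', N'varbinary', N'binary', N'nvarchar', N'nchar')
                THEN N'(' + CASE WHEN c.max_length = -1 THEN N'max'
                                 WHEN t.name IN (N'nvarchar', N'nchar') THEN CAST(c.max_length / 2 AS nvarchar(10))
                                 ELSE CAST(c.max_length AS nvarchar(10)) END + N')'
                WHEN t.name IN (N'decimal', N'numeric')
                THEN N'(' + CAST(c.precision AS nvarchar(10)) + N',' + CAST(c.scale AS nvarchar(10)) + N')'
                ELSE N'' END
         + N' NULL;'), N' ')
FROM   tempdb.sys.columns c
JOIN   tempdb.sys.types   t ON t.user_type_id = c.user_type_id
WHERE  c.object_id = OBJECT_ID(N'tempdb..#MyTempTable');

EXEC sp_executesql @Sql;   -- #MyTempTable is still in scope for the rest of the procedure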
neizan (113 rep)
Nov 12, 2015, 02:04 PM • Last activity: Apr 7, 2025, 12:46 PM
4 votes
2 answers
265 views
Dynamic NVARCHAR(MAX) Being Truncated When 'inlining' a NVARCHAR(MAX) Parameter
The @SQL variable in the statement below is always truncated when I "inline" a variable. If I send the variable in and print it, it does not truncate to 4000; however, I can't bind the sent-in variable into a USE [@DatabaseName] statement. How to get around this?

**This works, however, the database name is hardcoded in the USE statement**

DECLARE @DatabaseName NVARCHAR(MAX) = 'MyDatabase'
DECLARE @Sql NVARCHAR(MAX) = '
USE [MyDatabase]
PRINT @DatabaseName
SELECT ...<20,000 more chars
'
DECLARE @params NVARCHAR(MAX) = '@DatabaseName NVARCHAR(MAX)'
EXEC sp_executesql @SQL, @params, @DatabaseName

**This throws an exception with the message '*Database '@DatabaseName' does not exist. Make sure that the name is entered correctly.*'**

DECLARE @DatabaseName NVARCHAR(MAX) = 'MyDatabase'
DECLARE @Sql NVARCHAR(MAX) = '
USE [@DatabaseName]
PRINT @DatabaseName
SELECT ...<20,000 more chars
'
DECLARE @params NVARCHAR(MAX) = '@DatabaseName NVARCHAR(MAX)'
EXEC sp_executesql @SQL, @params, @DatabaseName

**This truncates the query to NVARCHAR(4000)**

DECLARE @DatabaseName NVARCHAR(MAX) = 'MyDatabase'
DECLARE @Sql NVARCHAR(MAX) = '
USE ['+@DatabaseName+']
PRINT @DatabaseName
SELECT ...<20,000 more chars
'
DECLARE @params NVARCHAR(MAX) = '@DatabaseName NVARCHAR(MAX)'
EXEC sp_executesql @SQL, @params, @DatabaseName

**Also tried using N'string' without success**

DECLARE @DatabaseName NVARCHAR(MAX) = N'MyDatabase'
DECLARE @Sql NVARCHAR(MAX) = N'
USE ['+@DatabaseName+']
PRINT @DatabaseName
SELECT ...<20,000 more chars
'
DECLARE @params NVARCHAR(MAX) = N'@DatabaseName NVARCHAR(MAX)'
EXEC sp_executesql @SQL, @params, @DatabaseName

I think I understand why USE [@Variable] does not work, whereas I don't know how to explain why USE ['+@Variable+'] is being truncated down to 4000 when both vars are MAX, unless SQL Server is casting down in sp_executesql. Anyway, does anyone know of a workaround to make this work? I basically want to dynamically target any database with a SQL statement larger than 4000 nchars.
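A workaround that is often suggested for this kind of truncation (a sketch under the assumption that the statement can be assembled in pieces): append each part to the NVARCHAR(MAX) variable in separate assignments and quote the database name with QUOTENAME, so no single concatenated expression carries the 20,000-character literal:

DECLARE @DatabaseName sysname = N'MyDatabase';
DECLARE @Sql nvarchar(max);

SET @Sql  = N'USE ' + QUOTENAME(@DatabaseName) + N'; ';
SET @Sql += N'PRINT @DatabaseName; ';
SET @Sql += N'SELECT ...';            -- the remaining 20,000+ characters appended here

DECLARE @params nvarchar(max) = N'@DatabaseName sysname';
EXEC sp_executesql @Sql, @params, @DatabaseName = @DatabaseName;

SELECT LEN(@Sql) AS sql_length;   -- sanity check that nothing was cut at 4000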
Ross Bush (683 rep)
Mar 27, 2025, 12:56 PM • Last activity: Mar 27, 2025, 02:16 PM
4 votes
2 answers
2075 views
Update query in a trigger
I have a configuration table:

Table: `refdatecol`

    id           int(11)
    startdatecol varchar(100)
    enddatecol   varchar(100)
    datecolTable varchar(100)

This table holds the user-configured **column names and table names**. I am building a query at run time using the data from `refdatecol` and updating another table, `project_wbs`. I am using dynamic SQL. This works fine inside the procedure. The problem is: I want to fetch the current date values into `project_wbs` on *insert* or *update*. Naturally, I thought of using a trigger. But triggers do not support dynamic SQL in MySQL. So I thought of calling the procedure from the trigger. Then I learnt that is strictly a no-no. Is there any way I can achieve this? I am also open to considering a complete redesign of these *user-configurable dates*.

**Update**

Here is the stored procedure:

create procedure sp_project_date (_id int)
begin
  set @_startdate = '';
  set @_enddate = '';
  set @_projectId = (select project_wbs.proj_projectId from project_wbs project_wbs where id = _id);
  set @q = (select concat('select ', startdatecol, ' , ', enddatecol,
                          ' into @_startdate, @_enddate from ', datecolTable,
                          ' where project_ProjectId = ', @_projectId) as 'query'
            from refdatecol
            where id = (select p.cogsDateId from project p where p.projectId = @_projectId));
  prepare stmt from @q;
  execute stmt;
  UPDATE project_wbs
  SET project_startdate = @_startdate,
      project_enddate = @_enddate
  WHERE id = _id;
  DEALLOCATE PREPARE stmt;
end;
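A minimal sketch of one workaround (assumptions not in the original post: a delay of up to a minute is acceptable, and project_wbs has a last_modified timestamp column): since MySQL allows dynamic SQL in procedures but not in triggers, a scheduled EVENT can call the existing procedure for recently changed rows:

-- enable the scheduler once: SET GLOBAL event_scheduler = ON;
DELIMITER $$
CREATE EVENT ev_refresh_project_dates
ON SCHEDULE EVERY 1 MINUTE
DO
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_id INT;
  DECLARE cur CURSOR FOR
      SELECT id FROM project_wbs
      WHERE last_modified >= NOW() - INTERVAL 1 MINUTE;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO v_id;
    IF done THEN LEAVE read_loop; END IF;
    CALL sp_project_date(v_id);   -- the procedure can use PREPARE/EXECUTE freely here
  END LOOP;
  CLOSE cur;
END$$
DELIMITER ;

Another option is to regenerate the trigger body from refdatecol at configuration time (from the application), since the configured column names only change when the configuration changes.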
tempusfugit (141 rep)
Aug 8, 2014, 02:09 PM • Last activity: Feb 19, 2025, 05:05 AM
0 votes
1 answer
157 views
Need to modify Stored Procedure to create table in another database
We have a SQL database that is a secondary replica in an Always On Availability Group. As a result, the database is always read-only, and therefore I can't execute the stored procedure below, because it attempts to create a new table. Basically I get the error message:

Failed to update database xxxxxx because the database is read-only.

It was suggested that a workaround would be to modify the procedure to SELECT ... INTO a table in a different database (one that's not read-only / not part of an Always On Availability Group) that lives on the same server. E.g.:

INTO SomeOtherDatabase.' + QUOTENAME(@Domain) + '.' + QUOTENAME(@DeltaTable) + '

Can someone please take a look at my code and help modify it? I tried the following:

INTO tempdb.dbo.@DeltaTable' + QUOTENAME(@Domain) + '.' + QUOTENAME(@DeltaTable) + '

The above didn't work. The full procedure is as follows:

CREATE PROCEDURE dbo.GenerateDeltaTable
    @Domain VARCHAR(100),
    @TableName VARCHAR(100),
    @DeltaTable VARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @sql NVARCHAR(MAX);

    -- Construct dynamic SQL for dropping and creating the target table
    SET @sql = '
    IF OBJECT_ID(''' + QUOTENAME(@Domain) + '.' + QUOTENAME(@DeltaTable) + ''', ''U'') IS NOT NULL
        DROP TABLE ' + QUOTENAME(@Domain) + '.' + QUOTENAME(@DeltaTable) + ';

    SELECT T.*,
           LOWER(CONVERT(VARCHAR(64), HASHBYTES(''SHA2_256'',
               (SELECT T.* FOR JSON PATH, WITHOUT_ARRAY_WRAPPER, INCLUDE_NULL_VALUES)), 2)) AS signature
    INTO ' + QUOTENAME(@Domain) + '.' + QUOTENAME(@DeltaTable) + '
    FROM ' + QUOTENAME(@Domain) + '.' + QUOTENAME(@TableName) + ' AS T;';

    -- Execute the constructed SQL
    EXEC sp_executesql @sql;
END;

Any thoughts on how to create the table from the stored procedure in another database?
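A minimal sketch of the suggested change (assumes a writable database named SomeOtherDatabase on the same instance that already contains a schema matching @Domain): build the three-part target name once and use it for both the DROP and the INTO, while the FROM still reads from the read-only replica's table:

-- inside dbo.GenerateDeltaTable, replacing the two target references
DECLARE @Target nvarchar(400) =
    QUOTENAME(N'SomeOtherDatabase') + N'.' + QUOTENAME(@Domain) + N'.' + QUOTENAME(@DeltaTable);

SET @sql = N'
IF OBJECT_ID(''' + @Target + N''', ''U'') IS NOT NULL
    DROP TABLE ' + @Target + N';

SELECT T.*,
       LOWER(CONVERT(VARCHAR(64), HASHBYTES(''SHA2_256'',
           (SELECT T.* FOR JSON PATH, WITHOUT_ARRAY_WRAPPER, INCLUDE_NULL_VALUES)), 2)) AS signature
INTO  ' + @Target + N'
FROM  ' + QUOTENAME(@Domain) + N'.' + QUOTENAME(@TableName) + N' AS T;';

EXEC sp_executesql @sql;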
Patterson (123 rep)
Jan 19, 2025, 11:59 AM • Last activity: Jan 19, 2025, 03:03 PM
0 votes
1 answer
539 views
Comparing multiple tables on two databases using dynamic PL/SQL
The issue I'm having is that I have a block of dynamic SQL that I can't get the results from. Here's the SQL:

CREATE OR REPLACE PROCEDURE GGCHECKER IS

cursor c1 is
SELECT TABLE_NAME, TABLE_OWNER, PERCENTAGE_CHECK, LAST_UPDATED, PRIMARY_KEY
FROM system.TEMP_TABLE;

CTABLE_NAME VARCHAR2(50);
CTABLE_OWNER VARCHAR2(50);
CPERCENTAGE_CHECK VARCHAR2(50);
LU_CTABLE VARCHAR2(50);
PK_CTABLE VARCHAR2(50);
var2 varchar2(4000);
var3 varchar2(4000);
var4 varchar2(4000);
var5 varchar2(4000);

BEGIN
OPEN C1;
LOOP
FETCH C1 INTO CTABLE_NAME, CTABLE_OWNER, CPERCENTAGE_CHECK, LU_CTABLE, PK_CTABLE;

var2 := 'CREATE TABLE system.COMPARISON_RANDOM AS (
           SELECT ' ||PK_CTABLE|| '
           FROM ' ||CTABLE_OWNER|| '.' ||CTABLE_NAME|| ' SAMPLE(' ||CPERCENTAGE_CHECK|| ')
           WHERE ' ||LU_CTABLE|| ' < SYSDATE -1 )';
EXECUTE IMMEDIATE var2;

VAR5 := ' SELECT '|| PK_CTABLE || '
          FROM (
            ( SELECT * FROM ' || CTABLE_OWNER || '.' ||CTABLE_NAME|| '
              WHERE ' || PK_CTABLE || ' IN ( SELECT ' ||PK_CTABLE|| ' FROM system.COMPARISON_RANDOM )
              MINUS
              SELECT * FROM ' ||CTABLE_OWNER|| '.' ||CTABLE_NAME|| '@DBLINK
              WHERE ' ||PK_CTABLE|| ' IN ( SELECT ' ||PK_CTABLE|| ' FROM system.COMPARISON_RANDOM ) )
            UNION ALL
            ( SELECT * FROM ' ||CTABLE_OWNER|| '.' ||CTABLE_NAME|| '@DBLINK
              WHERE ' ||PK_CTABLE|| ' IN ( SELECT ' ||PK_CTABLE|| ' FROM system.COMPARISON_RANDOM )
              MINUS
              SELECT * FROM ' ||CTABLE_OWNER|| '.' ||CTABLE_NAME|| '
              WHERE ' ||PK_CTABLE|| ' IN ( SELECT ' ||PK_CTABLE|| ' FROM system.COMPARISON_RANDOM ) )
          )';

var3 := 'DROP TABLE system.COMPARISON_RANDOM';
execute IMMEDIATE var3;

EXIT WHEN c1%NOTFOUND;
END LOOP;
CLOSE C1;

var4 := 'DROP TABLE system.TEMP_TABLE';
execute IMMEDIATE var4;
end;
/

So because it has to go through a number of tables in a number of schemas on each database, all the SQL for the comparison of the data has to be dynamic. I'm looking to get the results from the query in the variable var5. The issue is that I also need the schema and table name for the IDs that are output, so the data is useful to me. Ideal output would be something like this:

| SCHEMA_NAME | TABLE_NAME | COL_ID     |
|-------------|------------|------------|
| ODD         | MIKE       | 1234567890 |

Thanks in advance!
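A minimal sketch of one way to capture those rows together with the schema and table name (an assumption, not the accepted answer): create a small results table once, and inside the loop wrap var5 in an INSERT that also records the owner and table as literals:

-- one-off results table (hypothetical)
CREATE TABLE system.GGCHECK_RESULTS (
    schema_name VARCHAR2(128),
    table_name  VARCHAR2(128),
    col_id      VARCHAR2(400)
);

-- inside the loop, after VAR5 has been built exactly as above:
EXECUTE IMMEDIATE
   'INSERT INTO system.GGCHECK_RESULTS (schema_name, table_name, col_id)
    SELECT ''' || CTABLE_OWNER || ''', ''' || CTABLE_NAME || ''', TO_CHAR(' || PK_CTABLE || ')
    FROM (' || VAR5 || ')';
COMMIT;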
Michael Oddie (124 rep)
Oct 9, 2017, 10:24 AM • Last activity: Dec 30, 2024, 12:03 AM
9 votes
4 answers
12093 views
Copy indexes from one table to another
I have a series of ETL jobs in which I create a swap table using CREATE TABLE table1_swap LIKE table1. In order to make the populating of table1_swap faster I do not include the indexes. When I'm finished loading however I need to re-apply those indexes to the newly populated table. These indexes are created outside of the scope of these ETL jobs so I would rather not have to hardcode the CREATE INDEX calls if I don't have to. Is it possible to "transfer" or "copy" a set of indexes from one table to another?
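A minimal sketch of one way to script them (an assumption, not the accepted answer): read information_schema.statistics for the source table and generate one ALTER TABLE ... ADD ... statement per index; prefix lengths, index types and other options are ignored here:

SELECT CONCAT(
         'ALTER TABLE table1_swap ADD ',
         IF(index_name = 'PRIMARY', 'PRIMARY KEY',
            CONCAT(IF(non_unique = 0, 'UNIQUE ', ''), 'INDEX ', index_name)),
         ' (', GROUP_CONCAT(column_name ORDER BY seq_in_index SEPARATOR ', '), ');') AS ddl
FROM   information_schema.statistics
WHERE  table_schema = DATABASE()
  AND  table_name   = 'table1'
GROUP  BY index_name, non_unique;

Each generated statement can then be run through PREPARE/EXECUTE once the swap table has been loaded.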
Kyle Decot (359 rep)
Nov 2, 2015, 07:38 PM • Last activity: Dec 11, 2024, 05:02 AM
1 vote
2 answers
1488 views
INSERT INTO... SELECT over a LinkedServer in smaller batches
I've written the following code to move data from one database table to another table (over a linked server):

SET NOCOUNT ON

DECLARE @ApplicationURL varchar(50),
        @LinkedServer nvarchar(50),
        @DatabaseName varchar(50)

SET @ApplicationURL = 'Test'
SET @LinkedServer = 'ABC123'
SET @DatabaseName = 'Test'

/*--------------------------------------------------------------------
  Table: Notifications
--------------------------------------------------------------------*/
EXECUTE (
'INSERT INTO dbo.Notifications
 (
    [Subject],
    [Body],
    [PriorityId],
    [StartDate],
    [EndDate],
    [IsActive],
    [UserId],
    [SourceNotificationId]
 )
 SELECT
    [Subject],
    [Body],
    [PriorityId],
    [StartDate],
    [EndDate],
    n.[IsActive],
    [DS.DataMigration].[Migration].[ufnGetNewUserId](n.[UserId]),
    [NotificationId]
 FROM ['+ @LinkedServer +'].['+@DatabaseName+'].[dbo].[Notifications] n
 JOIN ['+ @LinkedServer +'].['+@DatabaseName+'].[dbo].[Users] u ON n.UserId = u.UserId
 JOIN ['+ @LinkedServer +'].['+@DatabaseName+'].[dbo].[Applications] a ON u.ApplicationId = a.ApplicationId
 WHERE A.ApplicationURL = ''' + @ApplicationURL + ''';')

This works as expected; however, some tables are likely to have over a million rows, and I do not want to try to move them all at the same time in one transaction. In my investigation I found this post: https://dba.stackexchange.com/questions/86517/breaking-a-big-insert-query-into-smaller-ones I would like to use this method, but I've been unable to find out how to implement it with dynamic SQL. Has anybody done something like this before? Thanks!
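A minimal sketch of a batched variation (assumptions: NotificationId is an increasing key and the joins stay exactly as above; this is not taken from the linked post): reuse the variables declared above, loop over key ranges, and run the same dynamic INSERT ... SELECT once per range so each batch is its own transaction:

DECLARE @BatchSize bigint = 50000,
        @MinId bigint, @MaxId bigint, @From bigint,
        @sql nvarchar(max);

SET @sql = N'SELECT @lo = MIN(NotificationId), @hi = MAX(NotificationId)
             FROM [' + @LinkedServer + N'].[' + @DatabaseName + N'].[dbo].[Notifications];';
EXEC sp_executesql @sql, N'@lo bigint OUTPUT, @hi bigint OUTPUT',
     @lo = @MinId OUTPUT, @hi = @MaxId OUTPUT;

SET @From = @MinId;
WHILE @From <= @MaxId
BEGIN
    SET @sql = N'
    INSERT INTO dbo.Notifications
        ([Subject], [Body], [PriorityId], [StartDate], [EndDate], [IsActive], [UserId], [SourceNotificationId])
    SELECT [Subject], [Body], [PriorityId], [StartDate], [EndDate], n.[IsActive],
           [DS.DataMigration].[Migration].[ufnGetNewUserId](n.[UserId]), [NotificationId]
    FROM [' + @LinkedServer + N'].[' + @DatabaseName + N'].[dbo].[Notifications] n
    JOIN [' + @LinkedServer + N'].[' + @DatabaseName + N'].[dbo].[Users] u ON n.UserId = u.UserId
    JOIN [' + @LinkedServer + N'].[' + @DatabaseName + N'].[dbo].[Applications] a ON u.ApplicationId = a.ApplicationId
    WHERE a.ApplicationURL = @ApplicationURL
      AND n.NotificationId BETWEEN @From AND @From + @BatchSize - 1;';

    EXEC sp_executesql @sql,
         N'@ApplicationURL varchar(50), @From bigint, @BatchSize bigint',
         @ApplicationURL = @ApplicationURL, @From = @From, @BatchSize = @BatchSize;

    SET @From = @From + @BatchSize;
END;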
Tom.Wheater (193 rep)
Jan 3, 2020, 12:12 PM • Last activity: Dec 9, 2024, 05:07 AM
0 votes
1 answer
161 views
Why can I not capture dynamic SQL called with EXEC using the sqlserver.sql_batch_completed Extended Event?
It is my understanding that dynamic SQL is part of a batch. Yet, when I listen for `sqlserver.sql_batch_completed` as follows:
CREATE EVENT SESSION [CaptureBatch] ON SERVER 
ADD EVENT sqlserver.sql_batch_completed(
    ACTION(sqlserver.client_app_name,sqlserver.client_hostname,sqlserver.database_id,sqlserver.database_name,sqlserver.query_hash,sqlserver.query_plan_hash,sqlserver.sql_text)
    WHERE ([sqlserver].[database_id]>(4))) 
ADD TARGET package0.event_file(SET filename=N'CaptureBatch',max_rollover_files=(0))
WITH (STARTUP_STATE=ON);

GO

ALTER EVENT SESSION [CaptureBatch] ON SERVER 
STATE = START;
I cannot see any calls to EXEC that also call EXEC. For example, the final line here does not show in my Extended Event's log.
CREATE PROCEDURE [TestThis]
AS
BEGIN
	SELECT 'Test';
END

GO

EXEC (N'EXEC TestThis');
Why is this? Are these kinds of EXEC not considered a batch or am I missing something? I cannot seem to find any extensive documentation on this event, so forgive me if I have missed some.
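One thing worth checking (an assumption based on how these events are scoped, not a verified diagnosis of this exact trace): statements run through EXEC() and sp_executesql are generally surfaced by the module/statement level events rather than only by sql_batch_completed, so adding sqlserver.sp_statement_completed (or sqlserver.module_end) to the session usually makes the inner EXEC TestThis visible:

ALTER EVENT SESSION [CaptureBatch] ON SERVER
ADD EVENT sqlserver.sp_statement_completed(
    ACTION(sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.sql_text)
    WHERE ([sqlserver].[database_id] > (4)));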
J. Mini (1225 rep)
Nov 20, 2024, 09:30 PM • Last activity: Nov 21, 2024, 08:18 AM
4 votes
1 answer
155 views
Are non-used CTEs never materialized in PostgreSQL 16?
I want to return my post rows in one of three sort orders, controlled by a sorting_param function parameter. The sorting is complex, doesn't lend itself to a simple ORDER BY CASE .... So I'm thinking of writing each sorting into its own CTE, and then picking one of the CTEs as the final output. However, I wouldn't want PostgreSQL to waste time on all three, just the one I selected. So will all three be materialized or only the one I picked? Or is there a better way to do this? PostgreSQL version 16. Simplified code below.

~~~pgsql
WITH posts_by_order1 AS (
    SELECT p.id FROM post p
    ORDER BY [some complex ordering 1]
    LIMIT 10
), posts_by_order2 AS (
    SELECT p.id FROM post p
    ORDER BY [some complex ordering 2]
    LIMIT 10
), posts_by_order3 AS (
    SELECT p.id FROM post p
    ORDER BY [some complex ordering 3]
    LIMIT 10
)
SELECT * FROM posts_by_order1 WHERE sorting_param = 0
UNION ALL
SELECT * FROM posts_by_order2 WHERE sorting_param = 1
UNION ALL
SELECT * FROM posts_by_order3 WHERE sorting_param = 2;
~~~

Unfortunately I can't do this:

~~~pgsql
ORDER BY
    CASE
        WHEN sorting_param = 0 THEN p.date
        WHEN sorting_param = 1 THEN p.name
        ELSE NULL
    END DESC,
    ...
~~~
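One common workaround for the type clash in that single CASE (a sketch, assuming the obstacle is mixing differently typed sort keys such as a date and a text column; the third key below is hypothetical): give each sort key its own CASE expression, so only the selected one is non-NULL:

~~~pgsql
SELECT p.id
FROM   post p
ORDER  BY
       CASE WHEN sorting_param = 0 THEN p.date  END DESC,
       CASE WHEN sorting_param = 1 THEN p.name  END DESC,
       CASE WHEN sorting_param = 2 THEN p.score END DESC   -- hypothetical third key
LIMIT  10;
~~~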
Björn Morén (143 rep)
Nov 8, 2024, 06:43 AM • Last activity: Nov 9, 2024, 01:21 PM