
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
0 answers
21 views
Oracle Parametrized Hibernate Queries
I am sending a parametrized named query in Java using Hibernate. Locally, the query takes milliseconds both from the app and from Toad. On UAT, however, the same query takes up to 5 minutes to return results when called through the app's API, while running it in Toad against UAT still takes milliseconds.
neameh baydoun (11 rep)
Jul 11, 2025, 09:20 AM • Last activity: Jul 11, 2025, 11:26 AM
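A common first step for "milliseconds in Toad, minutes from the app" is to check whether the bound statement got a different execution plan on UAT (bind peeking, or the JDBC driver binding strings as NVARCHAR and defeating an index). A hedged diagnostic sketch, assuming you can query the UAT instance's dynamic views; the sql_id and the LIKE filter text are placeholders:

-- Find the statement Hibernate is sending and see how long Oracle spends on it.
SELECT sql_id, child_number, plan_hash_value, executions,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 3) AS avg_elapsed_s
FROM   v$sql
WHERE  sql_text LIKE '%<part of the named query text>%';

-- Plan and runtime statistics for one child cursor of that statement.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('<sql_id>', NULL, 'ALLSTATS LAST'));

-- How the binds arrive (VARCHAR2 vs NVARCHAR2 matters for index use).
SELECT name, datatype_string, value_string
FROM   v$sql_bind_capture
WHERE  sql_id = '<sql_id>';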
0 votes
1 answer
213 views
Possible to update rows with data only accessible via API call?
I'm not good at what I'm working on, so apologies if some of my terminology is off. Also not sure if this belongs here or in a more programming-oriented area. I am looking for options to update rows of a MySQL database with data that is only accessible via a REST API. In other words: I will create a row with some of the data needed, and then I have 4-5 columns I'd like to update with data that I need to gather from the output of an API call.
AaronJAnderson (535 rep)
Feb 17, 2019, 11:47 PM • Last activity: Jun 15, 2025, 05:03 AM
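For the question above, MySQL itself cannot call a REST API; the usual pattern is a script, cron job, or application process that fetches the API response and then issues ordinary parameterized updates. A minimal sketch of that UPDATE, with hypothetical table and column names; the placeholder values would be supplied by whatever client gathers the API output:

-- Hypothetical schema: the row was created earlier with partial data,
-- and the remaining columns are filled in once the API response arrives.
UPDATE items
SET    price       = ?,        -- value taken from the API response
       description = ?,
       rating      = ?,
       last_synced = NOW()
WHERE  id = ?;                 -- the row created in the first step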
0 votes
1 answer
233 views
MySQL's mysql_affected_rows() detects `SELECT INTO` as affected row
Dump of the test database:
-- MariaDB dump 10.19  Distrib 10.9.6-MariaDB, for Linux (x86_64)
--
-- Host: localhost    Database: book
-- ------------------------------------------------------
-- Server version	10.9.6-MariaDB

--
-- Table structure for table publisher
--

CREATE TABLE publisher (
  ID int(10) unsigned NOT NULL AUTO_INCREMENT,
  PublisherName varchar(100) NOT NULL,
  PRIMARY KEY (ID),
  UNIQUE KEY publisher_UN (PublisherName)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci;


--
-- Dumping data for table publisher
--

LOCK TABLES publisher WRITE;

INSERT INTO publisher VALUES (1,'TestCase');

UNLOCK TABLES;

--
-- Dumping routines for database 'book'
--

DELIMITER ;;
CREATE  PROCEDURE ensurePublisher(
	IN v_PublisherName VARCHAR(100)
)
    MODIFIES SQL DATA
BEGIN
	DECLARE pubID INT unsigned;

	SELECT ID INTO pubID FROM publisher WHERE PublisherName = v_PublisherName LIMIT 1;

	IF ISNULL(pubID) THEN
	    INSERT INTO publisher (PublisherName) VALUES (v_PublisherName);
	END IF;
END ;;
DELIMITER ;

-- Dump completed on 2023-06-10 15:24:11
Calling CALL ensurePublisher("TestCase"); 100 times returns mysql_affected_rows() = 1 every time, even though the duplicates are never inserted (there is a unique key on PublisherName). Is that intended behavior? Please note that this is a minimal example I came up with to show the issue I have; this one could easily just be an INSERT IGNORE INTO.
Delicious Bacon (99 rep)
Jun 10, 2023, 01:47 PM • Last activity: Jun 14, 2025, 11:06 AM
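For the procedure above, one way to make the row count reflect what really happened is to drop the SELECT ... INTO and let a single conditional INSERT do the existence check, so ROW_COUNT() is 0 when the publisher already exists, and the caller can read that directly regardless of what mysql_affected_rows() reports for the CALL. A hedged sketch, not tested against the poster's exact setup:

DELIMITER ;;
CREATE PROCEDURE ensurePublisher2(
	IN v_PublisherName VARCHAR(100)
)
    MODIFIES SQL DATA
BEGIN
	-- Insert only when no matching row exists; no preceding SELECT ... INTO
	-- can influence the affected-row count any more.
	INSERT INTO publisher (PublisherName)
	SELECT v_PublisherName
	FROM DUAL
	WHERE NOT EXISTS (SELECT 1 FROM publisher WHERE PublisherName = v_PublisherName);

	-- 1 if a row was really inserted, 0 otherwise.
	SELECT ROW_COUNT() AS rows_inserted;
END ;;
DELIMITER ;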
0 votes
1 answer
25 views
Can you 'monitor' curl/ HTTP requests to Snowflake API to see issues?
I'm using a 3rd-party tool that can 'connect' to the Snowflake API but is throwing errors. They are translating/botching something on their end, as I can get the Snowflake API working easily with about five other tools. As a Snowflake admin, can I see what 'failed curl' or whatever text is being sent to Snowflake, in raw form, like the raw HTTPS/curl request? Would this be a monitor/trace of some kind? (I'm not the main admin.)
user45867 (1739 rep)
Mar 3, 2025, 04:32 PM • Last activity: Apr 1, 2025, 06:45 PM
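As far as I know Snowflake does not expose the raw HTTP/curl payload of a client request, but a role with access to the SNOWFLAKE database can at least see which client connected and why its logins or queries failed. A hedged sketch using the ACCOUNT_USAGE views (these lag real time by a while):

-- Failed authentication attempts, with the client type/version the driver reported.
SELECT event_timestamp, user_name, client_ip,
       reported_client_type, reported_client_version,
       error_code, error_message
FROM   snowflake.account_usage.login_history
WHERE  is_success = 'NO'
ORDER BY event_timestamp DESC
LIMIT  50;

-- Queries that reached the server but failed, including the text that was sent.
SELECT start_time, user_name, error_code, error_message, query_text
FROM   snowflake.account_usage.query_history
WHERE  execution_status = 'FAIL'
ORDER BY start_time DESC
LIMIT  50;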
0 votes
0 answers
78 views
How do I store a password in such a way that I can obtain the plain-text password again?
Consider the following scenario: My app is utilizing a third-party API which requires a username and password for authentication. Unlike most APIs, which have one username and password set for the developer to make calls, this API has a unique username and password pair for each customer. Thus, each customer using my app will need to input their username and password for their account on this other service, into my app. I then need to store this data in my own database, as I will need to call the third-party API on my customer's behalf repeatedly without asking them for their credentials over and over. (Several well-known apps use this model, such as the fintech company Plaid.) I know that the de-facto standard for storing passwords in a database is by using a one-way hashing algorithm. However, in my case, I can't see how this would work, as I need to present a plain-text password to the third-party API, and cannot do so if I've only stored hashed information. I've had trouble finding clear or trusted guidance about how to proceed in this situation. I'm open to the idea that there are other avenues with this scenario I haven't considered. How can I store the passwords in a secure way (if someone got access to my database, they couldn't get their hands on plain text passwords) in this situation?
JCollier (101 rep)
Mar 19, 2024, 04:45 AM • Last activity: Mar 19, 2024, 03:30 PM
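Since the password in the question above has to be presented back to the third party, hashing is out; the usual pattern is reversible (symmetric) encryption with a key that lives outside the database (KMS, vault, or app configuration), so a stolen dump alone yields nothing. As one hedged sketch, assuming MySQL 5.7+, the built-in functions can do the encryption if the application supplies the key material per call; the table and variable names are hypothetical:

-- Key material comes from the application/KMS at call time and is never stored.
SET block_encryption_mode = 'aes-256-cbc';
SET @key = UNHEX(SHA2(@app_supplied_key_material, 512));
SET @iv  = RANDOM_BYTES(16);            -- store the IV next to the ciphertext

INSERT INTO third_party_credentials (customer_id, api_username, pw_iv, pw_cipher)
VALUES (42, 'customer_login', @iv, AES_ENCRYPT(@plaintext_password, @key, @iv));

-- Decrypt just before calling the third-party API on the customer's behalf.
SELECT CAST(AES_DECRYPT(pw_cipher, @key, pw_iv) AS CHAR) AS plaintext_password
FROM   third_party_credentials
WHERE  customer_id = 42;

Doing the encryption in the application layer instead (so plaintext never reaches the database at all) is the same idea with a stronger boundary.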
0 votes
1 answer
221 views
How to use sp_execute_external_script to fetch USD Rate from an API and update a table?
I have this code that gives this error:

-- Set the API URL
SET @url = 'https://www.banxico.org.mx/SieAPIRest/service/v1/series/SF43718/dato/oportuno?token=734b37b3a5099a9d2d39d06478d47e359a9568cd6693116d95b710e6b8be0008';

-- Use built-in SQL Server functions to make HTTP request and obtain response
-- Requires SQL Server 2016 or later
DECLARE @json NVARCHAR(MAX);

-- Use sys.dm_exec_query_stats to enable SQL Server to execute external scripts
-- lndb is the database name
USE lndb;

EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
import requests
url = "https://www.banxico.org.mx/SieAPIRest/service/v1/series/SF43718/datos/oportuno?token=734b37b3a5099a9d2d39d06478d47e359a9568cd6693116d95b710e6b8be0008"
response = requests.get(url)
json_response = response.text
json_response',
    @output_data_1_name = N'json',
    @output_data_1 = @json OUTPUT;

-- Parse the JSON response using OPENJSON
DECLARE @usdRate DECIMAL(18, 6);

SELECT @usdRate = value
FROM OPENJSON(@json, '$.bmx.series.datos.dato')
WITH (value DECIMAL(18, 6) '$');

-- Display the result
SELECT @usdRate AS USDExchangeRate;

> Msg 297, Level 16, State 101, Procedure sp_execute_external_script, Line 1 [Batch Start Line 0]
> The user does not have permission to perform this action.

The admin already gave me permission. Also, what is sys.dm_exec_query_stats?
Luis Avalos (25 rep)
Dec 13, 2023, 03:52 PM • Last activity: Dec 14, 2023, 02:30 PM
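On the error above (Msg 297): the specific permission sp_execute_external_script needs is EXECUTE ANY EXTERNAL SCRIPT in the database where it runs, and external scripts must be enabled at the instance level; being granted permissions on other objects is not enough. A sketch of what an administrator would run (the principal name is a placeholder). Separately, sys.dm_exec_query_stats is a dynamic management view of performance statistics for cached query plans; it has nothing to do with enabling external scripts, so that comment in the script is misleading.

-- Instance level: allow external scripts (sysadmin required).
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE WITH OVERRIDE;

-- Database level: grant the specific permission to the user running the script.
USE lndb;
GRANT EXECUTE ANY EXTERNAL SCRIPT TO [YourUserName];   -- placeholder principal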
0 votes
2 answers
190 views
Growing SQL Server Data Warehouse - How to reorganize efficiently?
We've been using SQL Server for quite a while now in our company to host all application data. We use a typical set-up with data being loaded from various data sources into a Staging Area, which afterwards feeds different datamarts for their specific purposes (with tailored data stored there).

However, from my point of view, this generates data "silos", which duplicates so much data and makes it hard to keep it all somewhat synchronous. Furthermore, as the number of solutions is growing constantly, the number of "silo" datamarts is growing proportionally, as the requirements are always slightly different and we have a clear instruction to use separate datamarts for each solution. Apps use direct connections to the datamarts to consume and manipulate the data (as needed).

I was thinking a lot about this recently, and there are some other initiatives ongoing at the moment to modernize our backend, so I did some research and my idea is to propose the following:

1. Keep the Staging Area as-is (plain import of data from different sources into one datamart). There is one schema for each source and corresponding SSIS packages to load this data. This all should stay.
2. Create a schema for each solution within the Staging Area. Access rights can then be managed on this level to ensure app consumers can't access the "import schemas" or others, but only the ones that belong to the apps they have access to. As soon as point 5 (API) is implemented, authorization should take place there before connecting to the datamart.
3. Replace the "silo" datamarts and the corresponding ETL jobs (SSIS packages) that currently load them with materialized (aka "indexed") views in the newly defined schemas.
4. Move any tables that are used with CRUD operations from the former app datamarts into the corresponding new schema in the "Staging Area" datamart (which needs to be renamed, as it now covers the entire data warehouse in one datamart).
5. Implement a simple API that features GET requests for the defined views and POSTs for the required CRUD operations on the tables from point 4.

Of course each of these steps has a great impact on the solutions themselves and where they consume data from. Also, of course, many hours of development would be needed. But this is not the point for the time being; it's more a general question of whether this makes sense, or how you would set this up on SQL Server. I just have the feeling that we build so much around the data, which also makes it hard in other areas to improve / speed up backend processes.

Many thanks in advance and all the best,
Benny
Benny (3 rep)
Oct 26, 2023, 10:30 AM • Last activity: Nov 1, 2023, 11:33 AM
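Point 3 of the proposal above hinges on indexed (materialized) views, which on SQL Server come with notable restrictions: SCHEMABINDING, two-part names, no outer joins, COUNT_BIG(*) required alongside aggregates with GROUP BY, and no aggregates over nullable expressions. A minimal sketch of the pattern, with hypothetical schema, table, and column names, just to illustrate what replacing a silo datamart table with a view would look like:

-- A solution-specific schema exposes a pre-aggregated view over staging data.
-- Amount is assumed NOT NULL, which an indexed view requires for SUM.
CREATE VIEW solution_a.SalesByDay
WITH SCHEMABINDING
AS
SELECT o.OrderDate,
       o.ProductId,
       SUM(o.Amount) AS TotalAmount,
       COUNT_BIG(*)  AS RowCnt          -- required for an indexed view with GROUP BY
FROM   staging.Orders AS o
GROUP BY o.OrderDate, o.ProductId;
GO

-- The unique clustered index is what materializes the view.
CREATE UNIQUE CLUSTERED INDEX IX_SalesByDay
    ON solution_a.SalesByDay (OrderDate, ProductId);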
0 votes
1 answer
59 views
Does the BigQuery API offer a way to retrieve info on scheduled queries?
Using the BigQuery C# API, I can retrieve a list of job IDs:

BigQueryClient _client = BigQueryClient.Create(...);
...
foreach (var page in _client.ListJobs(projectId).AsRawResponses())
    if (page.Jobs != null) // This does happen occasionally
        foreach (var job in page.Jobs)
            yield return job.Id;

This seems to give me all jobs ever run (or at least within some significant time horizon; it's tens of thousands of records). Still, I'd like to get the details for some jobs and see if I'm at least on the right track. I can retrieve a BigQueryJob object using BigQueryClient.GetJob() (there's no C# doc, but the Java sample code is similar), but the information returned is very limited: current state, any errors encountered, some basic statistics, etc. There's nothing about schedules. Is there a separate command to retrieve details on scheduled queries? I can't find any such methods in the client.
Jon of All Trades (5987 rep)
Sep 13, 2023, 03:16 PM
1 vote
1 answer
849 views
Am I abusing Row Level Security? (RLS)
I have set up a classic SQL Server multi-tenant DB (shared database, shared schema), which will be cloud-hosted in Azure. All access to the DB is via a minimal API. Every row has a tenantId, with a SQL Server security policy set up so tenants can only see data for their tenantId. This all works perfectly. Each tenant has multiple users, with each user having multiple jobs. I thought it would be a good idea to prevent the possibility of users getting other users' jobs by setting up an additional policy filter by (tenantId, userId) on the jobs table. This also works perfectly, but I have to wonder if it is overkill and better handled in the API layer. Thanks. NB: a block policy is applied as well.
Lindsay Mathieson (11 rep)
Jul 14, 2023, 03:53 PM • Last activity: Jul 14, 2023, 04:15 PM
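It is a defensible design either way; one thing worth noting is that the per-user restriction only needs its own predicate function, typically reading the user identity the API stamps into SESSION_CONTEXT when it opens the connection. A hedged sketch of what the jobs-table policy might look like (object and column names are hypothetical):

-- Predicate: a row is visible only when both tenant and user match the values
-- the API put into SESSION_CONTEXT for this connection.
CREATE FUNCTION security.fn_job_predicate(@TenantId int, @UserId int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS allowed
    WHERE  @TenantId = CAST(SESSION_CONTEXT(N'TenantId') AS int)
      AND  @UserId   = CAST(SESSION_CONTEXT(N'UserId')   AS int);
GO

CREATE SECURITY POLICY security.JobPolicy
    ADD FILTER PREDICATE security.fn_job_predicate(TenantId, UserId) ON dbo.Jobs,
    ADD BLOCK  PREDICATE security.fn_job_predicate(TenantId, UserId) ON dbo.Jobs AFTER INSERT
WITH (STATE = ON);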
0 votes
1 answer
524 views
MariaDB Maxscale REST API can't show result
Hi, I have already set up maxadmin on MaxScale and can access it from the REST API. This is my configuration:

.....
[CLI_Service]
type=service
router=cli

[CLI_Listener]
type=listener
service=CLI_Service
protocol=maxscaled
address=192.168.101.107
socket=default

[MaxAdmin_Inet]
type=listener
service=CLI_Service
protocol=HTTPD
address=192.168.101.107
port=8010

But when I test the URL like this:

curl --include --basic --user "admin:mariadb" http://192.168.101.107:8010/v1/servers

the result is like this:

HTTP/1.1 200 OK
Date: Wed, 13 May 2020 11:33:15 GMT
Server: MaxScale(c) v.1.0.0
Connection: close
WWW-Authenticate: Basic realm="MaxInfo"
Content-Type: application/json

Commands must consist of at least two words. Type help for a list of commands

Am I missing something in the configuration?
febry (57 rep)
May 13, 2020, 04:35 AM • Last activity: May 15, 2021, 07:03 AM
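The listener above appears to be the old maxinfo/HTTPD interface (note the "MaxInfo" realm in the response), which only understands maxadmin-style commands; that would explain the "Commands must consist of at least two words" reply. The actual REST API (the /v1/... endpoints) is served by MaxScale's admin interface, configured in the [maxscale] section rather than as a listener. A hedged sketch, assuming a MaxScale version that ships the REST API; address and credentials as in the question:

[maxscale]
admin_host=192.168.101.107
admin_port=8989

curl --include --basic --user "admin:mariadb" http://192.168.101.107:8989/v1/servers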
2 votes
0 answers
56 views
What should the schema look like for an API-based SaaS product?
I'm building a developer tool product, which will be accessible solely by APIs. Following are a few features I've identified that will be needed:

1. Issuing/refreshing API keys
2. Purchasing API credits
3. Subscriptions for API credits
4. Monitoring usage of API for each user

What should the SCHEMA look like? Are there any open examples of schemas for such a product? Is there a term for this? Note that this would be quite similar to what something like Stripe or any API-based SaaS is doing. Just looking for a good schema example only. Any ideas?
Aditya Anand (121 rep)
Nov 18, 2020, 05:04 AM
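There is no single canonical name for this, but "API key management / metering / usage-based billing" schemas for multi-tenant SaaS tend to share the same core tables. A hedged starting-point sketch (MySQL-flavoured DDL, all names hypothetical):

CREATE TABLE account (
  id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  email       VARCHAR(255) NOT NULL UNIQUE,
  created_at  DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE api_key (
  id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  account_id  BIGINT UNSIGNED NOT NULL,
  key_hash    CHAR(64) NOT NULL,          -- store a hash, never the raw key
  created_at  DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
  revoked_at  DATETIME NULL,
  FOREIGN KEY (account_id) REFERENCES account(id)
);

CREATE TABLE credit_ledger (              -- purchases add credits, usage subtracts
  id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  account_id  BIGINT UNSIGNED NOT NULL,
  delta       INT NOT NULL,               -- e.g. +1000 on purchase, -1 per call
  reason      VARCHAR(50) NOT NULL,       -- 'purchase', 'subscription_grant', 'api_call'
  created_at  DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (account_id) REFERENCES account(id)
);

CREATE TABLE subscription (
  id                 BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  account_id         BIGINT UNSIGNED NOT NULL,
  plan_code          VARCHAR(50) NOT NULL,
  credits_per_period INT NOT NULL,
  current_period_end DATETIME NOT NULL,
  FOREIGN KEY (account_id) REFERENCES account(id)
);

CREATE TABLE api_usage (
  id            BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  api_key_id    BIGINT UNSIGNED NOT NULL,
  endpoint      VARCHAR(100) NOT NULL,
  called_at     DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
  credits_spent INT NOT NULL DEFAULT 1,
  FOREIGN KEY (api_key_id) REFERENCES api_key(id)
);

The ledger design keeps the current balance derivable (SUM(delta)) and auditable, which matters for billing.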
1 vote
1 answer
935 views
Listing function input and output details for stored functions in Postgres 11.5+
I'm trying to build a query that extracts the input and output definitions for the functions in a schema. We've got multiple client types written in multiple languages, and plan to move many queries into stored functions. That way, each client can call a function without writing their own SQL code. I'd like to at least semi-automate the documentation, so I want to extract the inputs and outputs for each method. As an example, here's the sort of output I'm looking for from a sample method, listed below: schema function_name lang return_type run_as owner_name strict returnset volatile comment is_input item_number name type default api push_log_count_since plpgsql record INVOKER user_bender f t v t 1 since_dts timestamptz api push_log_count_since plpgsql record INVOKER user_bender f t v f 1 days_ago int8 api push_log_count_since plpgsql record INVOKER user_bender f t v f 2 table_name citext api push_log_count_since plpgsql record INVOKER user_bender f t v f 3 server_name citext api push_log_count_since plpgsql record INVOKER user_bender f t v f 4 push_count int8 The method has one input named since_dts and returns a table of results with four columns, as listed above and shown below: CREATE OR REPLACE FUNCTION api.push_log_count_since (since_dts timestamptz) RETURNS TABLE( days_ago int8, table_name extensions.citext, server_name extensions.citext, push_count int8) AS $BODY$ DECLARE -- Subtract two timestamptz values, get an interval, pull out just the days. days_ago bigint := date_part('day', now()::timestamptz - since_dts)::bigint; BEGIN RETURN QUERY SELECT days_ago, ib_table_name as table_name, data_file_info.server_name_ as server_name, count(*) as push_count FROM ascendco.push_log JOIN ascendco.data_file_info on (data_file_info.id = push_log.data_file_id) WHERE push_log.push_dts >= since_dts GROUP BY ib_table_name, data_file_info.server_name_ ORDER BY ib_table_name, data_file_info.server_name_ ; END; $BODY$ LANGUAGE plpgsql VOLATILE SECURITY DEFINER COST 100 ROWS 1000; ALTER FUNCTION api.push_log_count_since (timestamptz) OWNER TO user_bender; I'm not wedded to my big flat output, and will likely convert the data to a JSON: { "schema":"api", "name":"push_log_count_since", "lang":"plpgsql", "return_type":"record", "run_as":"INVOKER", "owner_name":"user_bender", "strict":"false", "returnset":"true", "volatile":"v", "comment":"An COMMENT ON value set on the function.", "inputs":[ { "item_number":1, "name":"since_dts", "type":"timestamptz", "default":null } ], "outputs":[ { "item_number":1, "name":"days_ago", "type":"int8" }, { "item_number":2, "name":"table_name", "type":"citext" }, { "item_number":3, "name":"sever_name", "type":"citext" }, { "item_number":4, "name":"push_count", "type":"int8" } ] } It looks like the data I need is in the pg_proc system catalog, but I'm not sure how to extract it in a way that I can readily use. I've been bashing away at this, and have gotten a lot closer than when I originally posted. The information_schema.parameters view makes things easier. This script is a bit involved, and I'd guess that there are much simpler ways to achieve the same results. I'd be grateful for any help or suggestions on how to improve this. -- The script below takes a function's name+oid and returns a row for each input or output parameter. -- Information about the routine overall is included on each row, such as schema, name, comments. -- For each parameter, you get the absolute ordinal position, positin in the input/output list, -- data type and default value. 
It's a bit of a Frankenscript, but it's getting close to what I'm after. -- If I can get this working, I'll likely wrap it in a function. WITH function_name_parts AS ( -- The information_shcema catalogs concatenate function_name + oid. -- This provides a unique name for overloaded functions. Parse these bits out. select * from string_to_array('push_log_count_since_329059','_') as parts ), function_identity AS ( -- Take the array from above and built up the combined name, name, and oid. -- ! There must be a simpler way to do all of this. select array_to_string(parts,'_') AS function_name_and_oid, array_to_string(array_remove_element(parts,cardinality(parts)),'_') AS function_name, parts[cardinality(parts)]::oid as function_oid from function_name_parts ) -- Grab information from parameters and routines. I haven't figured out how to get the OWNER out of pg_proc. -- The pg_get_userbyid(pg_proc.proowner) function seems like it should work, but I've not sorted out how to get that working. SELECT parameters.specific_catalog AS database_name, function_name, function_oid, ordinal_position, row_number() OVER (partition by parameter_mode order by ordinal_position) as input_output_position, parameter_mode, parameter_name, parameters.data_type, parameters.udt_name, parameter_default, external_language, security_type, get_function_owner_name(function_oid) as owner_name, -- Is there a smarter way to do this than a custom function? obj_description(function_oid) AS comment FROM function_identity, information_schema.routines LEFT JOIN information_schema.parameters ON (information_schema.routines.specific_name = parameters.specific_name) WHERE parameters.specific_name = function_identity.function_name_and_oid ORDER BY ordinal_position This returns output like the following: database_name function_name function_oid ordinal_position input_output_position parameter_mode parameter_name data_type udt_name parameter_default external_language security_type owner_name comment squid push_log_count_since 329059 1 1 IN since_dts timestamp with time zone timestamptz NULL PLPGSQL DEFINER user_bender NULL squid push_log_count_since 329059 2 1 OUT days_ago bigint int8 NULL PLPGSQL DEFINER user_bender NULL squid push_log_count_since 329059 3 2 OUT table_name USER-DEFINED citext NULL PLPGSQL DEFINER user_bender NULL squid push_log_count_since 329059 4 3 OUT server_name USER-DEFINED citext NULL PLPGSQL DEFINER user_bender NULL squid push_log_count_since 329059 5 4 OUT push_count bigint int8 NULL PLPGSQL DEFINER user_bender NULL The code for the array_remove_element function is here: CREATE OR REPLACE FUNCTION tools.array_remove_element(anyarray, int) RETURNS anyarray LANGUAGE sql IMMUTABLE AS 'SELECT $1[:$2-1] || $1[$2+1:]'; COMMENT ON FUNCTION tools.array_remove_element(anyarray, int) IS ' From the ever-helpful Erwin Brandstetter, renamed for our schema and idiom. https://dba.stackexchange.com/questions/94639/delete-array-element-by-index '; ALTER FUNCTION tools.array_remove_element (anyarray, int) OWNER TO user_bender; The code for the get_function_owner_name function is found below: CREATE OR REPLACE FUNCTION tools.get_function_owner_name(function_oid oid) RETURNS text LANGUAGE sql AS 'select rolname::text from pg_authid where oid = (select pg_proc.proowner from pg_proc where oid = function_oid)'; ALTER FUNCTION tools.get_function_owner_name(function_oid oid) OWNER TO user_bender;
Morris de Oryx (939 rep)
Jan 23, 2020, 11:54 AM • Last activity: Jan 24, 2020, 04:32 AM
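For what it's worth, much of the per-function metadata in the script above can be pulled straight from pg_catalog with the built-in helper functions (pg_get_function_arguments, pg_get_function_result, pg_get_userbyid), which also covers the owner-name question without a custom function. A hedged sketch for one schema, assuming Postgres 11+ so prokind is available:

SELECT n.nspname                                    AS schema,
       p.proname                                    AS function_name,
       l.lanname                                    AS lang,
       pg_get_function_result(p.oid)                AS return_type,
       CASE WHEN p.prosecdef THEN 'DEFINER' ELSE 'INVOKER' END AS run_as,
       pg_get_userbyid(p.proowner)                  AS owner_name,
       p.proisstrict                                AS strict,
       p.proretset                                  AS returnset,
       p.provolatile                                AS volatile,
       obj_description(p.oid, 'pg_proc')            AS comment,
       pg_get_function_arguments(p.oid)             AS arguments
FROM   pg_proc p
JOIN   pg_namespace n ON n.oid = p.pronamespace
JOIN   pg_language  l ON l.oid = p.prolang
WHERE  n.nspname = 'api'
  AND  p.prokind = 'f';   -- plain functions only

The argument list comes back as a single text string here; information_schema.parameters, as in the question, is still the cleaner source when you need one row per parameter.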
0 votes
1 answer
101 views
ORDER from 2 tables to achieve a specific order returned by an API
There's an external API that returns image similarities, based on an "image ID" and a "image version" (a same image can have multiple versions). The similarity ORDER is given by the API. The API returns something like this:
+---------+---------+
|   id    | version |
+---------+---------+
|  37967  |    2    |
|    236  |    1    |
|  37967  |    1    |
|   1413  |    2    |
+---------+---------+
Then I need to retrieve entries in a MySQL database (containing 2 tables), and keep the same ORDER as the one returned by the API. That's where I'm having a problem. Here are the 2 tables:
"img" MySQL table:
+---------+-----------------------+
|   id    | lots of other columns |
+---------+-----------------------+
|    236  |         data          |
|   1413  |         data          |
|  37967  |         data          |
+---------+-----------------------+
"vers" MySQL table:
+---------+---------+
|   id    | version |
+---------+---------+
|    236  |    1    |
|   1413  |    1    |
|   1413  |    2    |
|  37967  |    1    |
|  37967  |    2    |
|  37967  |    3    |
+---------+---------+
The closest result I can get is by using ORDER BY FIELD, but it's still not the same order as the one returned by the API.
My query:

SELECT i.id, v.version
FROM img i
LEFT JOIN vers v ON i.id=v.id 
WHERE ((i.id=37967 AND v.version=2) 
OR (i.id=236 AND v.version=1) 
OR (i.id=37967 AND v.version=1) 
OR (i.id=1413 AND v.version=2)) 
ORDER BY FIELD(i.id, 37967,236,37967,1413), FIELD(v.version,2,1,1,2)
Results:
+---------+---------+
|   id    | version |
+---------+---------+
|  37967  |    2    |
|  37967  |    1    |
|    236  |    1    |
|   1413  |    2    |
+---------+---------+
As you can see, the order is not exactly the one returned by the API :( Any help would be appreciated, thank you all in advance.
Daaaaa (3 rep)
Dec 10, 2019, 12:44 PM • Last activity: Dec 11, 2019, 09:37 AM
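ORDER BY FIELD cannot express the wanted order here, because the same id (37967) has to sort differently depending on its version, while FIELD() ranks each column independently. One standard workaround is to join against a derived table that carries the API's ordering explicitly, as sketched below against the same tables as in the question:

SELECT i.id, v.version
FROM (
    SELECT 37967 AS id, 2 AS version, 1 AS api_order
    UNION ALL SELECT   236, 1, 2
    UNION ALL SELECT 37967, 1, 3
    UNION ALL SELECT  1413, 2, 4
) AS wanted
JOIN vers v ON v.id = wanted.id AND v.version = wanted.version
JOIN img  i ON i.id = v.id
ORDER BY wanted.api_order;

The application builds the derived table (or a temporary table) from the API response, so the (id, version, position) triples stay together instead of being split across two FIELD() lists.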
0 votes
0 answers
20 views
[Linux LEAP 15.1][mariadb 10.4.3][C API] The UTF8 characters are not handled as before (10.2.9)
This C source code extract is expected to execute a query... but even if I change the name of the table from "Localités" to "Localites", the program no longer displays the UTF-8 characters the way the database does when I execute the same query on the command line. [This post] The situation "before" gave the results that were expected: [Expected situation] I really don't know how to tell the client (as the server respects Unicode) to display the data as this program did before the upgrade :{
Hurukan Imperial Stepper (1 rep)
Nov 4, 2019, 01:24 AM
5 votes
1 answer
1078 views
System.Web in SQL Server CLR Function
I have done some light research into this topic, and I would like to know what all the pros and cons of enabling/registering this particular .dll within SQL Server are.

Background information: we are integrating with a third-party application (not my decision, unfortunately) which requires this .dll for some of its other .dlls. What I need the CLR function for is to be able to write SQL queries in SSMS and have that data sent to the third-party application's API, which would then in turn do the correct data loads/changes (inserts and deletes to/from this application have to be done via its API).

*EDIT - maybe I should've included this detail:* When trying to register my C# class I got the error "system.web not registered blah blah blah", which in turn prompted my research on this topic. *end edit*

So, my conundrum is that to be able to register my C# class/.dll, I have to register all the dependent .dlls; however, based on my research I know that this particular one can be quite problematic. Seeing as I am not terribly familiar with the pitfalls outside of my Google research, I was wondering if one of you fine people could help me understand how to make the best decision in regards to this. Also, what else can I add to this post so that giving insight is easier? I wasn't too sure the C# code was relevant. I understand this might be a bit broad, but I was hoping that it's specific enough to not get flagged.

To be more specific about what's occurring here (per Solomon's request):

1. The 3rd-party app uses an "API" (used loosely, because I am told it is not a great API) to send data back and forth. You'll notice it calls an Importer function which only takes a data table or an Excel file, which it converts. I have no other option, as the company told me that inserts and deletes via normal XML are terribly slow and have unexpected behavior.
2. The .DLL that references System.Web is referenced within my C# class, which is required to be able to send it data in the first place.
3. In regards to: *Why would you not be able to use the methods I mentioned? They already exist in SQL Server's CLR host. This is for a web service, right?* I am not sure I know enough about APIs in general to answer this. I very well could and just lack the knowledge and experience to do so. It also might be due to the fact that I have limitations on how I can interact with this particular API, and I am not sure how those limitations apply to these methods. (I will investigate further and see if I can answer this question myself.)

Though the more I think about it, I could create a "middle man" class which SQL Server can call, which would then call another class that has all the correct references, and that might get me past my current situation. I am still, however, interested in specific feedback so that I can learn from this.

Here is my C# class:

using Perfion.Api;        // this .DLL references System.Web
using Perfion.Api.Import;
using System;
using System.Data;
using System.Data.SqlClient;
using System.IO;
using System.Linq;
using System.Text;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

namespace PerfInsert
{
    public static class PerfionInsert
    {
        public static bool CreateCommand(string tblString, string featureName, string connectionString, string perfionConnectionString, string logFile)
        {
            StringBuilder logInfo = new StringBuilder();
            try
            {
                var wList = new Regex(@"[^0-9a-z\.\[\]_]#", RegexOptions.IgnoreCase);
                if (wList.IsMatch(tblString))
                {
                    logInfo.AppendLine($"{DateTime.UtcNow} - Regex Validation Failed for Table Name!");
                    return false;
                }
                using (SqlConnection connection = new SqlConnection(connectionString))
                {
                    var qryString = "SELECT * FROM " + tblString;
                    using (SqlCommand command = new SqlCommand(qryString, connection))
                    {
                        connection.Open();
                        using (var dataReader = command.ExecuteReader())
                        using (var dataTable = new DataTable())
                        {
                            dataTable.Load(dataReader);
                            PerfionApi api = new PerfionApi(perfionConnectionString);
                            Importer importer = new Importer(api.Connection);
                            importer.Status += (sender, e) => { logInfo.AppendLine($"{DateTime.UtcNow} - {e.Title}"); };
                            importer.LoadData(dataTable);
                            importer.ImportToDatabase(featureName);
                        }
                    }
                }
                return true;
            }
            catch (Exception ex)
            {
                logInfo.AppendLine($"{DateTime.UtcNow} - {ex.ToString()}");
            }
            finally
            {
                File.AppendAllText(logFile, logInfo.ToString());
            }
            return false;
        }
    }
}
Doug Coats (349 rep)
Oct 28, 2019, 02:06 PM • Last activity: Oct 31, 2019, 09:58 PM
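On the registration error itself: System.Web is not one of the .NET Framework libraries supported inside SQLCLR, so it can typically only be loaded with PERMISSION_SET = UNSAFE, and on SQL Server 2017+ "clr strict security" additionally requires the assembly (or its hash) to be trusted or signed first. A hedged sketch of the registration steps an administrator would run; the file path is a placeholder, and the user assembly would be created the same way afterwards:

-- Trust the assembly by hash (SQL Server 2017+ with clr strict security).
DECLARE @hash varbinary(64);
SELECT @hash = HASHBYTES('SHA2_512', BulkColumn)
FROM OPENROWSET(BULK N'C:\clr\System.Web.dll', SINGLE_BLOB) AS f;
EXEC sys.sp_add_trusted_assembly @hash = @hash, @description = N'System.Web';

-- Register the dependency, then the user assembly that references it.
CREATE ASSEMBLY [System.Web]
FROM N'C:\clr\System.Web.dll'
WITH PERMISSION_SET = UNSAFE;

The trade-off is that every unsupported dependency of System.Web also has to be registered, pinned to the exact framework version on the server, and re-validated after .NET servicing updates, which is why a "middle man" service outside the database is often the lower-maintenance option.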