Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
2
votes
1
answers
183
views
Is it safe to rely on transactions in Firebird 2.5 Embedded DB in case of power outage?
How safe is it to say that the following list of actions will never be reflected in the DB in case of a power outage somewhere in the middle of line #2, before the transaction is committed?
#1 begin transaction
#2 delete record in table A that cascade deletes another record in table B
#3 update another record in table C
#4 commit transaction
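For illustration, here is the same sequence as a minimal ADO.NET sketch against the Firebird .NET provider (FirebirdSql.Data.FirebirdClient); the connection string, table, and column names are hypothetical:

```cs
using FirebirdSql.Data.FirebirdClient;

// Hypothetical schema: A(id), B(a_id REFERENCES A ON DELETE CASCADE), C(id, val).
var connString = "database=app.fdb;user=SYSDBA;password=masterkey;server type=Embedded"; // illustrative
using var conn = new FbConnection(connString);
conn.Open();
using var tx = conn.BeginTransaction();                  // #1 begin transaction
try
{
    using (var del = new FbCommand("DELETE FROM A WHERE id = @id", conn, tx))
    {
        del.Parameters.AddWithValue("@id", 1);           // #2 cascade-deletes the row in B
        del.ExecuteNonQuery();
    }
    using (var upd = new FbCommand("UPDATE C SET val = @v WHERE id = @id", conn, tx))
    {
        upd.Parameters.AddWithValue("@v", 42);           // #3 update another record
        upd.Parameters.AddWithValue("@id", 1);
        upd.ExecuteNonQuery();
    }
    tx.Commit();                                         // #4 commit transaction
}
catch
{
    tx.Rollback(); // until Commit succeeds, none of the work above should persist
    throw;
}
```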
Andrei Orlov
(537 rep)
Oct 28, 2020, 02:45 AM
• Last activity: Apr 12, 2024, 02:44 PM
0
votes
0
answers
698
views
Ado net source fail to acquire connection
I have an SSIS package that connects to an Azure SQL database using an ADO.NET connection manager with the SqlClient data provider. The authentication method used is Integrated Security. The SSIS package executes fine through Visual Studio 2019. However, when executed through Autosys it fails. The error in the log file says **ADO NET Source failed to acquire connection with error message: one or more errors occurred.**
The job machine has .NET 8 and .NET Framework 4.8. I searched the internet and found that the SqlClient data provider comes by default with the .NET Framework. The system requirements seem fine to me. What could be the reason?
Agent SKY
(1 rep)
Jan 10, 2024, 03:54 AM
5
votes
1
answers
385
views
Is there some fundamental reason that TransactionScope does not work as expected?
For the following code...
using System.Transactions;
using Microsoft.Data.SqlClient;

var connectionString = "...";

using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew,
           new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot }))
{
    TestExec(connectionString);
    TestExec(connectionString);
    scope.Complete();
}

TestExec(connectionString);

static void TestExec(string connectionString)
{
    using var conn = new SqlConnection(connectionString);
    conn.Open();
    var cmd = new SqlCommand
    {
        CommandText = @"
DECLARE @useroptions table (Opt varchar(50), Value varchar(50));
INSERT @useroptions EXEC ('dbcc useroptions')
SELECT Value AS [isolation level], CURRENT_TRANSACTION_ID() AS [tran_id], @@SPID FROM @useroptions WHERE Opt = 'isolation level'",
        Connection = conn
    };
    using var reader = cmd.ExecuteReader();
    while (reader.Read())
        Console.WriteLine($"Isolation Level:{reader.GetString(0)} TransactionId:{reader.GetInt64(1)}, Session:{reader.GetInt16(2)}");
}
When the Connection String is for on premise SQL Server then typical output is below
> Isolation Level:snapshot TransactionId:2696760, Session:57
> Isolation Level:snapshot TransactionId:2696760, Session:57
> Isolation Level:snapshot TransactionId:2696821, Session:57
When the Connection String is for Azure SQL Database then typical output is...
> Isolation Level:snapshot TransactionId:4636394, Session:92
> Isolation Level:read committed snapshot TransactionId:4636394, Session:92
> Isolation Level:read committed snapshot TransactionId:4636446, Session:92
Neither output is really what is hoped for. In the "On Premise" case the isolation level is not reset when the scope ends and a new transaction starts. This causes many complaints (such as this StackOverflow question). In the "SQL Database" case the isolation level is reset, but the reset happens too early and the level is not maintained for the whole scope.
I came across this behaviour because, in my case, the first query does a read and the second a write, and the code was relying on optimistic concurrency exceptions which never came (as the write query was not running at snapshot as expected). It looks as though the "on premise" behaviour was temporarily the same as the SQL Database behaviour but this was rolled back. This second bug is reported here.
Can anyone shed any light on this matter?
* Is the difference in behaviour due to differing implementations in the server (of sp_reset_connection) or the client?
* If it is server side, was this a deliberate design decision to fix the issue on premise as per KB3025845 but leave Azure behaving differently?
* If that was a deliberate design decision, is there some fundamental reason why we can't get both cases working as hoped for?
Martin Smith
(87941 rep)
Jul 3, 2023, 06:23 PM
• Last activity: Jul 3, 2023, 11:24 PM
3
votes
1
answers
432
views
MSSQL Server XE column client_connection_id accessible through system view/table for auditing?
In our .NET application we create a SqlConnection to talk to the database. After the connection is opened, we get the ClientConnectionId, a GUID.
Within the extended events I can trace what happens on the database regarding this ClientConnectionId via the column client_connection_id (e.g. in the sql_statement_completed event).
Looking at sys.dm_exec_sessions or sys.dm_exec_connections, I can't find a match for the client_connection_id from the extended event. sys.dm_exec_connections has a column connection_id, but that UUID differs from the one created by the SqlClient in .NET.
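For reference, a minimal sketch of where that GUID comes from on the client (SqlConnection.ClientConnectionId is part of the SqlClient API; the connection string is illustrative):

```cs
using System;
using System.Data.SqlClient; // also available in Microsoft.Data.SqlClient

using var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"); // illustrative
conn.Open();
// This GUID is what Extended Events expose as the client_connection_id column:
Guid clientConnectionId = conn.ClientConnectionId;
Console.WriteLine($"ClientConnectionId: {clientConnectionId}");
```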
### Question ###
Where does the SQL Server store the client_connection_id (table/view) and is this accessible without creating an extended event?
### Background ###
Our support team still uses the SQL Profiler and doesn't know anything about extended events. The profiler is good enough to get some small insight into what happens on the database, which can then be handed over to the development team for further investigation.
Usually we used the SPID, but there will be some changes made to the application: we will use connection pooling in the future, and with EF Core there will be no permanent connection opened for each client, but rather a new connection for each DB query.
Now I am trying to find a way to still be able to audit the database per client (one that is not too complex for the support team). Manipulating the application name or the hostname in the connection string would be another possibility, but I doubt this is recommended.
Any thoughts and help are warmly appreciated.
rogaa
(131 rep)
Nov 14, 2019, 10:39 AM
• Last activity: Mar 4, 2023, 07:05 AM
39
votes
6
answers
37411
views
Make SqlClient default to ARITHABORT ON
First things first: I'm using MS SQL Server 2008 with a database at compatibility level 80, and connecting to it with .NET's System.Data.SqlClient.SqlConnection.
For performance reasons I've created an indexed view. As a result, updates to tables referenced in the view need to be done with ARITHABORT ON. However, the profiler shows that SqlClient is connecting with ARITHABORT OFF, so updates to those tables are failing.
Is there a central configuration setting to make SqlClient use ARITHABORT ON? The best I've been able to find is to manually execute that each time a connection is opened, but updating the existing code base to do this would be a fairly large task, so I'm keen to find a better way.
Peter Taylor
(531 rep)
May 3, 2011, 04:17 PM
• Last activity: Feb 2, 2023, 08:14 AM
4
votes
2
answers
40016
views
Currently all connections getting "Timeout expired"
I have an application which connects to SQL Server. Currently I am getting:
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
When I ran:
SELECT
    DB_NAME(dbid) as DBName,
    COUNT(dbid) as NumberOfConnections,
    loginame as LoginName
FROM
    sys.sysprocesses
WHERE
    DB_NAME(dbid) = 'MyDb'
GROUP BY
    dbid, loginame
I got:
DBName NumberOfConnections LoginName
MyDb   10                  sa
MyDb   109                 MyUser
The status of all processes is sleeping and cmd is AWAITING COMMAND.
Here is my code:
private async Task<object> ExecuteAsync(ExecutionType executionType, CommandType commandType, string commandText, IsolationLevel isolationLevel, SqlParameter[] parameters, Func<SqlDataReader, object> callback = null)
{
    var stopwatch = new Stopwatch();
    stopwatch.Start();
    using (var connection = new SqlConnection(_settings.DatabaseConnectionString))
    {
        using (var command = new SqlCommand(commandText, connection) { CommandType = commandType })
        {
            command.Parameters.AddRange(parameters);
            await connection.OpenAsync().ConfigureAwait(false);
            command.CommandTimeout = _settings.CommandTimeout;
            var transaction = connection.BeginTransaction(isolationLevel);
            command.Transaction = transaction;
            try
            {
                object result;
                switch (executionType)
                {
                    case ExecutionType.Reader:
                        var reader = await command.ExecuteReaderAsync().ConfigureAwait(false);
                        using (reader)
                        {
                            var list = new List<object>();
                            while (reader.Read())
                            {
                                if (callback != null)
                                {
                                    var item = callback(reader);
                                    if (item != null)
                                    {
                                        list.Add(item);
                                    }
                                }
                            }
                            result = list;
                        }
                        break;
                    case ExecutionType.NonQuery:
                        result = await command.ExecuteNonQueryAsync().ConfigureAwait(false);
                        break;
                    default:
                        result = await command.ExecuteScalarAsync().ConfigureAwait(false);
                        break;
                }
                transaction.Commit();
                stopwatch.Stop();
                var elapsed = stopwatch.Elapsed;
                if (elapsed.Seconds > 2)
                {
                    _logger.Log(string.Format("{0} took {1} time", command.CommandText, elapsed)); // only log if it took more than 2 seconds
                }
                return result;
            }
            catch (Exception exception)
            {
                _logger.Log(exception);
                transaction.Rollback();
                throw;
            }
        }
    }
}
Imran Qadir Baksh - Baloch
(1319 rep)
Oct 29, 2014, 10:47 AM
• Last activity: Jan 27, 2023, 01:05 PM
0
votes
1
answers
466
views
Looking for file-based DB engine alternative to MS Access
My company has been chained to this MS Access '97 .MDB format we've had since the dawn of time. Our schema is hierarchical, with multiple one-to-many relationships. We are at a point where we are looking for alternatives, as the slowness and overall clunkiness of Access are beginning to take their toll on our productivity.
Our "modern" methods of accessing the database involve DAO.Net and heavy amounts of hash-based caching. The .NET System.Collections.Generic.Dictionary type has been a god-send here, because without it, I don't know how we would get our work done in a timely manner. We have multiple projects that each have a database file associated with it (sometimes multiple), and we tend to interact them one of two ways: either the DB is created by hand (using our in-house editor), or generated using a program which takes data we receive from another company, in some other format, and converts it to our format.
In both cases, our common .NET library loads the entire database into hash tables via Dictionary and resolves the object relationships with code by looking up values in the hash table by ID. When auto-generating the database, we use another set of hash tables to determine whether an object exists already in the cache before adding it. Once we are finished parsing the source data, we start a multi-threaded bulk insertion operation. We do all this because any other method of accessing the database is very slow.
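Schematically, the caching pattern described boils down to ID-keyed dictionaries resolving relationships in memory; a sketch with hypothetical types:

```cs
using System.Collections.Generic;

// Hypothetical entities mirroring a one-to-many parent/child relationship.
record Parent(int Id, string Name);
record Child(int Id, int ParentId, string Value);

class Cache
{
    public Dictionary<int, Parent> ParentsById = new();
    public Dictionary<int, Child> ChildrenById = new();

    // O(1) relationship resolution in code instead of a per-row DB query:
    public Parent ParentOf(Child c) => ParentsById[c.ParentId];
}
```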
I hope I've given enough context for my question: Is there a DB engine out there whose query speed can rival that of hash tables like the ones I am using? Memory and disk usage are no concern; these DBs only exist on developer machines, and we convert them to a different format for use with our software. I just want to get rid of my hash tables, but I don't want to sacrifice speed to do it.
Kate M
(3 rep)
Aug 24, 2022, 01:24 AM
• Last activity: Aug 24, 2022, 03:54 AM
2
votes
1
answers
2216
views
Why does my application have over 20,000 idle (sleeping) connections in SQL Server?
I have an analytics application running with .NET 6 in a Linux Kubernetes container. It receives HTTP messages and logs them. For my data layer, I'm using the Microsoft.Data.SqlClient v4.1.0 library and straight ADO.NET connections, like this:
var connString = ConnectionService.GetConnectionString();
using var conn = new SqlConnection(connString);
using var cmd = new SqlCommand(...sql..., conn);
cmd.Parameters.Add("@....", SqlDbType...).Value = ...;
try
{
    await cmd.Connection.OpenAsync();
    await cmd.ExecuteNonQueryAsync();
}
catch (Exception ex)
{
    Log.Error(ex);
    await cmd.Connection.CloseAsync();
    throw;
}
await cmd.Connection.CloseAsync();
The analytics service is averaging about 250 writes per second. It's currently writing to SQL Server 2017 (there is a plan to upgrade, but it hasn't happened yet).
Watching SQL Server Profiler, the writes take from 1 ms to 6 ms. Never anything long running. Checking for sleeping sessions with this query:
SELECT login_name,
COUNT(*) [sleeping_sessions]
FROM sys.dm_exec_sessions
WHERE database_id > 0
AND status = 'sleeping'
GROUP BY login_name
ORDER BY 2 DESC
As of this writing, there are 23,173 sleeping connections.
I ran a closed test on a dev server, and I can see that when the application opens a connection and closes it, the connection is left in a sleeping state. It only disappears when the application is stopped.
My understanding is that this is how ADO.NET "connection pools" work. However, I was under the impression there was a 100 connection limit, unless otherwise increased.
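For context, the 100-connection default is per pool, and a pool is keyed by the exact connection string plus identity; a sketch of making the cap explicit with the builder (values illustrative):

```cs
using Microsoft.Data.SqlClient;

var builder = new SqlConnectionStringBuilder(ConnectionService.GetConnectionString())
{
    Pooling = true,    // the default
    MaxPoolSize = 100  // also the default; the cap applies per distinct connection string + identity
};
using var conn = new SqlConnection(builder.ConnectionString);
conn.Open();
```

One consequence worth checking: if GetConnectionString() ever returns strings that differ in any way, each distinct string gets its own pool of up to 100 connections.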
I changed the SQL process to run synchronously:
try
{
    cmd.Connection.Open();
    cmd.ExecuteNonQuery();
}
catch (Exception ex)
{
    Log.Error(ex);
    cmd.Connection.Close();
    throw;
}
cmd.Connection.Close();
But this didn't change anything. Still 20k+ sleeping connections.
I'm taking advantage of a C# 8.0 feature called a using declaration, on both the connection and the command object. It's semantically equivalent to wrapping the code block in a using statement with braces.
I'm using a single connection string, identical for each call, and the same security context. The connection string is pulled from an app.config setting.
I tried an old-school using block + synchronous + Max Pool Size=100 in the connection string, but it had no impact. Still 20k+ sleeping sessions. It starts at 0 and keeps rising but eventually levels out.
Does anyone have any recommendations as to what steps I should take? SQL Server only allows a max of 32,767 connections, so I'm getting close to the danger zone.
Jake McGraw
(51 rep)
Jun 3, 2022, 03:29 PM
• Last activity: Jun 22, 2022, 04:56 PM
1
votes
0
answers
120
views
Can DST transitions cause "random" query timeouts?
I have encountered a query timeout ("Execution Timeout Expired") on a sub-second query on a connection set to 30 seconds timeouts. Client side logging indicated that 4 days elapsed between submitting the query and receiving the timeout. The client and the server were both running as part of the same...
I have encountered a query timeout ("Execution Timeout Expired") on a sub-second query on a connection set to a 30-second timeout. Client-side logging indicated that 4 days elapsed between submitting the query and receiving the timeout. The client and the server were both running as part of the same virtual machine.
The root cause was fairly evident: the virtual machine, having been suspended over several days, synchronized its system time via NTP and jumped 4 days ahead in time and the system time adjustment propagated to SQL Server just as it was executing the unfortunate query. However, this incident got us thinking.
**Can transitions to daylight saving time cause random query timeouts, by spuriously adding 1 hour to the measured execution time of the query?**
**Can other time zone changes cause random query timeouts?**
I am hoping that SQL Server would use UTC timestamps to detect query timeouts and thus be unaffected by time zone changes. But I am unable to find any documentation to this effect.
Jirka Hanika
(182 rep)
May 2, 2022, 09:04 AM
• Last activity: May 2, 2022, 04:28 PM
2
votes
1
answers
2830
views
Trying to get a large amount of data (column type is "text") from an MS SQL Server
I want to get about 300 MB of data (a text column) out of an MS SQL Server, but with the library I'm using I get an out-of-memory exception as soon as I try to access the field. Unfortunately, I'm bound to that library and cannot switch to something else.
I've tried getting chunks using SUBSTRING(), but that returns varchar and the max length is 8000, so getting 300 MB in chunks of 8 KB would take forever.
Is there any other way to do this? If I could get that 300 MB in 3x 100 MB chunks that would be fine; 100 MB doesn't seem to throw an exception. Maybe somehow split the data server-side?
MS SQL Server Version is 14.0.3436.1
Thanks in advance
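For comparison, plain ADO.NET can stream a large text column in arbitrary chunk sizes with CommandBehavior.SequentialAccess rather than materializing the whole value; a minimal sketch (table and column names are hypothetical):

```cs
using System.Data;
using System.Data.SqlClient;

using var conn = new SqlConnection(connectionString); // connectionString assumed defined
conn.Open();
using var cmd = new SqlCommand("SELECT Body FROM Docs WHERE Id = @id", conn); // hypothetical table
cmd.Parameters.AddWithValue("@id", 1);

// SequentialAccess makes the reader stream the column instead of buffering ~300 MB at once.
using var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
if (reader.Read())
{
    var buffer = new char[1 << 20]; // 1M characters per chunk; pick any size
    long offset = 0, read;
    while ((read = reader.GetChars(0, offset, buffer, 0, buffer.Length)) > 0)
    {
        // process buffer[0..read), e.g. append to a file
        offset += read;
    }
}
```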
BETSCH
(23 rep)
Apr 28, 2022, 10:06 AM
• Last activity: Apr 28, 2022, 11:24 AM
1
votes
2
answers
400
views
How can one authenticate with AzAD while using Read-SqlTableData to read AzSQL tables
How should one specify the User ID while creating the $sqlcc object when using Read-SqlTableData to read a table in AzSQL with an Azure account?
$sqlcc = new-object ('System.Data.SqlClient.SqlConnection') "Data Source=<server>.database.windows.net;User ID=<user>@<tenant>.onmicrosoft.com;Password=<password>"
$sc = new-object ('Microsoft.SqlServer.Management.Common.ServerConnection') $sqlcc
$srv = new-object ('Microsoft.SqlServer.Management.Smo.Server') $sc
$db = $srv.Databases["TestDB"]
$table = $db.Tables["TestTable"]
Read-SqlTableData -TopN 10 -InputObject $table
The code above is working for SQL ids but not for Azure ids. Specifying the User ID like <user>@<tenant>.onmicrosoft.com or <user>@<tenant> causes the $srv object to be blank.
Expecting a screenshot or screenclip GIF showing that the answer code works for an AzAD **user** id on AzSQL using Read-SqlTableData.
Ayan Mullick
(115 rep)
Mar 26, 2022, 02:58 AM
• Last activity: Apr 14, 2022, 01:06 PM
0
votes
0
answers
124
views
ADO.Net SqlBulkCopy, concurrency
I inherited a program that has to read a number of large jsonl zips, deserialize the json, normalize it out into a DataSet and then use SqlBulkCopy to save the results to a Sql Server database.
Originally, it would have X number of threads processing each file separately but the performance (time and memory consumption) was very jaggy. It would read, 50000 or 100000 objects from the file and transform them into the DataSet, and then it would wait for SqlBulkCopy to complete synchronously in a loop.
I wanted to introduce more parallelism both on the file reading and the db saving, so I added a db writer background thread. The file reading threads put 20000 json objects into DataSets and then put each DataSet into a BlockingCollection. The writer thread pulls the DataSets off and does the SqlBulkCopy calls (for some reason, maybe related to the issue I have a question about, the old code tried to randomize the order of the DataTables' SqlBulkCopy calls from the DataSets).
Anyway, most of the time the new parallelization works nicely to smooth things out but when I have a larger than usual pile of files (all the threads busy for longer), occasionally the number of main rows in Sql Server won't equal the total from the manifest. The two times it has happened, oddly, it's been increments of *10000* rows that go missing (not the 20000 objects in each batch, and not the random dregs from the last run)
I'm not seeing where my concurrency hole is here. Are multiple SqlBulkCopy operations to the same table from different threads in the app colliding?
This is the background dbwriter thread code I put in place
private static BlockingCollection<DataSet> datasetQueue = new BlockingCollection<DataSet>();

stopWriterToken = new CancellationTokenSource();
var token = stopWriterToken.Token;
writingThread = Task.Factory.StartNew(() =>
{
    long timeSpentDbWrites = 0;
    var writers = new List<Task>(); // throw each DataSet on its own thread for writing
    for (; ; )
    {
        try
        {
            writers.RemoveAll(w => w.IsCompleted); // thin the list of completed writes
            DataSet ds = null;
            if (datasetQueue.TryTake(out ds, -1, token))
                writers.Add(Task.Run(() =>
                {
                    var sw = Stopwatch.StartNew();
                    var r = new Random();
                    using (var sqlConn = new SqlConnection(ConnectionString))
                    {
                        sqlConn.Open();
                        foreach (var i in Enumerable.Range(0, ds.Tables.Count).OrderBy(x => r.Next()))
                        {
                            var table = ds.Tables[i];
                            if (table.Rows.Count > 0)
                            {
                                using (var sqlBulkCopy = new SqlBulkCopy(sqlConn)
                                {
                                    BulkCopyTimeout = Timeout,
                                    DestinationTableName = table.TableName,
                                    BatchSize = ToSqlBatchSize
                                })
                                {
                                    sqlBulkCopy.WriteToServer(table);
                                }
                                table.Clear();
                            }
                        }
                    }
                    Interlocked.Add(ref timeSpentDbWrites, sw.ElapsedTicks);
                }));
        }
        catch (Exception e)
        {
            Logger.Error($"Exception processing result sets in background thread: {e.Message}");
        }
        if ((datasetQueue.Count == 0 && datasetQueue.IsCompleted) || token.IsCancellationRequested)
            break;
    }
    Task.WaitAll(writers.ToArray()); // wait for all outstanding writers to finish
    Logger.Info($"Cumulative time on async DB Writes: {((timeSpentDbWrites * 1000) / Stopwatch.Frequency).ToString()} ms");
}, token);
user1664043
(379 rep)
Feb 18, 2022, 09:35 PM
• Last activity: Feb 18, 2022, 10:23 PM
7
votes
1
answers
15136
views
Are there other ways to select a dynamic list of columns?
I need to let my users specify the list of columns they want to select. So far I know two ways of accomplishing that.
**1. Using refcursors**
CREATE OR REPLACE FUNCTION selecttestwithcolumnlist(
    ticker character varying,
    columnlist character varying)
RETURNS refcursor AS
$BODY$
DECLARE
    ref1 refcursor;
BEGIN
    OPEN ref1 FOR EXECUTE
        'select ' || ColumnList || ' from Prices WHERE Ticker=$1;'
        USING Ticker;
    RETURN ref1;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
This function is very easy to invoke from my Ado.Net client. All I need to do is pass the parameters. However, if I want to test this function from pgAdmin, the result set is open on screen only if I keep my transaction open. This is inconvenient. Of course, it is easy to expose the data as an HTML table or an Excel spreadsheet, but this is kind of a minor inconvenience.
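For illustration, a minimal Npgsql sketch of that client-side call; as in pgAdmin, the cursor is only readable while the transaction stays open (connection string and parameter values are hypothetical):

```cs
using System;
using Npgsql;

using var conn = new NpgsqlConnection("Host=localhost;Database=quotes;Username=app;Password=..."); // illustrative
conn.Open();
using var tx = conn.BeginTransaction(); // the refcursor lives only inside this transaction

using var call = new NpgsqlCommand("SELECT selecttestwithcolumnlist(@ticker, @cols)", conn, tx);
call.Parameters.AddWithValue("ticker", "MSFT");            // hypothetical ticker
call.Parameters.AddWithValue("cols", "price,asofdate");
var cursorName = (string)call.ExecuteScalar();             // the function returns the cursor's name

using var fetch = new NpgsqlCommand($"FETCH ALL IN \"{cursorName}\"", conn, tx);
using var reader = fetch.ExecuteReader();
while (reader.Read())
    Console.WriteLine($"{reader[0]}, {reader[1]}");
tx.Commit();
```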
**2. Using setof records**
CREATE OR REPLACE FUNCTION SelectPrices(colList VARCHAR)
RETURNS SETOF record AS
$func$
BEGIN
    RETURN QUERY EXECUTE
        'SELECT ' || colList || ' FROM prices ORDER BY Ticker, ASOfDate';
END
$func$ LANGUAGE plpgsql;
Unfortunately, this complicates my client code. I cannot issue a simple SELECT like this:
SELECT price,AsOfdate,ticker FROM SelectPrices('price,AsOfdate,ticker') ;
I must explicitly provide the structure of my result set:
SELECT price,AsOfdate,ticker FROM SelectPrices('price,AsOfdate,ticker')
AS f(price NUMERIC,AsOfdate TIMESTAMP,ticker VARCHAR);
This is doable, but inconvenient.
Are there other ways to return dynamic column lists?
**Edit**: to protect against SQL injection, I typically split the comma-separated list and join it against a system view. Anything that is not an actual column name is not returned. I did not mention that originally, just to keep the question short.
A-K
(7444 rep)
Nov 8, 2013, 09:42 PM
• Last activity: Feb 10, 2022, 05:46 PM
2
votes
0
answers
83
views
Using ADONET SqlClient library how to access the plain-text value of an encrypted column in SQL Server?
In a SQL Server 2017 database running on my development PC, a column in a particular table is encrypted with a column encryption key.
CREATE TABLE Test(
    id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    patientid int,
    myencryptedcolumn varbinary(2000)
        ENCRYPTED WITH (ENCRYPTION_TYPE = DETERMINISTIC,
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256',
        COLUMN_ENCRYPTION_KEY = MYCEK) NOT NULL
)
In a stored procedure I'm able to decrypt the value using the column encryption key. But I'm also trying to test the ability of the ADO.NET SqlClient library to retrieve the plain-text value transparently, and am not having any success there: the encrypted value is always returned to the client program.
Following this Microsoft documentation I've granted the following permissions to [{My Dev PC Name}\{MyAccount}]:
> **VIEW ANY COLUMN MASTER KEY DEFINITION**
> **VIEW ANY COLUMN ENCRYPTION KEY DEFINITION**
But that user has **VIEW ANY DEFINITION** already, as confirmed in SSMS Security->Logins->Properties->Securables.
And then, using the ADO.NET SqlClient library, I open a connection to the database using Integrated Security while logged in to the dev PC as the account mentioned above, with the Column Encryption Setting=enabled tag in the connection string.
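For reference, the same connection setup expressed with the builder API (SqlConnectionStringBuilder.ColumnEncryptionSetting is part of the SqlClient library; server and database names are illustrative):

```cs
using System.Data.SqlClient; // also available in Microsoft.Data.SqlClient

var builder = new SqlConnectionStringBuilder
{
    DataSource = @".\SQL2017",   // illustrative
    InitialCatalog = "MyDb",     // illustrative
    IntegratedSecurity = true,
    // Equivalent to "Column Encryption Setting=enabled" in the raw string:
    ColumnEncryptionSetting = SqlConnectionColumnEncryptionSetting.Enabled
};
using var conn = new SqlConnection(builder.ConnectionString);
conn.Open();
```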
The SqlClient.SqlCommand object in the client app has these properties:
cmd.CommandText = "SELECT myencryptedcolumn FROM Test where id = 100";
cmd.CommandType = System.Data.CommandType.Text;
Multiple tuples are returned by the query and piped into a System.Data.DataTable:
var DA = new SqlDataAdapter();
var T = new System.Data.DataTable();
DA.SelectCommand = cmd;
DA.Fill(T);
string plaintext = BitConverter.ToString((byte[])T.Rows[0][0]); // get column value from the first row
But the variable plaintext always contains a hex string representing the encrypted value, not the plain-text value.
Unless I am misreading the Microsoft documentation, the ADO.NET client library should be able to decrypt myencryptedcolumn client-side, since the user executing the query (i.e. me, per the Integrated Security setting in the connection string) has the required permissions.
What am I overlooking? Am I supposed to be explicitly fetching the column encryption key and/or its definition and injecting that into some object in the ADO.NET client library? Is there another database permission involved?
Tim
(545 rep)
Dec 8, 2021, 04:31 PM
1
votes
1
answers
887
views
CREATE DATABASE permission denied in database 'master'. Cannot attach
In a T-SQL script, using a connection string like this:
Data Source=(LocalDb)\MSSQLLocalDB;AttachDbFilename=C:\Temp\ContextTest.mdf;Initial Catalog=ContextTest;Integrated Security=True
... I ...
1. create a SQL Server LocalDB database
2. create a login and add a user to the database I created
3. don't add any database roles to the user
4. create a table and grant SELECT, INSERT, UPDATE, DELETE, REFERENCES rights to the user
The script looks similar to this:
CREATE LOGIN [U] WITH PASSWORD = '***'
CREATE USER [U] FROM LOGIN [U]
CREATE TABLE [U].[TestEntities]
( Id INT PRIMARY KEY IDENTITY
, Name NVARCHAR(100) NOT NULL UNIQUE
)
GRANT INSERT, DELETE, SELECT, REFERENCES, UPDATE ON [U].[TestEntities] TO [U];
Next, I'm using SSMS to inspect everything, **using the credentials I just created**. When I connect in SSMS using the newly created user, I can easily query the [U].[TestEntities] table using a query like this:
SELECT COUNT(*) FROM U.TestEntities

Then, if I do the same in .NET, I get an error message claiming I'd require CREATE DATABASE rights:
using namespace System.Data.SqlClient
try {
    $con = [SqlConnection]::new('Data Source=(LocalDb)\MSSQLLocalDB;AttachDbFilename=C:\Temp\ContextTest.mdf;Initial Catalog=ContextTest;User ID=U;Password=***')
    $cmd = $con.CreateCommand()
    $cmd.CommandText = 'SELECT COUNT(*)
        FROM [U_MCL].[TestEntities] AS [t]'
    $con.Open()
    $cmd.ExecuteScalar()
} finally {
    $con.Dispose()
}
MethodInvocationException: C:\Temp\Test.ps1:9:2
Line |
9 | $con.Open()
| ~~~~~~~~~~~
| Exception calling "Open" with "0" argument(s): "CREATE DATABASE permission denied in database 'master'. Cannot attach the file
| 'C:\Temp\ContextTest.mdf' as database 'ContextTest'."
What am I missing?
AxD
(133 rep)
Oct 19, 2021, 11:25 PM
• Last activity: Oct 20, 2021, 01:49 AM
1
votes
1
answers
142
views
Significant performance differences executing alter and grant statements between users with same permissions
I am experiencing significant performance differences between users with the same permissions when executing alter and grant statements.
I have an automated build system that runs a series of SQL scripts for database changes - mostly alter table and alter procedure, with the relevant grant statements.
While trying to change some things in my build process (including running under a different user account), I noticed that the current account running the scripts executes them up to 100x faster than any other user - even users with exactly the same permissions, and belonging to the same Active Directory group. I've tested SQL logins and Windows Authentication, with both results being basically identical.
The build process is run through a C# script in all cases. The script properly opens the connection only once, executes all the statements, then closes. This happens the same for all users. The script itself is called by a batch file, which in turn is called by a CI tool.
The average execution time for the current build user running this script is 12ms per statement, while the average execution time for any other user is 275ms per statement.
What possible causes could be contributing to this performance difference?
vbnet3d
(111 rep)
Apr 28, 2021, 09:10 PM
• Last activity: May 6, 2021, 04:50 PM
0
votes
1
answers
2275
views
Exception when calling Postgresql function returning a local numeric from C#
I wrote a Postgresql function to insert a row and return the value generated from the identity column. I receive the following exception when trying to call it from C#.
Npgsql.PostgresException: '42601: query has no destination for result data'
I have looked around for an answer, and it seems ExecuteScalar has worked for others, but in all the examples I've seen, it's usually when using RETURN QUERY, not a local variable. What am I missing?
Here is the function:
CREATE OR REPLACE FUNCTION public.func_insert_item(_name character varying)
RETURNS BIGINT
LANGUAGE plpgsql
AS $function$
DECLARE
    _item_id BIGINT;
BEGIN
    INSERT INTO public.items
    (
        name
    )
    VALUES
    (
        _name
    )
    RETURNING _item_id;
    RETURN _item_id;
END;
$function$
Here is the C#:
static NpgsqlParameter CreateParameter(string name, ParameterDirection direction, string value)
{
    var parameter = new NpgsqlParameter(name, NpgsqlTypes.NpgsqlDbType.Varchar, value.Length);
    parameter.Direction = direction;
    parameter.Value = value;
    return parameter;
}

static void Main(string[] args)
{
    using var connection = new NpgsqlConnection(connectionString.ToString());
    connection.Open();
    using var command = new NpgsqlCommand("func_insert_item", connection);
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.Add(CreateParameter("_name", ParameterDirection.Input, name));
    object itemId = command.ExecuteScalar();
}
TheCodeFiend
(3 rep)
Feb 13, 2021, 04:43 AM
• Last activity: Feb 13, 2021, 06:04 AM
0
votes
1
answers
104
views
Does Sql Server caches parameterized anonymous blocks?
"Anonymous block" is a term from Oracle. But speaking of the thing for sql server ``` sql Declare .... Begin begin trans . . . commit End ``` If I send this SQL in parameterized form to the server via ADO.NET, will it be cached and reused or it will be recompiled every time?
"Anonymous block" is a term from Oracle. But speaking of the thing for sql server
sql
Declare
....
Begin
begin trans
. . .
commit
End
If I send this SQL in parameterized form to the server via ADO.NET, will it be cached and reused, or will it be recompiled every time?
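For concreteness, a sketch of such a parameterized ad-hoc batch sent from ADO.NET (table, column, and parameter names are hypothetical):

```cs
using System.Data;
using System.Data.SqlClient;

var connectionString = "..."; // placeholder
var batch = @"
BEGIN TRAN;
UPDATE dbo.Accounts SET Balance -= @amount WHERE Id = @from;
UPDATE dbo.Accounts SET Balance += @amount WHERE Id = @to;
COMMIT;";

using var conn = new SqlConnection(connectionString);
using var cmd = new SqlCommand(batch, conn);
cmd.Parameters.Add("@amount", SqlDbType.Money).Value = 10m;
cmd.Parameters.Add("@from", SqlDbType.Int).Value = 1;
cmd.Parameters.Add("@to", SqlDbType.Int).Value = 2;
conn.Open();
cmd.ExecuteNonQuery(); // sent as an RPC call to sp_executesql, i.e. in parameterized form
```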
T.S.
(216 rep)
Feb 8, 2021, 07:12 PM
• Last activity: Feb 8, 2021, 07:48 PM
6
votes
3
answers
1593
views
Is the ADO.NET Entity Framework known for generating poorly performing queries?
This question is inspired by the comment posted to the latest ServerFault blog post:
> Are you guys still using LINQ to SQL?
I know I can use SQL Server Profiler and/or the ToTraceString method to see the generated queries and analyze them myself. However, I am looking for opinions from **people with hands-on experience with administering databases accessed by applications utilizing the Entity Framework**.
Are Entity Framework queries a common cause of performance problems?
Can LINQ queries be optimized in such cases or is raw Transact-SQL the only solution?
Marek Grzenkowicz
(1495 rep)
Feb 15, 2011, 09:02 PM
• Last activity: Feb 4, 2021, 08:36 AM
0
votes
2
answers
878
views
Query producing radically different results running in ado.net vs SSMS
Top level: I have this query I'm running. It's been running for several weeks exactly as I expected, but the last couple of days I'm getting pretty random incorrect results when I run the query through ADO.Net but still getting the "correct" results when I run the exact same query in SSMS, and I am...
Top level: I have this query I'm running. It's been running for several weeks exactly as I expected, but the last couple of days I'm getting pretty random incorrect results when I run the query through ADO.Net but still getting the "correct" results when I run the exact same query in SSMS, and I am at a loss to see why.
I'm writing a process to harvest SEC filings from the Edgar site. Every night, the SEC publishes a small text file with the "light metadata" of the filings done on a given day (id for the company it's about, what type of filing it is, the id number of the filing). You can whip through these really fast and get the general idea of what's happening. It's the first stage of the project I wrote, and I could get the light metadata this way for 27 years or so of filings loaded in about an hour.
The thing that's really time consuming is fetching the whole filing and parsing that. As that side of the project has grown, every day I want to get the new data and then fetch and process a few extra full filings to backfill the filings I know about from the light meta data.
Another thing the process has to accommodate is that a company can submit a filing on Monday and upload a new version on Friday, or next week, or next year. So I have to recognize when I've gotten an update to a filing I've processed before.
In other words, people care about the more recent stuff, so I want to keep current and fill in the old stuff as time allows.
So I have the following query that, in high-level terms, asks "give me the top X filings that a) I haven't fully processed before, or b) got an update and need to be reprocessed".
FilingIndex is the table that gets the light metadata from the nightly file the SEC posts about the daily activity.
FilingCiks (CIK being a government company id) is a one-to-many table about the companies referenced by the filing (at least one of these ids is required to fetch the full text for processing).
FilingContent is my table where I keep the information from the full text I've fetched and parsed for a filing.
So every night I run the process and get the list of "what's new" for today; that goes into FilingIndex and FilingCiks.
Then I run this query, saying "in order of the newest filings I've heard about, get me what I need to call the SEC for the full text". @max is the number of filings I found in the prior step + some number of thousands to help backfill.
I expect to see a list of all the filings I just got from the previous step, followed by entries where my backfilling last left off. So the first 5000 rows are today's/last night's filings, the next 5000 may pick up somewhere in 2018 where I left off.
I've processed a couple years of filings this way and the results have been what I expected every time. But yesterday for no reason I can find, the query below started returning scattershot results when run through ADO.Net but still returns what I expect in SSMS.
Through ADO.Net, the results are still reverse chron, but a) they don't start with the newest, and b) they skip lightly through the existing rows.
Any ideas why a) ADO.Net and SSMS would run so differently and b) why the ADO.Net run would just start behaving differently in the last day or so?
As I've said, I've been running this process for a while. I've back and filled back to August 2018 and the query was running just as I expected. But now it's jumping around in the calendar.
DECLARE @max int = 5000;
SELECT TOP (@max) idx.accession_number, ciks.cik, cast(case when idx.fetched < idx.crawlerdate then 1 ELSE 0 END as bit) as cleanup, idx.CrawlerDate
FROM
(SELECT idx.accession_number, idx.crawlerdate, idx.filingType, ROW_NUMBER() OVER (PARTITION BY idx.accession_number ORDER BY idx.crawlerdate DESC) AS rn, content.fetched
FROM raw.FilingIndex idx
LEFT JOIN raw.FilingContent content ON content.accession_number = idx.accession_number
WHERE idx.accession_number < 999999999700000000 -- paper filings with no good data to mine
AND (content.accession_number is null OR content.Fetched < idx.crawlerdate)
) idx
CROSS APPLY (SELECT TOP 1 cik from raw.filingciks ciks WHERE ciks.accession_number = idx.accession_number AND ciks.crawlerdate = idx.crawlerdate) ciks
WHERE idx.rn = 1
ORDER BY idx.crawlerdate DESC, idx.Accession_Number asc
user1664043
(379 rep)
Jan 8, 2021, 10:54 PM
• Last activity: Jan 11, 2021, 10:31 PM