
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 votes
1 answers
3386 views
No connection possible after TrustServerCertificate=True => The certificate chain was issued by an authority that is not trusted
I have a machine which could connect to an MS SQL Server in the past. I have SSMS installed, and the connection was possible when I activated the TrustServerCertificate checkbox in SSMS. I could connect, see databases and tables, execute statements, etc. Later I tried to scaffold models with Entity Framework with the command:

    dotnet ef dbcontext scaffold "Server=xxx;Database=yyy;User Id=zzz;Password=abcd;TrustServerCertificate=true;" Microsoft.EntityFrameworkCore.SqlServer --table xxx.yyy

There was a prompt which said I could create or install (I can't remember which it was) a certificate. I accepted the prompt, and after that the connection is not possible anymore, even with SSMS. I think Entity Framework or the machine has created a certificate, and now the connection fails because the created certificate is not trusted by the SQL server. How do I get the connection working again? If I try to connect via SSMS I get the message:

> Cannot connect to xxx.
>
> ADDITIONAL INFORMATION:
>
> A connection was successfully established with the server, but then an error occurred during the login process. (provider: SSL Provider, error: 0 - The certificate chain was issued by an authority that is not trusted.) (Microsoft SQL Server, Error: -2146893019)
>
> For help, click: https://docs.microsoft.com/sql/relational-databases/errors-events/mssqlserver--2146893019-database-engine-error
>
> The certificate chain was issued by an authority that is not trusted

If I activate "trust server certificate" the following message is shown:

> Cannot connect to xxx.
>
> ADDITIONAL INFORMATION:
>
> Login failed for user 'zzz'. (Microsoft SQL Server, Error: 18456)
>
> For help, click: https://docs.microsoft.com/sql/relational-databases/errors-events/mssqlserver-18456-database-engine-error

How can I remove this certificate? Where is it stored on the machine? How can I connect again? I have no access to the remote SQL Server. I did not find any certificate in "computer certificates" or "user certificates" on the Windows machine which might indicate that it was the created certificate; there are so many installed certificates on this machine. Thanks in advance.
Sebastian Siemens (111 rep)
Aug 31, 2023, 10:53 AM • Last activity: May 9, 2025, 09:01 PM
1 votes
1 answers
630 views
Execution timeout exception in ASP.NET core
My problem is quite hard to define. We have a SQL Server (version 15.0.2104.1) stored procedure that fills our web site home page with data. When executed from SSMS it takes about 0.5 seconds to finish. Our web site uses ASP.NET Core (Entity Framework) to execute it. Now, the strange part: after creating the procedure it runs fine on our web site, but after a while (a few hours) it starts to hit timeout exceptions, while it still runs fine from SSMS. If I merely resubmit the procedure code (ALTER PROCEDURE) without changing anything in it, the procedure works fine again on our web site. Then after a few hours the problem returns. I have no clue what causes it, and since I cannot reproduce it in SSMS I have no tools to correct it. Thanks.
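The behavior described (fast in SSMS, slow from the app until an ALTER PROCEDURE flushes the plan) is commonly investigated by forcing a fresh compilation. A minimal sketch, assuming a hypothetical procedure name dbo.GetHomePageData:

```sql
-- Throw away the cached plan so the next call compiles a fresh one
EXEC sp_recompile N'dbo.GetHomePageData';

-- Or bypass the plan cache for a single diagnostic call
EXEC dbo.GetHomePageData WITH RECOMPILE;
```

If the recompiled call is fast, the cached plan (rather than the procedure logic) is the likely suspect.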
Eyal Hasson (25 rep)
Feb 6, 2024, 02:44 PM • Last activity: May 2, 2025, 12:04 AM
-1 votes
1 answers
385 views
Remove some auto created indices from Entity Framework?
I'm using Microsoft's Entity Framework. It automatically creates indices for every foreign key, but in many cases, I can see from the index stats that these are never used for reading - they are only ever updated (which doesn't surprise me as I doubt I am ever actually doing a join on many of them). My gut feeling is that I should remove these indexes as they are just slowing down updates (albeit probably not that significantly) and taking up space. I like to try to keep things clean and lean. Is there a good reason to keep them?
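The read-versus-write check described can be sketched against the DMVs; table and index names below (dbo.Orders, IX_Orders_CustomerId) are hypothetical:

```sql
-- Reads vs. writes per index since the last service restart
SELECT i.name,
       us.user_seeks + us.user_scans + us.user_lookups AS reads,
       us.user_updates AS writes
FROM sys.indexes AS i
JOIN sys.dm_db_index_usage_stats AS us
  ON us.object_id = i.object_id
 AND us.index_id  = i.index_id
WHERE us.database_id = DB_ID()
  AND i.object_id = OBJECT_ID('dbo.Orders');

-- If an index stays write-only over a representative period, drop it
DROP INDEX IX_Orders_CustomerId ON dbo.Orders;
```

Note that the stats reset on restart, so judge over a window that covers your real workload; with EF it is usually cleaner to remove the index in the model/migration so it is not recreated.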
wizzardmr42 (460 rep)
Jul 31, 2016, 05:34 PM • Last activity: Mar 12, 2025, 09:00 AM
0 votes
1 answers
495 views
Can we delete data using SQL job and insert data into the same table at the same time from another application
I have created a SQL job that executes a stored procedure to delete data from a table at a specific time. But if, at the same time, anyone tries to insert a record into that same table from another application through APIs, that request gets a timeout exception. The error is:

> Microsoft.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

Can I achieve both tasks at the same time without the error? Thanks in advance!
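Large single-statement deletes hold locks long enough to block concurrent inserts and can escalate to a table lock. Deleting in small batches is a common mitigation; a minimal sketch, assuming a hypothetical table dbo.Events with an EventDate column:

```sql
-- Delete in small batches so locks are released between iterations
-- and escalation to a table-level lock is less likely
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.Events
    WHERE EventDate < DATEADD(DAY, -30, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to delete
END;
```

Each batch commits separately, so concurrent inserts only ever wait a fraction of a second instead of the whole purge.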
Explorer (111 rep)
Jun 10, 2022, 09:16 PM • Last activity: Jan 2, 2025, 03:06 AM
-1 votes
1 answers
2706 views
Use the following business rules to create a Crow’s Foot ERD
Business-rule images: https://i.sstatic.net/M3IMd.png and https://i.sstatic.net/ICEy2.png

Is there any mistake in the ERD? It looks weird.
zz Z (1 rep)
Apr 13, 2020, 01:40 PM • Last activity: Oct 27, 2024, 07:31 PM
0 votes
1 answers
140 views
Why is this PostgreSQL query with LIMIT so slow?
I have a .NET WebApi with two entities (please forgive the silly names, it's just an example):

    public class Father
    {
        public int Id { get; set; }
        public string Name { get; set; } = null!;

        public int? ChildAId { get; set; }
        public int? ChildBId { get; set; }
        public int? ChildCId { get; set; }
        public int? ChildDId { get; set; }
        public int? ChildEId { get; set; }
        public int? ChildFId { get; set; }
        public int? ChildGId { get; set; }
        public int? ChildHId { get; set; }
        public int? ChildIId { get; set; }
        public int? ChildJId { get; set; }
        public int? ChildKId { get; set; }
        public int? ChildLId { get; set; }
        public int? ChildMId { get; set; }
        public int? ChildNId { get; set; }
        public int? ChildOId { get; set; }
        public int? ChildPId { get; set; }
        public int? ChildQId { get; set; }
        public int? ChildRId { get; set; }
        public int? ChildSId { get; set; }
        public int? ChildTId { get; set; }
        public int? ChildUId { get; set; }
        public int? ChildVId { get; set; }
        public int? ChildWId { get; set; }
        public int? ChildXId { get; set; }
        public int? ChildYId { get; set; }
        public int? ChildZId { get; set; }
        public int? Child0Id { get; set; }
        public int? Child1Id { get; set; }
        public int? Child2Id { get; set; }
        public int? Child3Id { get; set; }
        public int? Child4Id { get; set; }
        public int? Child5Id { get; set; }
        public int? Child6Id { get; set; }
        public int? Child7Id { get; set; }
        public int? Child8Id { get; set; }
        public int? Child9Id { get; set; }

        public Child? ChildA { get; set; }
        public Child? ChildB { get; set; }
        public Child? ChildC { get; set; }
        public Child? ChildD { get; set; }
        public Child? ChildE { get; set; }
        public Child? ChildF { get; set; }
        public Child? ChildG { get; set; }
        public Child? ChildH { get; set; }
        public Child? ChildI { get; set; }
        public Child? ChildJ { get; set; }
        public Child? ChildK { get; set; }
        public Child? ChildL { get; set; }
        public Child? ChildM { get; set; }
        public Child? ChildN { get; set; }
        public Child? ChildO { get; set; }
        public Child? ChildP { get; set; }
        public Child? ChildQ { get; set; }
        public Child? ChildR { get; set; }
        public Child? ChildS { get; set; }
        public Child? ChildT { get; set; }
        public Child? ChildU { get; set; }
        public Child? ChildV { get; set; }
        public Child? ChildW { get; set; }
        public Child? ChildX { get; set; }
        public Child? ChildY { get; set; }
        public Child? ChildZ { get; set; }
        public Child? Child0 { get; set; }
        public Child? Child1 { get; set; }
        public Child? Child2 { get; set; }
        public Child? Child3 { get; set; }
        public Child? Child4 { get; set; }
        public Child? Child5 { get; set; }
        public Child? Child6 { get; set; }
        public Child? Child7 { get; set; }
        public Child? Child8 { get; set; }
        public Child? Child9 { get; set; }
    }

    public class Child
    {
        public int Id { get; set; }
        public string Name { get; set; } = null!;
        public string Description { get; set; } = null!;
    }

I have inserted 100,000 fathers with their children from A to Z populated. Now I'm trying to recover them from the database using EF Core (but PgAdmin gives the same results).

20 FATHERS WITH LIMIT/OFFSET

    SELECT f."Id", f."Child0Id", f."Child1Id", f."Child2Id", f."Child3Id", f."Child4Id",
           f."Child5Id", f."Child6Id", f."Child7Id", f."Child8Id", f."Child9Id",
           f."ChildAId", f."ChildBId", f."ChildCId", f."ChildDId", f."ChildEId", f."ChildFId",
           f."ChildGId", f."ChildHId", f."ChildIId", f."ChildJId", f."ChildKId", f."ChildLId",
           f."ChildMId", f."ChildNId", f."ChildOId", f."ChildPId", f."ChildQId", f."ChildRId",
           f."ChildSId", f."ChildTId", f."ChildUId", f."ChildVId", f."ChildWId", f."ChildXId",
           f."ChildYId", f."ChildZId", f."Name"
    FROM "Father" AS f
    LIMIT 20 OFFSET 98000

Query completes in 130 ms.

20 FATHERS WITH ALL THEIR CHILDREN AND LIMIT/OFFSET

    SELECT t."Id", t."Child0Id", t."Child1Id", t."Child2Id", t."Child3Id", t."Child4Id",
           t."Child5Id", t."Child6Id", t."Child7Id", t."Child8Id", t."Child9Id",
           t."ChildAId", t."ChildBId", t."ChildCId", t."ChildDId", t."ChildEId", t."ChildFId",
           t."ChildGId", t."ChildHId", t."ChildIId", t."ChildJId", t."ChildKId", t."ChildLId",
           t."ChildMId", t."ChildNId", t."ChildOId", t."ChildPId", t."ChildQId", t."ChildRId",
           t."ChildSId", t."ChildTId", t."ChildUId", t."ChildVId", t."ChildWId", t."ChildXId",
           t."ChildYId", t."ChildZId", t."Name",
           c."Id", c."Description", c."Name", c0."Id", c0."Description", c0."Name", c1."Id", c1."Description", c1."Name",
           c2."Id", c2."Description", c2."Name", c3."Id", c3."Description", c3."Name", c4."Id", c4."Description", c4."Name",
           c5."Id", c5."Description", c5."Name", c6."Id", c6."Description", c6."Name", c7."Id", c7."Description", c7."Name",
           c8."Id", c8."Description", c8."Name", c9."Id", c9."Description", c9."Name", c10."Id", c10."Description", c10."Name",
           c11."Id", c11."Description", c11."Name", c12."Id", c12."Description", c12."Name", c13."Id", c13."Description", c13."Name",
           c14."Id", c14."Description", c14."Name", c15."Id", c15."Description", c15."Name", c16."Id", c16."Description", c16."Name",
           c17."Id", c17."Description", c17."Name", c18."Id", c18."Description", c18."Name", c19."Id", c19."Description", c19."Name",
           c20."Id", c20."Description", c20."Name", c21."Id", c21."Description", c21."Name", c22."Id", c22."Description", c22."Name",
           c23."Id", c23."Description", c23."Name", c24."Id", c24."Description", c24."Name", c25."Id", c25."Description", c25."Name",
           c26."Id", c26."Description", c26."Name", c27."Id", c27."Description", c27."Name", c28."Id", c28."Description", c28."Name",
           c29."Id", c29."Description", c29."Name", c30."Id", c30."Description", c30."Name", c31."Id", c31."Description", c31."Name",
           c32."Id", c32."Description", c32."Name", c33."Id", c33."Description", c33."Name", c34."Id", c34."Description", c34."Name"
    FROM (
        SELECT f."Id", f."Child0Id", f."Child1Id", f."Child2Id", f."Child3Id", f."Child4Id",
               f."Child5Id", f."Child6Id", f."Child7Id", f."Child8Id", f."Child9Id",
               f."ChildAId", f."ChildBId", f."ChildCId", f."ChildDId", f."ChildEId", f."ChildFId",
               f."ChildGId", f."ChildHId", f."ChildIId", f."ChildJId", f."ChildKId", f."ChildLId",
               f."ChildMId", f."ChildNId", f."ChildOId", f."ChildPId", f."ChildQId", f."ChildRId",
               f."ChildSId", f."ChildTId", f."ChildUId", f."ChildVId", f."ChildWId", f."ChildXId",
               f."ChildYId", f."ChildZId", f."Name"
        FROM "Father" AS f
        LIMIT 20 OFFSET 98000
    ) AS t
    LEFT JOIN "Child" AS c ON t."ChildAId" = c."Id"
    LEFT JOIN "Child" AS c0 ON t."ChildBId" = c0."Id"
    LEFT JOIN "Child" AS c1 ON t."ChildCId" = c1."Id"
    LEFT JOIN "Child" AS c2 ON t."ChildDId" = c2."Id"
    LEFT JOIN "Child" AS c3 ON t."ChildEId" = c3."Id"
    LEFT JOIN "Child" AS c4 ON t."ChildFId" = c4."Id"
    LEFT JOIN "Child" AS c5 ON t."ChildGId" = c5."Id"
    LEFT JOIN "Child" AS c6 ON t."ChildHId" = c6."Id"
    LEFT JOIN "Child" AS c7 ON t."ChildIId" = c7."Id"
    LEFT JOIN "Child" AS c8 ON t."ChildJId" = c8."Id"
    LEFT JOIN "Child" AS c9 ON t."ChildKId" = c9."Id"
    LEFT JOIN "Child" AS c10 ON t."ChildLId" = c10."Id"
    LEFT JOIN "Child" AS c11 ON t."ChildMId" = c11."Id"
    LEFT JOIN "Child" AS c12 ON t."ChildNId" = c12."Id"
    LEFT JOIN "Child" AS c13 ON t."ChildOId" = c13."Id"
    LEFT JOIN "Child" AS c14 ON t."ChildPId" = c14."Id"
    LEFT JOIN "Child" AS c15 ON t."ChildQId" = c15."Id"
    LEFT JOIN "Child" AS c16 ON t."ChildRId" = c16."Id"
    LEFT JOIN "Child" AS c17 ON t."ChildSId" = c17."Id"
    LEFT JOIN "Child" AS c18 ON t."ChildTId" = c18."Id"
    LEFT JOIN "Child" AS c19 ON t."ChildUId" = c19."Id"
    LEFT JOIN "Child" AS c20 ON t."ChildVId" = c20."Id"
    LEFT JOIN "Child" AS c21 ON t."ChildWId" = c21."Id"
    LEFT JOIN "Child" AS c22 ON t."ChildXId" = c22."Id"
    LEFT JOIN "Child" AS c23 ON t."ChildYId" = c23."Id"
    LEFT JOIN "Child" AS c24 ON t."ChildZId" = c24."Id"
    LEFT JOIN "Child" AS c25 ON t."Child0Id" = c25."Id"
    LEFT JOIN "Child" AS c26 ON t."Child1Id" = c26."Id"
    LEFT JOIN "Child" AS c27 ON t."Child2Id" = c27."Id"
    LEFT JOIN "Child" AS c28 ON t."Child3Id" = c28."Id"
    LEFT JOIN "Child" AS c29 ON t."Child4Id" = c29."Id"
    LEFT JOIN "Child" AS c30 ON t."Child5Id" = c30."Id"
    LEFT JOIN "Child" AS c31 ON t."Child6Id" = c31."Id"
    LEFT JOIN "Child" AS c32 ON t."Child7Id" = c32."Id"
    LEFT JOIN "Child" AS c33 ON t."Child8Id" = c33."Id"
    LEFT JOIN "Child" AS c34 ON t."Child9Id" = c34."Id"

Query completes in 300 ms.

SINGLE FATHER BY ID (WITH LIMIT SET BY SINGLEORDEFAULT)

    SELECT f."Id", f."Child0Id", f."Child1Id", f."Child2Id", f."Child3Id", f."Child4Id",
           f."Child5Id", f."Child6Id", f."Child7Id", f."Child8Id", f."Child9Id",
           f."ChildAId", f."ChildBId", f."ChildCId", f."ChildDId", f."ChildEId", f."ChildFId",
           f."ChildGId", f."ChildHId", f."ChildIId", f."ChildJId", f."ChildKId", f."ChildLId",
           f."ChildMId", f."ChildNId", f."ChildOId", f."ChildPId", f."ChildQId", f."ChildRId",
           f."ChildSId", f."ChildTId", f."ChildUId", f."ChildVId", f."ChildWId", f."ChildXId",
           f."ChildYId", f."ChildZId", f."Name"
    FROM "Father" AS f
    WHERE f."Id" = 1
    LIMIT 2

Query completes in 115 ms.

NOW COMES THE PROBLEM: SINGLE FATHER WITH ALL CHILDREN BY ID (WITH LIMIT SET BY SINGLEORDEFAULT)

    SELECT f."Id", f."Child0Id", f."Child1Id", f."Child2Id", f."Child3Id", f."Child4Id",
           f."Child5Id", f."Child6Id", f."Child7Id", f."Child8Id", f."Child9Id",
           f."ChildAId", f."ChildBId", f."ChildCId", f."ChildDId", f."ChildEId", f."ChildFId",
           f."ChildGId", f."ChildHId", f."ChildIId", f."ChildJId", f."ChildKId", f."ChildLId",
           f."ChildMId", f."ChildNId", f."ChildOId", f."ChildPId", f."ChildQId", f."ChildRId",
           f."ChildSId", f."ChildTId", f."ChildUId", f."ChildVId", f."ChildWId", f."ChildXId",
           f."ChildYId", f."ChildZId", f."Name",
           c."Id", c."Description", c."Name", c0."Id", c0."Description", c0."Name", c1."Id", c1."Description", c1."Name",
           c2."Id", c2."Description", c2."Name", c3."Id", c3."Description", c3."Name", c4."Id", c4."Description", c4."Name",
           c5."Id", c5."Description", c5."Name", c6."Id", c6."Description", c6."Name", c7."Id", c7."Description", c7."Name",
           c8."Id", c8."Description", c8."Name", c9."Id", c9."Description", c9."Name", c10."Id", c10."Description", c10."Name",
           c11."Id", c11."Description", c11."Name", c12."Id", c12."Description", c12."Name", c13."Id", c13."Description", c13."Name",
           c14."Id", c14."Description", c14."Name", c15."Id", c15."Description", c15."Name", c16."Id", c16."Description", c16."Name",
           c17."Id", c17."Description", c17."Name", c18."Id", c18."Description", c18."Name", c19."Id", c19."Description", c19."Name",
           c20."Id", c20."Description", c20."Name", c21."Id", c21."Description", c21."Name", c22."Id", c22."Description", c22."Name",
           c23."Id", c23."Description", c23."Name", c24."Id", c24."Description", c24."Name", c25."Id", c25."Description", c25."Name",
           c26."Id", c26."Description", c26."Name", c27."Id", c27."Description", c27."Name", c28."Id", c28."Description", c28."Name",
           c29."Id", c29."Description", c29."Name", c30."Id", c30."Description", c30."Name", c31."Id", c31."Description", c31."Name",
           c32."Id", c32."Description", c32."Name", c33."Id", c33."Description", c33."Name", c34."Id", c34."Description", c34."Name"
    FROM "Father" AS f
    LEFT JOIN "Child" AS c ON f."ChildAId" = c."Id"
    LEFT JOIN "Child" AS c0 ON f."ChildBId" = c0."Id"
    LEFT JOIN "Child" AS c1 ON f."ChildCId" = c1."Id"
    LEFT JOIN "Child" AS c2 ON f."ChildDId" = c2."Id"
    LEFT JOIN "Child" AS c3 ON f."ChildEId" = c3."Id"
    LEFT JOIN "Child" AS c4 ON f."ChildFId" = c4."Id"
    LEFT JOIN "Child" AS c5 ON f."ChildGId" = c5."Id"
    LEFT JOIN "Child" AS c6 ON f."ChildHId" = c6."Id"
    LEFT JOIN "Child" AS c7 ON f."ChildIId" = c7."Id"
    LEFT JOIN "Child" AS c8 ON f."ChildJId" = c8."Id"
    LEFT JOIN "Child" AS c9 ON f."ChildKId" = c9."Id"
    LEFT JOIN "Child" AS c10 ON f."ChildLId" = c10."Id"
    LEFT JOIN "Child" AS c11 ON f."ChildMId" = c11."Id"
    LEFT JOIN "Child" AS c12 ON f."ChildNId" = c12."Id"
    LEFT JOIN "Child" AS c13 ON f."ChildOId" = c13."Id"
    LEFT JOIN "Child" AS c14 ON f."ChildPId" = c14."Id"
    LEFT JOIN "Child" AS c15 ON f."ChildQId" = c15."Id"
    LEFT JOIN "Child" AS c16 ON f."ChildRId" = c16."Id"
    LEFT JOIN "Child" AS c17 ON f."ChildSId" = c17."Id"
    LEFT JOIN "Child" AS c18 ON f."ChildTId" = c18."Id"
    LEFT JOIN "Child" AS c19 ON f."ChildUId" = c19."Id"
    LEFT JOIN "Child" AS c20 ON f."ChildVId" = c20."Id"
    LEFT JOIN "Child" AS c21 ON f."ChildWId" = c21."Id"
    LEFT JOIN "Child" AS c22 ON f."ChildXId" = c22."Id"
    LEFT JOIN "Child" AS c23 ON f."ChildYId" = c23."Id"
    LEFT JOIN "Child" AS c24 ON f."ChildZId" = c24."Id"
    LEFT JOIN "Child" AS c25 ON f."Child0Id" = c25."Id"
    LEFT JOIN "Child" AS c26 ON f."Child1Id" = c26."Id"
    LEFT JOIN "Child" AS c27 ON f."Child2Id" = c27."Id"
    LEFT JOIN "Child" AS c28 ON f."Child3Id" = c28."Id"
    LEFT JOIN "Child" AS c29 ON f."Child4Id" = c29."Id"
    LEFT JOIN "Child" AS c30 ON f."Child5Id" = c30."Id"
    LEFT JOIN "Child" AS c31 ON f."Child6Id" = c31."Id"
    LEFT JOIN "Child" AS c32 ON f."Child7Id" = c32."Id"
    LEFT JOIN "Child" AS c33 ON f."Child8Id" = c33."Id"
    LEFT JOIN "Child" AS c34 ON f."Child9Id" = c34."Id"
    WHERE f."Id" = 1
    LIMIT 2

Query completes in MORE THAN TWO SECONDS! 2300 ms.

Can someone explain why, and how to optimize this query? I have indexes on all PKs and FKs.
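To see where the two seconds actually go, PostgreSQL's EXPLAIN with timing and buffer counters is the usual first step. A minimal sketch against the tables above (the column list is abbreviated here purely for illustration):

```sql
-- Show the actual plan, per-node timings, row counts, and buffer activity
EXPLAIN (ANALYZE, BUFFERS)
SELECT f."Id", f."Name", c."Id", c."Description", c."Name"
FROM "Father" AS f
LEFT JOIN "Child" AS c ON f."ChildAId" = c."Id"
WHERE f."Id" = 1
LIMIT 2;
```

Comparing the plan of the fast LIMIT/OFFSET variant with the slow single-row variant will show whether the planner switched join strategies (e.g. to hash joins that scan "Child") once the 35 joins were applied directly to "Father" instead of to a 20-row subquery.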
Alessandro (103 rep)
Mar 21, 2024, 11:39 AM • Last activity: Mar 21, 2024, 06:55 PM
0 votes
0 answers
343 views
Insert data into Always Encrypted column using stored procedure in MSSQL
I am facing an issue while trying to insert data into a SQL table which has encrypted columns. Following is my approach. I have created a table with three columns: **[ID], [Name], [Password]**. The **[Name]** and **[Password]** columns are encrypted with **COLLATE Latin1_General_BIN2 ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [Test_CEK], ENCRYPTION_TYPE = Deterministic, ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')**. - I have set "**Column Encryption Setting = Enabled**" - I have set "**Parameterization for Always Encrypted**" And I have created a procedure to insert records into the Test table as follows:
USE [TestDB]
GO

CREATE PROCEDURE InsertTest
    @name varchar(50),
    @pwd varchar(max)
AS
BEGIN
    INSERT INTO [dbo].[Test] ([Name], [Password]) VALUES (@name, @pwd)
END
GO
When I try to insert a record using the above procedure I get the error shown below.

> Msg 206, Level 16, State 2, Procedure InsertTest, Line 0 [Batch Start Line 0] Operand type clash: varchar is incompatible with varchar(1) encrypted with (encryption_type = 'DETERMINISTIC', encryption_algorithm_name = 'AEAD_AES_256_CBC_HMAC_SHA_256', column_encryption_key_name = 'Test_CEK', column_encryption_key_database_name = 'TestDB') collation_name = 'SQL_Latin1_General_CP1_CI_AS'

I want to insert data into the SQL table through a stored procedure. I do not want to use ADO.NET for the SQL connection; I want to implement this using Entity Framework.
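The "Operand type clash" typically means the values reach the server as plaintext varchar rather than being encrypted client-side. With "Parameterization for Always Encrypted" enabled, SSMS converts declared variables into parameters that the provider encrypts before sending; a minimal sketch of calling the procedure that way (sample values only):

```sql
-- Run on a connection with Column Encryption Setting = Enabled and
-- Parameterization for Always Encrypted turned on; the client driver
-- then encrypts these variables before they reach the server
DECLARE @name varchar(50)  = 'Alice';
DECLARE @pwd  varchar(max) = 'S3cret!';

EXEC dbo.InsertTest @name = @name, @pwd = @pwd;
```

The same principle applies from Entity Framework: pass the values as parameters on a connection string containing Column Encryption Setting=Enabled, never as inline literals.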
Prathap r (1 rep)
Jan 22, 2024, 09:34 AM • Last activity: Jan 22, 2024, 11:49 AM
13 votes
2 answers
32348 views
Why do async_network_io wait types occur?
Last week something strange happened on our database. All of a sudden, the application blocked for our users, who were not able to save new entities etc. After looking at the Activity Monitor of the SQL Server (2008 with compatibility mode 2005) I saw the following three entries:

[screenshot: three sessions showing ASYNC_NETWORK_IO wait types]

After some time, the users got a connection timeout. When I killed process 64, they could save normally again. The problem is that the entities they tried to save during the block were inserted into the DB more than once (up to 3 times), even though there is code which should prevent this from happening (a number column which has to be unique, but without a constraint... the check happens in the code). We use Entity Framework 6.0.

- Does anyone know why and when these ASYNC_NETWORK_IO wait types occur and how to avoid them?
- And what exactly do they mean?
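For context, ASYNC_NETWORK_IO accumulates while the server waits for the client to consume result rows it has already produced, so it usually points at the application side (slow row-by-row processing or a blocked client thread) rather than the database. A quick way to spot sessions currently stuck in that state:

```sql
-- Sessions currently waiting on the client to fetch results
SELECT session_id, wait_type, wait_time, last_wait_type
FROM sys.dm_exec_requests
WHERE wait_type = 'ASYNC_NETWORK_IO';
```

A long-lived wait here with a growing wait_time is a strong hint that the client opened a reader and stopped draining it.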
xeraphim (597 rep)
Jul 20, 2015, 08:54 AM • Last activity: Oct 3, 2023, 11:03 AM
1 votes
2 answers
950 views
Is high logical read terrible for performance? AFAIK this is reading from memory, which is fast
I have a database query that seems to have abnormally high logical reads (refer below). I am not a DBA, but from the plan it seems pretty optimized to me. There are 2 index scans, but the cost isn't that bad IMHO, just **14%** and **4%** respectively. I am **more interested in addressing the seemingly abnormally high logical reads** (relative to the other queries).

**The execution plan for this query**, pasted into Paste The Plan: https://www.brentozar.com/pastetheplan/?id=BJj8s6L1a

[screenshot: query stats showing high logical reads]

This is the query generated by Entity Framework.
/*
This query text was retrieved from showplan XML, and may be truncated.
*/

SELECT [a].[Id], [a].[UserName], [a].[Email], [a].[PhoneNumber], [a].[FullName], [a].[StaffNo], [a].[Division], [a].[Department], [a].[Section], [a].[ModifiedDate], [a].[UserType], [a].[UserRegistrationStatus], [a].[Company], [a].[Remark]
FROM [AspNetUsers] AS [a]
WHERE ((([a].[IsActive] = CAST(1 AS bit)) AND ([a].[IsDeleted] = CAST(0 AS bit))) AND EXISTS (
    SELECT 1
    FROM [AspNetUserClaims] AS [a0]
    WHERE ([a].[Id] = [a0].[UserId]) AND (([a0].[ClaimType] = 'tenantId') AND ([a0].[ClaimValue] = @__currentTenantId_0)))) AND EXISTS (
    SELECT 1
    FROM [UserAuthorizedArea] AS [u]
    INNER JOIN [AspNetRoles] AS [a1] ON [u].[RoleId] = [a1].[Id]
    WHERE ((([u].[IsDeleted] = CAST(0 AS bit)) AND ([u].[TenantId] = @__ef_filter__CurrentTenantId_0)) AND ([a].[Id] = [u].[UserId])) AND ([a1].[RoleType] = @__request_SearchUserModel_RoleName_1))
ORDER BY [a].[Id]
OFFSET @__p_2 ROWS FETCH NEXT @__p_3 ROWS ONLY
Any idea **how this query can be optimized further** to reduce the logical reads?

**Number of executions**: 1,883
**Total duration (ms)**: 809,714
**Average query duration**: 809,714 / 1,883 = 430 ms

[screenshot: more stats]
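High logical reads with correlated EXISTS probes often come from the claim lookup scanning rather than seeking. If AspNetUserClaims is only indexed on its key, a covering index matching the probe is one thing to test; a hedged sketch (index name is an assumption, and ClaimValue is included rather than keyed because it is nvarchar(max) in the default Identity schema):

```sql
-- Lets the per-user tenantId claim probe seek instead of scan;
-- verify against the actual plan before and after creating it
CREATE INDEX IX_AspNetUserClaims_UserId_ClaimType
ON [AspNetUserClaims] ([UserId], [ClaimType])
INCLUDE ([ClaimValue]);
```

Whether this helps depends on which operator in the pasted plan actually produces the reads, so check the plan's per-operator actual rows first.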
Rosdi Kasim (111 rep)
Sep 19, 2023, 03:12 AM • Last activity: Sep 19, 2023, 01:43 PM
0 votes
1 answers
307 views
EF Queries Using Read Commited Though snapshot_isolation_state and is_read_committed_snapshot_on are ON
I am running this query:

    SELECT name, snapshot_isolation_state, is_read_committed_snapshot_on
    FROM sys.databases s

    name    snapshot_isolation_state  is_read_committed_snapshot_on
    master  1                         0
    tempdb  0                         0
    model   0                         0
    msdb    1                         0
    MyDB1   1                         1
    MyDB2   1                         1

MyDB1 and MyDB2 have snapshot enabled, but all my Entity Framework queries are still using:

    select transaction_isolation_level from sys.dm_exec_requests

All sessions report transaction_isolation_level = 2, which means Read Committed. What can I do?
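Worth noting: with READ_COMMITTED_SNAPSHOT on, sessions still report isolation level 2, because row versioning is a property of the database's read-committed behavior, not a separate session level. Explicit SNAPSHOT isolation has to be requested per transaction; a minimal sketch (table name is hypothetical):

```sql
-- Opt a transaction into snapshot isolation explicitly
-- (requires ALLOW_SNAPSHOT_ISOLATION ON, which the output above shows)
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.MyTable;  -- reads a consistent snapshot
COMMIT;
```

So level 2 from EF may already be versioned reads under RCSI; only if you need full snapshot semantics per transaction do you set the level explicitly (in EF, via the IsolationLevel argument when beginning a transaction).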
Imran Qadir Baksh - Baloch (1319 rep)
Aug 10, 2023, 06:44 AM • Last activity: Aug 10, 2023, 11:58 AM
0 votes
1 answers
146 views
Maintain changes in database via generic approach triggers into history table
Our main system database has temporal tables. These have helped us hugely in working out issues. However, a drawback is that doing a lot of updates at once can lead to an error about not being able to update the temporal table because the time is earlier than the period start time for the affected records. I get this error when updating the database through Entity Framework Core.

I need a solution where I can get a temporal-table-like audit, but where any issue never gets back to the system and stops a transaction from happening. Can anyone suggest a solution? Has anyone tried a trigger-based fix? This is using Azure SQL.

In a perfect world our replica database would get the changes and then write to a temporal table, but I'm assuming that's not an option, as I would assume the whole structure of the source and replicated databases has to be identical.

Paul
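A minimal sketch of the trigger-based audit mentioned, assuming a hypothetical table dbo.Customer and a plain (non-temporal) history table, so no period-start validation can ever fail the transaction:

```sql
CREATE TABLE dbo.CustomerHistory (
    Id        int           NOT NULL,
    Name      nvarchar(100) NOT NULL,
    ChangedAt datetime2     NOT NULL DEFAULT SYSUTCDATETIME(),
    Operation char(1)       NOT NULL  -- 'U' = update, 'D' = delete
);
GO
CREATE TRIGGER trg_Customer_Audit
ON dbo.Customer
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- "deleted" holds the pre-change rows; appending them to a plain
    -- table has no temporal period constraints to violate
    INSERT INTO dbo.CustomerHistory (Id, Name, Operation)
    SELECT d.Id, d.Name,
           CASE WHEN EXISTS (SELECT 1 FROM inserted) THEN 'U' ELSE 'D' END
    FROM deleted AS d;
END;
```

The trade-off is that the trigger runs inside the same transaction, so a failure in the insert would still roll it back; keeping the history table schema trivial (no constraints beyond NOT NULL) minimizes that risk.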
PaulInUk2017 (1 rep)
Jul 12, 2023, 11:36 PM • Last activity: Jul 13, 2023, 01:48 AM
3 votes
1 answers
1320 views
Debugging slow SQL Server performance with SQL Profiler
I have been debugging an issue in our production environment (on-premises) where queries perform slowly even though the columns are indexed. The table has roughly 200 million rows, is not partitioned, and is in the full recovery model. The DELETEs affect only 1 row and the INSERTs are only 1 row too. I can see the duration in SQL Profiler, but it occurs during Audit Logout, which I don't understand. For example, the DELETE at the top of the picture seems to take 0 ms to run, but the Audit Logout takes 2867 ms and also has a huge number of reads. I have monitored the DB with PerfMon but cannot see any related I/O or CPU issues. Does it make sense that a DELETE or INSERT on such a table could take seconds? Would anyone have an idea what is causing the latency?

[screenshot: SQL Profiler trace]
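One caveat when reading such traces: Audit Logout reports cumulative duration and reads for the whole connection, not for the last statement, so per-statement timing is better captured another way. A sketch using Extended Events (the session name and threshold are assumptions):

```sql
-- Capture individual statements that take longer than 0.5 s
CREATE EVENT SESSION [slow_dml] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text)
    WHERE duration > 500000  -- microseconds
)
ADD TARGET package0.event_file (SET filename = N'slow_dml');
GO
ALTER EVENT SESSION [slow_dml] ON SERVER STATE = START;
```

If no statement ever crosses the threshold, the seconds are being spent outside statement execution (for example connection pooling resets or client-side waits), which matches the Audit Logout pattern.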
Joni (33 rep)
Apr 2, 2023, 07:35 AM • Last activity: Apr 2, 2023, 01:03 PM
2 votes
1 answers
211 views
Possible overhead of adding columns in a table without index in SQL Server
My question relates to https://dba.stackexchange.com/questions/315358/performance-considerations-when-adding-a-new-column-to-a-big-table-in-sql-server, but I still think there are a couple of differences, so I'll ask anyway.

We've got a relatively large table (20M records, ~20 columns), which is the most frequently used table in our DB. We've got an index on that table which, judging by its name, was proposed to us by the Azure advisor, and it has all the columns in this table as "included columns". Now we need to add 2 more columns to this table. It is not planned to use them in the results we get from our most frequent query (that's the one our god-index is for). These columns will be nullable and contain 2 additional IDs which could be used to uniquely identify a record in this table.

I would like to know whether our ideas are correct or not. For reference, we use Azure SQL and our DB is on the P2 tier.

1. Does adding a NULL-able column to such a big table cause a noticeable DB-load spike by itself? My guess is it shouldn't.
2. Does creating an index on such a column cause a noticeable DB-load spike? My guess is it shouldn't either.
3. Would including these 2 columns in our god-index cause a DB-load spike? I guess it would, because this will make the index rebuild?
4. Since we are using Entity Framework, which automatically queries all the properties of the entity, would it decrease performance if we do not include these 2 columns in the index but they are queried as part of our most frequent query?
5. Might creating a new 1-to-1 table be a good alternative, which would allow us to keep existing things as they are and use the new columns only in the cases where we need them?
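Regarding point 1: adding a nullable column with no default is generally a metadata-only change in SQL Server, so existing rows are not rewritten. A sketch with hypothetical names:

```sql
-- Metadata-only: NULLable, no default, so no row rewrite
ALTER TABLE dbo.BigTable
    ADD ExternalId1 uniqueidentifier NULL,
        ExternalId2 uniqueidentifier NULL;

-- Indexing afterwards does scan the table once, but can run online
CREATE INDEX IX_BigTable_ExternalId1
    ON dbo.BigTable (ExternalId1)
    WITH (ONLINE = ON);
```

Point 3 is the expensive one: adding columns to the existing index's INCLUDE list means rebuilding that index, which for an index that already includes every column is effectively rewriting a second copy of the table.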
Andrey Stukalin (121 rep)
Feb 16, 2023, 03:17 PM • Last activity: Feb 18, 2023, 09:37 AM
1 votes
1 answers
790 views
Adding soft delete to a database after having used hard delete
**Introduction**

My app collects data from a centralized source where many different users can submit data about their organisation and their staff. Previously, we just hard deleted a user's data when they were no longer relevant in the source of truth, because it used to be reliable. But a change to some software the clients use messes with everything: they now DELETE all their data multiple times per month when they submit data. This is by mistake and due to a terrible design, and it means they lose the data for those users in our system and have to re-enter parts of it. The makers of the software are stubborn and won't change the behaviour, and we have tried educating the users about how to use it, but they don't learn. So the last option is to soft delete the data for a certain time period.

Having looked at multiple Stack Overflow posts and blogs around the web, I don't really fancy any of the options, i.e. adding a column to each table that needs soft deletion. That was my first instinct as well, but I don't really like it or its implications. I was wondering if you could give me some feedback on a different idea. I have no experience with maintaining soft deletion and I don't know if my idea is terrible.

**Diagram and relations**

*(Simple diagram showing some of the relations omitted.)*

There is a user; their unique identifier is the same across multiple orgs. Per user affiliation with an org, they have some user information like name, title etc. In our system they have one status row, because it is the same in our app no matter which org they connect as. So if I follow the conventional way of adding columns for soft deleting, I would have to add one to each of the tables that contain user data, because their affiliation to a certain org might be deleted while, as a user, they still live on in our system from somewhere else. But it seems like a hassle and a lot of change in the nitty-gritty of my code to account for all these extra columns.

**Idea**

In my mind it would be simpler if I added a separate table containing the following:

- UniqueUserIdentifier
- UniqueOrgIdentifier
- SoftDeleteDate

Then, whenever my app asks for data, the API checks the new table: "is this person soft deleted from this org?" If true, it just blocks the request until they are restored if needed, or they remain deleted until they are hard deleted within x hours of the soft deletion happening. This avoids having to change many queries and logic all over the place.

**Additional information**

The API uses EF Core as an ORM to connect to the database, in case that helps with any other smart fixes regarding its feature set. I have thought about creating custom SaveChanges logic, but couldn't come up with a better idea than, again, adding a column to all the tables. Please let me know if you need any more information.

**Update**

J.D. told me about row-level security, which made me look around. It seems very useful, and it gave me more insight into what to search for. I then came across global query filters for EF Core, which seem promising: the context filters all queries, and when you actually need to ignore the global filter, you can simply do so on a query-by-query basis. It also supports dependency injection if the global filter needs something based on the connected user. I created an answer based on this new information. It also turns out that what I really wanted was to deactivate the row until eventual reactivation or hard delete, rather than soft delete; I didn't know the correct way to express myself.
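The separate-table idea could be sketched like this (table names come from the question; column types are guesses, since the real identifier types aren't stated):

```sql
-- Deactivation table: one row per (user, org) pair that is currently soft deleted.
CREATE TABLE dbo.UserOrgDeactivation (
    UniqueUserIdentifier UNIQUEIDENTIFIER NOT NULL,
    UniqueOrgIdentifier  UNIQUEIDENTIFIER NOT NULL,
    SoftDeleteDate       DATETIME2        NOT NULL,
    PRIMARY KEY (UniqueUserIdentifier, UniqueOrgIdentifier)
);

-- The API-side check "is this person soft deleted from this org?" becomes an
-- anti-join, leaving the existing user tables completely unchanged:
SELECT u.*
FROM dbo.UserInformation AS u          -- hypothetical per-org user table
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.UserOrgDeactivation AS d
    WHERE d.UniqueUserIdentifier = u.UniqueUserIdentifier
      AND d.UniqueOrgIdentifier  = u.UniqueOrgIdentifier
);
```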
Mikkel (21 rep)
Feb 3, 2022, 01:49 PM • Last activity: Nov 24, 2022, 04:53 AM
0 votes
2 answers
586 views
What if we do not use MAX function
I was supposed to use a script to get the MAX value from a table. It should have looked like this
BEGIN
DECLARE @MaxID AS INT
SELECT @MaxID = MAX([Id]) FROM dbo.suggestion
Insert into suggestion
values(@MaxID+ 1, 'value', 1);
END
Instead, due to some error, it went like this
BEGIN
DECLARE @MaxID AS INT
SELECT @MaxID = [Id] FROM dbo.suggestion
Insert into suggestion
values(@MaxID+ 1, 'value', 1);
END
It had been working fine until now; it only came to light when we were drilling down into some other issue. I wonder how it worked all this time. If somebody could share their knowledge, it would be highly appreciated. TIA
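For what it's worth, the likely explanation is in how variable assignment in a SELECT behaves (a behaviour sketch, not something to rely on):

```sql
DECLARE @MaxID INT;

-- Without MAX(), the assignment runs once per row the engine reads, so @MaxID
-- ends up holding the value from the LAST row scanned. With an ever-increasing
-- [Id] and a clustered index on it, that frequently coincides with the maximum --
-- but the scan order is undefined, so this can silently break (e.g. after an
-- index or plan change), which is presumably what happened here.
SELECT @MaxID = [Id] FROM dbo.suggestion;
```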
ispostback (223 rep)
Nov 9, 2022, 06:39 AM • Last activity: Nov 10, 2022, 12:12 PM
4 votes
1 answers
4264 views
HOLDLOCK required in MERGE with UPDATE
Within a .NET 6 application, Entity Framework is used to keep track of entities and to update related fields. To improve bulk updates, the EfCore.BulkExtensions package has been added. During a .BulkUpdate, a SQL statement is generated and executed by the described package. I noticed the WITH (HOLDLOCK) part. I've read some documentation about the hint and possible race conditions, but do they occur in a MERGE with only the UPDATE command? In short, is this WITH (HOLDLOCK) hint necessary when the MERGE statement only includes an UPDATE command, or can it safely be removed? Sample query:

    MERGE TargetProducts WITH (HOLDLOCK) AS Target
    USING SourceProducts AS Source
    ON Source.ProductID = Target.ProductID
    WHEN MATCHED THEN
        UPDATE SET Target.ProductName = Source.ProductName,
                   Target.Price = Source.Price;
Odrai (153 rep)
Sep 27, 2022, 06:45 PM • Last activity: Sep 28, 2022, 10:33 AM
2 votes
3 answers
680 views
Database Design - Relationship between two or more junctions tables
I am a junior developer and I don't know how to implement this next relationship.

> **Note: All entities described here are fictional but have equivalents in an actual production database. Performance is irrelevant (OK with too many junction tables), but it would be nice to optimize the schema.**

> **Additional information: Using Entity Framework Code First to generate the tables on SQL Server.**

**Context**
-----------

I have an entity named People, another named HomeAddress and finally one named Job. They have many-to-many relationships with each other, such as People–Job and Job–HomeAddress (junction tables have an underscore in their name). This will also need to be replicated with other tables, including one for PhoneNumber.

*(Diagram omitted.)*

**Problem**
-----------

How do I link People and Address in this situation? **The issue is that some Jobs have an Address while others don't.** I need a relationship between People and Job that can also ***specify*** an Address if available (all visual checks will be done on the client side).

- Example 1: Alice has Job1, which has no Address. Data is inserted into the database without specifying an Address.
- Example 2: Bob has Job2, which has 2 Addresses. Bob chooses 1 Address from the list. Data is inserted specifying 1 Address for Job2.
- Example 3: Charlie has Job3, which has 4 Addresses. Charlie chooses all 4 of them. Data is inserted specifying all 4 Addresses for Job3.

I don't know how to store this information given this schema, since Address should be dependent on Job, and if a Job is removed from a People record, so should its Address.

**Tried solutions**
-------------------

- I tried to have a link between Job_People and HomeAddress, but since Job can hold that information, this seems redundant.
- Another solution I can think of is an additional junction table between People and HomeAddress, i.e. People–HomeAddress. I just don't know if that is the right decision.

> **Again, all entity names are irrelevant; their relationship is what I am looking for.**

Any help is appreciated.
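One way to sketch the first tried solution (all table and column names are hypothetical, following the fictional entities above): give the People–Job junction its own key, and hang the chosen addresses off that row, so that removing a Job from a People record cascades to the address choices.

```sql
-- Junction with a surrogate key, so other tables can reference the pairing itself.
CREATE TABLE People_Job (
    PeopleJobId INT IDENTITY PRIMARY KEY,
    PeopleId    INT NOT NULL REFERENCES People (PeopleId),
    JobId       INT NOT NULL REFERENCES Job (JobId),
    UNIQUE (PeopleId, JobId)
);

-- 0..n chosen addresses per (People, Job) pairing: covers Alice (none),
-- Bob (one) and Charlie (all four). Deleting the People_Job row removes them.
CREATE TABLE People_Job_Address (
    PeopleJobId   INT NOT NULL REFERENCES People_Job (PeopleJobId) ON DELETE CASCADE,
    HomeAddressId INT NOT NULL REFERENCES HomeAddress (HomeAddressId),
    PRIMARY KEY (PeopleJobId, HomeAddressId)
);
```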
Codingwiz (123 rep)
Jul 21, 2022, 09:18 PM • Last activity: Jul 28, 2022, 04:43 PM
-1 votes
1 answers
625 views
MS SQL Server Express alternatives, choosing between Relational-DB / NoSQL
This question relates to choosing the best database for storing data on a PC running Windows. The PC is part of a commercial product which generates a large amount of data over time. The data consists of:

- Test sequences
- Test results
- Configuration data

----------

The connection between the software and the database is made via EntityFrameworkCore. There is already an existing relational database model which has been normalized and optimized for relational use. Querying large amounts of data will be essential. It consists of ~25 tables that are interconnected. We chose **MS SQL Server Express** in the past for those local installations.

----------

**Meanwhile, the amount of data generated by those devices (test results) has grown to the point that we will probably reach the 10 GB limit faster than expected. As there is much more curve/chart data which should be saved as well, the 10 GB will probably be reached.**

To my knowledge there is no good practice for handling the data once the limit is reached. The only way would be to export/archive old data to free up space again, but querying these archived/old backups won't be easy. Right now we are considering switching our commercial product to another RDBMS such as:

- PostgreSQL
- MariaDB
- MySQL (a license seems to be needed: https://www.mysql.com/about/legal/licensing/oem/)

Our aim is to use the existing DB model for larger analyses of the data later on, like trends, statistics and so on. The structure of the different results is very similar, to make comparison/querying easier later on. Those queries to generate trends etc. will span a large number of rows and will include JOINs over multiple tables of the stored data. Our preference here has always been a **relational DB** over a **NoSQL database**, and this requirement seems to make a **NoSQL database** a poor fit for the way we want to store and query these test results. But we also looked into **MongoDB**, which makes storing data very easy in the first place; there is, however, also the need on the customer side to query that data later on (statistics/trends etc.).

- Which of the RDBMS would you suggest, and why?
- How good is MongoDB in such a use case for querying large amounts of test results?
Basti (177 rep)
Jul 28, 2022, 09:50 AM • Last activity: Jul 28, 2022, 12:39 PM
1 votes
2 answers
3927 views
Microsoft SQL Server: how can I detach LocalDB to run on any computer
I am working on a personal project and have implemented a database system using the Entity Framework Code First tutorial on MSDN, linked below: http://msdn.microsoft.com/en-gb/data/jj193542 I am running Visual Studio 2012, and the database has been created using Microsoft SQL Server and LocalDB. However, I wish to run my application on any computer, and this current setup will not allow me to do so. I am a bit of a novice when it comes to databases and would like some advice/resources outlining how I can change my database setup so that it is available on any computer while retaining the data currently in my database. Any help would be greatly appreciated. Thanks in advance!
KingSpread (13 rep)
Apr 9, 2013, 03:37 PM • Last activity: Feb 23, 2022, 09:02 AM
2 votes
2 answers
847 views
What is most efficient design pattern for a sql database containing billions of rows per user in a single table?
I work on a relatively large system where we have started to run into performance problems scaling for multiple users. The system is a .NET application, so queries are written using an ORM (Entity Framework), and the database is an Azure SQL database. I'm a developer and not a DBA. Typically, when we've hit performance limits, have optimised our queries to the best of our ability, and are still throttling the database, I scale up to a higher tier to increase our DTUs and the problem is solved. We're now at a point where it would be cheaper to give individual users their own database than to scale any further.

I won't go into the details of what we do, but essentially we have a constant stream of data being sent from our users which, on **average**, writes about 100,000 rows of data per user, per day, to the same table. Our users need quick access to this data, which typically involves loading in one month to a year of data at a time.

My question is: in this scenario, what options do I have to maintain our performance? As far as I can tell, my only options are:

1. Generate each user their own table within the database (if that's even possible), so I only need to deal with a few billion rows per user when querying (35b per year).
2. Generate each user their own database (which should help with the performance hit from concurrent queries, but would be a nightmare to manage).
3. Just keep throwing more money at Azure until it becomes technically impossible to scale any further?

Thanks.
Verno (23 rep)
Feb 9, 2022, 12:10 PM • Last activity: Feb 10, 2022, 04:42 PM
Showing page 1 of 20 total questions