Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
0
answers
29
views
Table design for user sessions: highly-frequent selecting/updating of rows, that are subsequently not often touched
I have a use-case where we are required to keep track of certain metrics within user sessions. Now, these metrics/stats update very often with each action/event that occurs (hundreds of times in several minutes per active user).
My current setup is simple: a SQL Server table with the metric fields and an expiry datetime field. With every update to the row, this expiry date is also pushed 5 minutes into the future. As long as the expiry date is in the future, the current session is updated; otherwise a new one is created.
I have an index on the userId, some other related id column and on the expiry date column.
But as soon as load on the application increases, some queries (both selects and updates) on this table become really slow (10 seconds) and slow everything down to the point that the application crashes.
I am assuming this occurs due to the large number of selects and updates on an ever-increasing indexed column (the expiry datetime)? I still have to confirm this hypothesis.
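One way to test that hypothesis is to look at the per-index lock and latch waits accumulated on the table. This is a hedged sketch; `dbo.UserSessions` is a placeholder for the actual session table name, which the question does not give:

```sql
-- Check per-index row-lock and page-latch waits for the session table.
-- dbo.UserSessions is an assumed placeholder name.
SELECT  i.name AS index_name,
        os.leaf_update_count,
        os.row_lock_wait_count,
        os.row_lock_wait_in_ms,
        os.page_latch_wait_count,
        os.page_latch_wait_in_ms
FROM    sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.UserSessions'), NULL, NULL) AS os
JOIN    sys.indexes AS i
        ON i.object_id = os.object_id
       AND i.index_id  = os.index_id
ORDER BY os.row_lock_wait_in_ms DESC;
```

High `page_latch_wait_in_ms` concentrated on the expiry-date index would be consistent with contention on an ever-increasing key; high `row_lock_wait_in_ms` would point at blocking between the frequent updates instead.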
Is anything fundamentally wrong with my table architecture? How would you approach something like this? Could an intermediary table fix the problem?
Thanks a lot in advance.
Senne
(1 rep)
Nov 12, 2022, 05:00 PM
1
votes
1
answers
463
views
SQL Server and NHibernate - blocking occurs between two unrelated queries (distributed transactions)
There's a problem that I'm seeking your help with...
There are two simple queries:
1. `DELETE` a row from a table in databaseA
2. `SELECT` a row from a table in databaseB using the hints `with (updlock, rowlock)`
Query 1 blocks query 2.
The Blocked Process Report shows the following:
SELECT stuff from whatever1.table1 WHERE bunch_of_things with (updlock, rowlock)
DELETE from whatever2.table2 WHERE bunch_of_things
Looks like the blocked query is a distributed transaction (DTCXact), right?
A query to sys.dm_tran_database_transactions shows that:
- the blocked query (the SELECT) is associated with just databaseB
- the blocking query (the DELETE) is associated with 3 databases:
databaseA, **databaseB** and databaseC - wouldn't that mean a distributed transaction as well?
The queries come from NHibernate.
Would you please help me find the root cause of this blocking and how to solve it? Can it be related to misconfiguration in NHibernate? Connection pooling issue?
I'm out of ideas and I don't know much about NHibernate. I'm thinking of dropping the `updlock` hint and maybe turning on Snapshot Isolation, but if it's NHibernate's fault, then I'd rather have that corrected.
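For reference, the contemplated isolation change would look roughly like this. This is a sketch only, with the database name as a placeholder, and it needs a maintenance window because of the `ROLLBACK IMMEDIATE`:

```sql
-- Enable snapshot-based isolation on the database hosting the SELECT.
-- databaseB is the placeholder name from the question.
ALTER DATABASE databaseB SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE databaseB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```

Note one caveat: row versioning only changes the behavior of plain reads. A `SELECT ... WITH (UPDLOCK)` still takes update locks and will still queue behind the exclusive locks held by the `DELETE`, so the hint itself would also have to go for the blocking to disappear.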
Thanks in advance...
Timbalero
(111 rep)
Dec 17, 2021, 12:59 AM
• Last activity: Aug 10, 2022, 09:04 AM
0
votes
0
answers
278
views
NHibernate second level cache collection hydration takes too long
I have enabled query caching and entity caching in NHibernate to store large collections of objects, needed for repetitive operations. While I have minimized the number of hits to db, the performance benefit is suboptimal.
Loading entities from second level cache takes way too long. Does anyone know how to optimize this?


Byt3
(125 rep)
Dec 14, 2018, 05:45 PM
31
votes
5
answers
4676
views
SQL Deadlock on the same exclusively locked clustered Key (with NHibernate) on delete / insert
I have been working on this deadlock issue for quite a few days now and no matter what I do, it persists in one way or another.
First, the general premise:
We have Visits with VisitItems in a one to many relationship.
VisitItems relevant info:
CREATE TABLE [BAR].[VisitItems] (
[Id] INT IDENTITY (1, 1) NOT NULL,
[VisitType] INT NOT NULL,
[FeeRateType] INT NOT NULL,
[Amount] DECIMAL (18, 2) NOT NULL,
[GST] DECIMAL (18, 2) NOT NULL,
[Quantity] INT NOT NULL,
[Total] DECIMAL (18, 2) NOT NULL,
[ServiceFeeType] INT NOT NULL,
[ServiceText] NVARCHAR (200) NULL,
[InvoicingProviderId] INT NULL,
[FeeItemId] INT NOT NULL,
[VisitId] INT NULL,
[IsDefault] BIT NOT NULL DEFAULT 0,
[SourceVisitItemId] INT NULL,
[OverrideCode] INT NOT NULL DEFAULT 0,
[InvoiceToCentre] BIT NOT NULL DEFAULT 0,
[IsSurchargeItem] BIT NOT NULL DEFAULT 0,
CONSTRAINT [PK_BAR.VisitItems] PRIMARY KEY CLUSTERED ([Id] ASC),
CONSTRAINT [FK_BAR.VisitItems_BAR.FeeItems_FeeItem_Id] FOREIGN KEY ([FeeItemId]) REFERENCES [BAR].[FeeItems] ([Id]),
CONSTRAINT [FK_BAR.VisitItems_BAR.Visits_Visit_Id] FOREIGN KEY ([VisitId]) REFERENCES [BAR].[Visits] ([Id]),
CONSTRAINT [FK_BAR.VisitItems_BAR.VisitTypes] FOREIGN KEY ([VisitType]) REFERENCES [BAR].[VisitTypes]([Id]),
CONSTRAINT [FK_BAR.VisitItems_BAR.FeeRateTypes] FOREIGN KEY ([FeeRateType]) REFERENCES [BAR].[FeeRateTypes]([Id]),
CONSTRAINT [FK_BAR.VisitItems_CMN.Users_Id] FOREIGN KEY (InvoicingProviderId) REFERENCES [CMN].[Users] ([Id]),
CONSTRAINT [FK_BAR.VisitItems_BAR.VisitItems_SourceVisitItem_Id] FOREIGN KEY ([SourceVisitItemId]) REFERENCES [BAR].[VisitItems]([Id]),
CONSTRAINT [CK_SourceVisitItemId_Not_Equal_Id] CHECK ([SourceVisitItemId] <> [Id]),
CONSTRAINT [FK_BAR.VisitItems_BAR.OverrideCodes] FOREIGN KEY ([OverrideCode]) REFERENCES [BAR].[OverrideCodes]([Id]),
CONSTRAINT [FK_BAR.VisitItems_BAR.ServiceFeeTypes] FOREIGN KEY ([ServiceFeeType]) REFERENCES [BAR].[ServiceFeeTypes]([Id])
)
CREATE NONCLUSTERED INDEX [IX_FeeItem_Id]
ON [BAR].[VisitItems]([FeeItemId] ASC)
CREATE NONCLUSTERED INDEX [IX_Visit_Id]
ON [BAR].[VisitItems]([VisitId] ASC)
Visit info:
CREATE TABLE [BAR].[Visits] (
[Id] INT IDENTITY (1, 1) NOT NULL,
[VisitType] INT NOT NULL,
[DateOfService] DATETIMEOFFSET NOT NULL,
[InvoiceAnnotation] NVARCHAR(255) NULL ,
[PatientId] INT NOT NULL,
[UserId] INT NULL,
[WorkAreaId] INT NOT NULL,
[DefaultItemOverride] BIT NOT NULL DEFAULT 0,
[DidNotWaitAdjustmentId] INT NULL,
[AppointmentId] INT NULL,
CONSTRAINT [PK_BAR.Visits] PRIMARY KEY CLUSTERED ([Id] ASC),
CONSTRAINT [FK_BAR.Visits_CMN.Patients] FOREIGN KEY ([PatientId]) REFERENCES [CMN].[Patients] ([Id]) ON DELETE CASCADE,
CONSTRAINT [FK_BAR.Visits_CMN.Users] FOREIGN KEY ([UserId]) REFERENCES [CMN].[Users] ([Id]),
CONSTRAINT [FK_BAR.Visits_CMN.WorkAreas_WorkAreaId] FOREIGN KEY ([WorkAreaId]) REFERENCES [CMN].[WorkAreas] ([Id]),
CONSTRAINT [FK_BAR.Visits_BAR.VisitTypes] FOREIGN KEY ([VisitType]) REFERENCES [BAR].[VisitTypes]([Id]),
CONSTRAINT [FK_BAR.Visits_BAR.Adjustments] FOREIGN KEY ([DidNotWaitAdjustmentId]) REFERENCES [BAR].[Adjustments]([Id]),
);
CREATE NONCLUSTERED INDEX [IX_Visits_PatientId]
ON [BAR].[Visits]([PatientId] ASC);
CREATE NONCLUSTERED INDEX [IX_Visits_UserId]
ON [BAR].[Visits]([UserId] ASC);
CREATE NONCLUSTERED INDEX [IX_Visits_WorkAreaId]
ON [BAR].[Visits]([WorkAreaId]);
Multiple users want to update the VisitItems table concurrently in the following way:
A separate web request will create a Visit with VisitItems (usually 1).
Then (the problem request):
1. Web request comes in, opens NHibernate session, starts NHibernate transaction (using Repeatable Read with READ_COMMITTED_SNAPSHOT on).
2. Read all the visit items for a given visit by **VisitId**.
3. Code assesses if the items are still relevant or if we need new ones using complex rules (so slightly long running, e.g. 40ms).
4. Code finds 1 item needs to be added, adds it using NHibernate Visit.VisitItems.Add(..)
5. Code identifies that one item needs to be deleted (not the one we just added), removes it using NHibernate Visit.VisitItems.Remove(item).
6. Code commits the transaction
With a tool I simulate 12 concurrent requests which is quite likely to happen in a future production environment.
**[EDIT]** Upon request, removed a lot of the investigation details I had added here to keep it short.
After lots of research, the next step was to think of a way to lock-hint on a different index to the one used in the WHERE clause (i.e. the primary key, since that's what is used for deletion), so I altered my lock statement to:
var items = (List<VisitItem>)_session.CreateSQLQuery(@"SELECT * FROM BAR.VisitItems WITH (XLOCK, INDEX([PK_BAR.VisitItems]))
WHERE VisitId = :visitId")
.AddEntity(typeof(VisitItem))
.SetParameter("visitId", qi.Visit.Id)
.List();
This reduced the deadlocks in frequency slightly, but they were still happening. And here is where I'm starting to get lost:
unknown
unknown
(@p0 int)SELECT * FROM BAR.VisitItems WITH (XLOCK, INDEX([PK_BAR.VisitItems]))
WHERE VisitId = @p0
unknown
unknown
(@p0 int)DELETE FROM BAR.VisitItems WHERE Id = @p0
A trace of the resulting number of queries looks like this.
**[EDIT]** Whoa. What a week. I have now updated the trace with the unredacted trace of the relevant statements that I think led to the deadlock.
exec sp_executesql N'SELECT * FROM BAR.VisitItems WITH (XLOCK, INDEX([PK_BAR.VisitItems]))
WHERE VisitId = @p0',N'@p0 int',@p0=3826
go
exec sp_executesql N'SELECT visititems0_.VisitId as VisitId1_, visititems0_.Id as Id1_, visititems0_.Id as Id37_0_, visititems0_.VisitType as VisitType37_0_, visititems0_.FeeItemId as FeeItemId37_0_, visititems0_.FeeRateType as FeeRateT4_37_0_, visititems0_.Amount as Amount37_0_, visititems0_.GST as GST37_0_, visititems0_.Quantity as Quantity37_0_, visititems0_.Total as Total37_0_, visititems0_.ServiceFeeType as ServiceF9_37_0_, visititems0_.ServiceText as Service10_37_0_, visititems0_.InvoiceToCentre as Invoice11_37_0_, visititems0_.IsDefault as IsDefault37_0_, visititems0_.OverrideCode as Overrid13_37_0_, visititems0_.IsSurchargeItem as IsSurch14_37_0_, visititems0_.VisitId as VisitId37_0_, visititems0_.InvoicingProviderId as Invoici16_37_0_, visititems0_.SourceVisitItemId as SourceV17_37_0_ FROM BAR.VisitItems visititems0_ WHERE visititems0_.VisitId=@p0',N'@p0 int',@p0=3826
go
exec sp_executesql N'INSERT INTO BAR.VisitItems (VisitType, FeeItemId, FeeRateType, Amount, GST, Quantity, Total, ServiceFeeType, ServiceText, InvoiceToCentre, IsDefault, OverrideCode, IsSurchargeItem, VisitId, InvoicingProviderId, SourceVisitItemId) VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7, @p8, @p9, @p10, @p11, @p12, @p13, @p14, @p15); select SCOPE_IDENTITY()',N'@p0 int,@p1 int,@p2 int,@p3 decimal(28,5),@p4 decimal(28,5),@p5 int,@p6 decimal(28,5),@p7 int,@p8 nvarchar(4000),@p9 bit,@p10 bit,@p11 int,@p12 bit,@p13 int,@p14 int,@p15 int',@p0=1,@p1=452,@p2=1,@p3=0,@p4=0,@p5=1,@p6=0,@p7=1,@p8=NULL,@p9=0,@p10=1,@p11=0,@p12=0,@p13=3826,@p14=3535,@p15=NULL
go
exec sp_executesql N'UPDATE BAR.Visits SET VisitType = @p0, DateOfService = @p1, InvoiceAnnotation = @p2, DefaultItemOverride = @p3, AppointmentId = @p4, ReferralRequired = @p5, ReferralCarePlan = @p6, UserId = @p7, PatientId = @p8, WorkAreaId = @p9, DidNotWaitAdjustmentId = @p10, ReferralId = @p11 WHERE Id = @p12',N'@p0 int,@p1 datetimeoffset(7),@p2 nvarchar(4000),@p3 bit,@p4 int,@p5 bit,@p6 nvarchar(4000),@p7 int,@p8 int,@p9 int,@p10 int,@p11 int,@p12 int',@p0=1,@p1='2016-01-22 12:37:06.8915296 +08:00',@p2=NULL,@p3=0,@p4=NULL,@p5=0,@p6=NULL,@p7=3535,@p8=4246,@p9=2741,@p10=NULL,@p11=NULL,@p12=3826
go
exec sp_executesql N'DELETE FROM BAR.VisitItems WHERE Id = @p0',N'@p0 int',@p0=7919
go
Now my lock seems to have an effect since it's showing in the deadlock graph.
But what? Three exclusive locks and one shared lock? How does that work on the same object/key? I thought that as long as you hold an exclusive lock, nobody else can get a shared lock, and the other way around: if you hold a shared lock, nobody can gain an exclusive lock; they have to wait.
I think I am lacking some deeper understanding here on how the locks work when they are taken on multiple keys on the same table.
Here are some of the things I have tried and their impact:
- Added another index hint on IX_Visit_Id to the lock statement. No change.
- Added a second column to IX_Visit_Id (the Id of the VisitItem); far-fetched, but tried anyway. No change.
- Changed the isolation level back to read committed (the default in our project); deadlocks still happening.
- Changed the isolation level to serializable. Deadlocks still happening, but worse (different graphs). I don't really want to do that, anyway.
- Taking a table lock makes them go away (obviously), but who would want to do that?
- Taking a pessimistic application lock (using sp_getapplock) works, but that's pretty much the same thing as the table lock; don't want to do that.
- Adding the READPAST hint to the XLOCK hint made no difference.
- Turned off page locks on the index and PK; no difference.
- Added a ROWLOCK hint to the XLOCK hint; made no difference.
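On the sp_getapplock point: it does not have to be table-wide. A hedged sketch of an application lock keyed per visit, which serializes only the writers of the same Visit rather than everyone (resource name and timeout are illustrative, and `THROW` needs SQL Server 2012+):

```sql
-- Serialize per VisitId instead of per table. '3826' is the example
-- visit id from the trace above; in practice the key is built dynamically.
DECLARE @res INT;
BEGIN TRAN;
EXEC @res = sp_getapplock
        @Resource    = 'Visit_3826',    -- per-visit key, not table-wide
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Transaction',
        @LockTimeout = 5000;            -- ms; negative return = failure
IF @res < 0
    THROW 51000, 'Could not acquire visit lock', 1;

-- ... read / insert / delete the VisitItems for this visit here ...

COMMIT;  -- a Transaction-owned applock is released automatically on commit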
Some side notes on NHibernate:
The way it works, as I understand it, is that it caches the SQL statements until it really finds it necessary to execute them, unless you call flush, which we're trying not to do. So most of the statements (e.g. the lazily loaded aggregate list of VisitItems => Visit.VisitItems) are executed only when necessary. Most of the actual update and delete statements from my transaction are executed at the end, when the transaction is committed (as is evident from the SQL trace above). I really have no control over the execution order; NHibernate decides when to do what. My initial lock statement is really only a work-around.
Also, with the lock statement, I'm just reading the items into an unused list (I'm not trying to override the VisitItems list on the Visit object since that's not how NHibernate is supposed to work as far as I can tell).
So even though I read the list first with the custom statement, NHibernate will still load the list again into its proxy object collection Visit.VisitItems using a separate sql call that I can see in the trace when it's time to lazily load it somewhere.
But that shouldn't matter, right? I already have the lock on said key? Loading it again won't change that?
As a final note, maybe to clarify: Each process is adding its own Visit with VisitItems first, then goes in and modifies it (which will trigger the delete and insert and the deadlock). In my tests, there is never any process changing the exact same Visit or VisitItems.
Does anybody have an idea on how to approach this any further? Anything I can try to get around this in a smart way (no table locks etc.)? Also, I would like to learn why this triple exclusive lock is even possible on the same object. I don't understand.
Please let me know if any more information is required to solve the puzzle.
**[EDIT]**
I updated the question with the DDL for the two tables involved.
Also I was asked for clarification on the expectation:
Yes, a few deadlocks here and there are ok, we'll just retry or get the user to re-submit (generally speaking). But at the current frequency with 12 concurrent users, I would expect there to only be one every few hours at most. Currently they pop up multiple times per minute.
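For completeness, the "just retry" strategy mentioned above usually keys off deadlock error 1205. In this application the retry would live in the C# layer around the NHibernate transaction, but the T-SQL shape of the pattern (a sketch, SQL Server 2012+ for `THROW`) is:

```sql
-- Retry a unit of work up to 3 times, but only when chosen as deadlock victim.
DECLARE @tries INT = 0;
WHILE @tries < 3
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        -- ... the insert/delete work goes here ...
        COMMIT;
        BREAK;  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK;
        IF ERROR_NUMBER() <> 1205 THROW;  -- rethrow anything that isn't a deadlock
        SET @tries += 1;
    END CATCH;
END;
```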
In addition to that, I got some more information on the trancount=2, which might indicate a problem with nested transactions, which we're not really using. I will investigate that, too, and document the results here.

Ben
(411 rep)
Dec 15, 2015, 06:59 AM
• Last activity: Oct 20, 2018, 10:00 AM
0
votes
1
answers
1051
views
Query fails from application but runs fine in SSMS
I have a query that runs fine when executed from SSMS but when done so from the application, it fails with a (database level) exception.
select ksprintid
from kshistory_akt h
where cast(h.serialno as decimal(28)) >= 1
and h.sku = '11105031'
The application error is
System.Data.SqlClient.SqlException: Error converting data type varchar to numeric.
bei System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
bei System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
bei System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
bei System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
bei System.Data.SqlClient.SqlDataReader.TryHasMoreRows(Boolean& moreRows)
bei System.Data.SqlClient.SqlDataReader.TryReadInternal(Boolean setTimeout, Boolean& more)
bei System.Data.SqlClient.SqlDataReader.Read()
bei NHibernate.Driver.NHybridDataReader.Read()
bei NHibernate.Loader.Loader.DoQuery(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies, IResultTransformer forcedResultTransformer)
bei NHibernate.Loader.Loader.DoQueryAndInitializeNonLazyCollections(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies, IResultTransformer forcedResultTransformer)
bei NHibernate.Loader.Loader.DoList(ISessionImplementor session, QueryParameters queryParameters, IResultTransformer forcedResultTransformer)
The code that executes it:
ISessionFactory _factory = new Configuration().Configure().BuildSessionFactory();
CurrentSessionContext.Bind(_factory.OpenSession());
IList printedJobs = _factory.GetCurrentSession()
.CreateSQLQuery("select 1 from kshistory_akt h where cast(h.serialno as decimal(28)) >= :start and h.sku = :material")
.SetString("material", "11105031")
.SetParameter("start", 1)
.List();
But, the weird thing is that the result set of
select ksprintid
from kshistory_akt h
where h.sku = '11105031'
(without the actual cast) is **empty**:
So - how can an empty result cause a conversion error? And why doesn't it happen from within SSMS?
And to top it off: This only happens on **one** of three servers where this application runs. Every other server happily runs the command from the application without any errors (as it should be, because empty results should not cause problems when converting, *right*?)
**SSMS Connection options**
ANSI_WARNINGS
ANSI_PADDING
ANSI_NULLS
ARITHABORT
QUOTED_IDENTIFIER
ANSI_NULL_DFLT_ON
CONCAT_NULL_YIELDS_NULL
**Client Connection options**
ANSI_WARNINGS
ANSI_PADDING
ANSI_NULLS
QUOTED_IDENTIFIER
ANSI_NULL_DFLT_ON
CONCAT_NULL_YIELDS_NULL
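Worth noting: the two option lists above differ only in ARITHABORT, which is enough for SSMS and the application to get different cached plans, and plan shape determines whether the CAST is evaluated before or after the `sku` filter. A hedged workaround that makes the conversion safe regardless of plan shape (`TRY_CAST` needs SQL Server 2012+; on 2008 a `CASE WHEN ISNUMERIC(...)` guard is the usual substitute):

```sql
-- Conversion can no longer fail on a stray non-numeric serialno,
-- no matter which predicate the plan evaluates first.
SELECT ksprintid
FROM   kshistory_akt h
WHERE  h.sku = '11105031'
  AND  TRY_CAST(h.serialno AS decimal(28)) >= 1;
```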

F.P
(207 rep)
Apr 10, 2017, 12:11 PM
• Last activity: Apr 10, 2017, 01:16 PM
4
votes
1
answers
358
views
Performance problems with TOP and user defined functions in views
I have a performance problem with a simple query on a view.
To keep things simple, assume my view is defined as follows:
CREATE VIEW [mon].[ViewDevicesWithGroups]
AS
SELECT Id, Name, Deleted, mon.SlowFunction(Id) AS Groups
FROM mon.Devices
And the query I'm trying to optimize looks like this:
SELECT TOP 10
Id218_0_, Groups, Name218_0_, DeletedD6_218_0_ FROM (
SELECT this_.Id as Id218_0_, this_.Name as Name218_0_, this_.Deleted as Deleted218_0_, Groups, ROW_NUMBER() OVER(ORDER BY this_.Name) as __hibernate_sort_row
FROM mon.ViewDevicesWithGroups this_
WHERE (this_.Deleted = 1 and not (1=0))) AS query
WHERE query.__hibernate_sort_row > 100000
ORDER BY query.__hibernate_sort_row
My problem is that SQL Server keeps executing this query in a really bad order - SlowFunction is really slow, and should be deferred as long as possible:
As you can see, this query was generated by ORM (NHibernate in this case), and I can't change it directly (otherwise this problem would be trivial to solve by just moving function call to outermost select).
Can the performance of my query be improved without changes in application code (maybe some kind of hint on the view/function)? Because that query is abstracted deep inside the DAL of my application, and changing it would require A LOT of work.
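One technique that fits the "no application changes" constraint is changing the view itself: converting the scalar UDF to an inline table-valued function and APPLYing it, so the optimizer can inline the logic and is far more likely to defer it past the row-number filter. A hedged sketch only; `mon.SlowFunctionTvf` is a hypothetical name and its body is a placeholder for the real SlowFunction logic:

```sql
-- Inline TVF wrapper; the SELECT body must be replaced with the
-- actual logic of mon.SlowFunction.
CREATE FUNCTION mon.SlowFunctionTvf (@Id INT)
RETURNS TABLE
AS RETURN
(
    SELECT CAST(NULL AS NVARCHAR(MAX)) AS Groups  -- placeholder body
);
GO

-- Same view name and column list, so the ORM query is unaffected.
ALTER VIEW mon.ViewDevicesWithGroups
AS
SELECT d.Id, d.Name, d.Deleted, g.Groups
FROM mon.Devices AS d
CROSS APPLY mon.SlowFunctionTvf(d.Id) AS g;
```

Whether the optimizer actually defers the work depends on the function body, so the rewritten plan should be checked before relying on this.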

MSM
(113 rep)
Aug 12, 2015, 12:56 PM
• Last activity: Aug 12, 2015, 06:24 PM
6
votes
1
answers
771
views
Which parameters was a stored procedure or command executed with?
We are using nHibernate, which generates queries from .NET code. Occasionally, some queries get stuck in the `sp_whoisactive` list and we cannot find out why. I have a feeling that it has to do with a missing or broken parameter. When I run `sp_whoisactive`, however, I only see the parameter name (in the example: `@p0`), not the actual value. Is there a way to retrieve this information while the process is still running?
Example:
SELECT
this_.Id as Id6_2_, this_.HRNumber as HRNumber6_2_,
this_.FirstName as FirstName6_2_, this_.LastName as LastName6_2_,
this_.StatusTypeID as StatusTy5_6_2_,
this_.PropertyId as PropertyId6_2_, this_.DepartmentGroupId as Departme7_6_2_,
property2_.PropertyID as PropertyID10_0_,
property2_.PropertyCode as Property2_10_0_,
property2_.LMSCode as LMSCode10_0_, property2_.PropertyName as Property4_10_0_,
property2_.Report as Report10_0_, department3_.Id as Id4_1_,
department3_.Name as Name4_1_, department3_.DisplayName as DisplayN3_4_1_
FROM
[dbo].[EmployeeDepartmentGroupView] this_
LEFT OUTER JOIN
[dbo].[Property] property2_ ON this_.PropertyId=property2_.PropertyID
LEFT OUTER JOIN
[dbo].[DepartmentGroup] department3_ ON this_.DepartmentGroupId=department3_.Id
WHERE
this_.HRNumber LIKE @p0;
Please note, this is automatically generated SQL from nHibernate.
Specs: SQL Server 2008r2
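The runtime value of a bind parameter is not exposed, but the value the plan was *compiled* with (the sniffed value) is stored in the cached plan XML as `ParameterCompiledValue`. A hedged sketch of reading it, filtered down in practice by joining to the session's `plan_handle` from `sp_whoisactive`:

```sql
-- Extract the compiled (sniffed) parameter values from cached plans.
-- Note: this is the compile-time value, not the currently executing one.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT  cp.plan_handle,
        pc.c.value('@Column', 'nvarchar(128)')                 AS parameter_name,
        pc.c.value('@ParameterCompiledValue', 'nvarchar(max)') AS compiled_value
FROM    sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY qp.query_plan.nodes('//ParameterList/ColumnReference') AS pc(c);
```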
Mario Tacke
(175 rep)
Apr 28, 2014, 05:19 PM
• Last activity: Jan 20, 2015, 07:46 PM
4
votes
1
answers
3175
views
Paging in SQL Server: Is an order by clause required? (currently, rows are being missed)
We have some code that pages through a SQL result. (Currently running on SQL Server 2008)
We notice that when paging is done, some rows are not returned. Let me clarify this a bit:
nHibernate generates SQL queries.
We are doing paging.
If we page by 100, the way nHibernate generates the successive SQL queries is:
- TOP 100 // gives us first 100
- TOP 200 // gives us 2nd 100 of this block
- etc
The above, without an ORDER BY / sorting at nHibernate level, has the end result that some rows never surface to nHibernate. We surmise this is due to arbitrary sorting by SQL Server, so that rows "move around" between the pages (and thus "hide" from our application code).
If we do the nHibernate query as a single shot (returning all rows), we see all the data. (The query below is generated by nHibernate.)
Will adding an order by clause (aka nHibernate sorting) help?
SELECT top 33
...
FROM
salesOrder this_ left outer join [Item] Item2_ on this_.ItemId=Item2_.ItemId
WHERE this_.AccountId = @p0
and this_.ModifiedAt > @p1
and this_.ModifiedAt <= @p2
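Yes - paging is only stable with a deterministic ORDER BY, which means one that ends in a unique key so ties cannot reorder between pages. A hedged sketch of what the generated query should include (the tie-break column `SalesOrderId` is an assumed name for the table's unique key):

```sql
-- Deterministic paging: sort column plus a unique tie-breaker.
SELECT TOP 100 *
FROM   salesOrder this_
       LEFT OUTER JOIN [Item] Item2_ ON this_.ItemId = Item2_.ItemId
WHERE  this_.AccountId = @p0
  AND  this_.ModifiedAt > @p1
  AND  this_.ModifiedAt <= @p2
ORDER BY this_.ModifiedAt, this_.SalesOrderId;  -- unique key ends the sort
```

Without the tie-breaker, rows sharing the same ModifiedAt can legitimately appear in a different order on each execution, which matches the "rows moving around" symptom.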
Jonesome Reinstate Monica
(3489 rep)
Oct 29, 2014, 01:31 PM
• Last activity: Oct 29, 2014, 09:48 PM
4
votes
2
answers
311
views
removal of GUIDS used for all PK's, FKs & Clustered indexes
Inherited control over a database that has this nasty config. It also has large sections of code generated by NHibernate, including the generation of GUIDs before they get to the db, so no chance of using NEWSEQUENTIALID() either.
Obviously, changing some of these sounds like a huge piece of work, but changing the lookups doesn't sound too painful. I've documented the fragmentation, the clustered key width, the NC indexes built from these, the optimizer choosing incorrect plans, etc., but tasked with a workaround I'm coming up a bit short.
My current plan is to investigate how much it is possible to change and, in the interim, add an identity field to the main heavily indexed tables, make that the clustered index, and keep the GUIDs as PKs. Is this the best I can hope for? Or not even worth it? Any other workarounds that I have omitted?
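For one of the heavily indexed tables, the interim plan sketches out roughly as below. All names are placeholders; note that dropping the clustered PK rebuilds every nonclustered index and requires any foreign keys referencing it to be dropped and recreated around the change:

```sql
-- Move the clustered index to a new identity, keep the GUID as a
-- nonclustered primary key. dbo.BigTable / PK_BigTable are placeholders.
ALTER TABLE dbo.BigTable ADD RowId INT IDENTITY(1,1) NOT NULL;

ALTER TABLE dbo.BigTable DROP CONSTRAINT PK_BigTable;  -- was clustered on the GUID

CREATE UNIQUE CLUSTERED INDEX CIX_BigTable_RowId
    ON dbo.BigTable (RowId);

ALTER TABLE dbo.BigTable
    ADD CONSTRAINT PK_BigTable PRIMARY KEY NONCLUSTERED (Id);  -- GUID stays the PK
```

This stops new rows from fragmenting the clustered index and narrows every nonclustered index's row locator, while the NHibernate-generated GUIDs keep working unchanged.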
Thanks
DamagedGoods
(2591 rep)
Jan 9, 2014, 05:26 PM
• Last activity: Jul 9, 2014, 10:41 AM
0
votes
2
answers
4367
views
Join two subquery in hql
How can I write this SQL query as HQL:
select count(distinct s.id) as prepaid, count(distinct ss.id) as postpaid
from (select * from subscriber where subscriber.ispostpaid=0) s
join (select * from subscriber where ispostpaid=1) ss
where s.subscriber_status='active';
Or can I write this query without using a join, by using only a subquery?
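On the second question: one hedged rewrite avoids the join entirely with conditional aggregation over a single pass of `subscriber`, which tends to translate to HQL more directly. It keeps the original semantics, where the `active` filter applies only to the prepaid side:

```sql
-- Single-pass equivalent: count each group via CASE inside the aggregate.
SELECT COUNT(DISTINCT CASE WHEN ispostpaid = 0
                            AND subscriber_status = 'active' THEN id END) AS prepaid,
       COUNT(DISTINCT CASE WHEN ispostpaid = 1 THEN id END)              AS postpaid
FROM   subscriber;
```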
Dev Sharma
(11 rep)
Feb 12, 2014, 11:12 AM
• Last activity: Feb 13, 2014, 11:22 AM
2
votes
0
answers
358
views
Does Oracle ADO.NET varchar parameter size influence query plan reuse?
I am considering to apply a fix which will generate constant parameter sizes for (n)varchar parameters. The application is based on NHibernate with the old OracleClientDriver (using Microsoft Oracle driver, ODP.NET can't be used in the near future), and NHibernate runs on top of ADO.NET.
This driver creates parameterized SQL with the size of (n)varchar parameters set to the actual string size, which, of course, varies:
select * from People where Name=:p0; :p0 = 'John' [Type: String (4)]
select * from People where Name=:p0; :p0 = 'James' [Type: String (5)]
Doing this with SQL Server would create a huge number of query plans, one for each combination of string sizes (when multiple string parameters), which is of course highly inefficient. I always thought it's the same in Oracle, but now I've got doubt.
I applied a fix to the NHibernate driver, which would send always the max parameter size, e.g. if the above *Name* column was 32 chars wide, it would always send [Type: String (32)]. My code does not use dedicated Prepare statements, but sends parameterized SQL together with the values (similar to calling EXECUTE IMMEDIATE).
I then looked at the statement (actually an INSERT) in Oracle Enterprise Manager, and the old version did not appear in the duplicate queries list. The statement itself showed a parse for each call (total of more than 1000 after some testing), but few hard parses. Thus, I could see no difference in performance between the fixed and the variable varchar length. Does this mean the fix is futile, and different query plans for different parameter sizes occur only in SQL Server (and maybe other DBMS)? Does Oracle check only for SQL string equality, but not for equality in parameter size settings?
I also noticed that duplicate SQL was almost only found where parameter values were concatenated into the SQL string, instead of using bind parameters.
EDIT: cursor_sharing in Oracle is set to EXACT.
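A hedged way to check this directly: Oracle records why a SQL statement could not share an existing child cursor in `v$sql_shared_cursor`. If variable bind lengths were really forcing extra child cursors, `BIND_LENGTH_UPGRADEABLE = 'Y'` would show up on the affected children:

```sql
-- Statements with multiple child cursors, flagging bind-length upgrades.
SELECT s.sql_id,
       COUNT(*)                       AS child_cursors,
       MAX(c.bind_length_upgradeable) AS bind_len_upgrade
FROM   v$sql_shared_cursor c
JOIN   v$sql s
       ON s.sql_id = c.sql_id AND s.child_number = c.child_number
GROUP BY s.sql_id
HAVING COUNT(*) > 1;
```

Oracle buckets varchar bind lengths into ranges rather than matching exact sizes, so small length variations typically share the same child cursor anyway - which would explain seeing no difference between the fixed and variable versions.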
Erik Hart
(510 rep)
Jan 28, 2014, 12:36 PM
• Last activity: Jan 28, 2014, 09:43 PM
3
votes
1
answers
187
views
How can I monitor a production database to see if there are table scans happening?
How can I monitor a production database to see if there are table scans happening? And what indexes might fix them...
I'm told that we have adequate indexes, but I can't see every code path, and we have too many people touching the codebase (hey, lots of developers can sometimes be a good thing), so what's the most comprehensive way to see if we've missed indexes based on our production load?
SQL Server 2008 R2, C#/ASP.NET code, NHibernate are the key factors in use.
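A hedged starting point on 2008 R2, using only built-in DMVs (both reset on instance restart, so the counters reflect activity since then): cumulative scan counts per index, and the missing-index DMVs for candidate fixes.

```sql
-- Which indexes are being scanned most in the current database?
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_scans, s.user_seeks, s.user_lookups
FROM   sys.dm_db_index_usage_stats AS s
JOIN   sys.indexes AS i
       ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE  s.database_id = DB_ID()
ORDER BY s.user_scans DESC;

-- What indexes does the optimizer wish it had?
SELECT d.statement AS table_name, d.equality_columns, d.inequality_columns,
       d.included_columns, gs.user_seeks, gs.avg_user_impact
FROM   sys.dm_db_missing_index_details          AS d
JOIN   sys.dm_db_missing_index_groups           AS g
       ON g.index_handle = d.index_handle
JOIN   sys.dm_db_missing_index_group_stats      AS gs
       ON gs.group_handle = g.index_group_handle
ORDER BY gs.avg_user_impact * gs.user_seeks DESC;
```

The missing-index suggestions are heuristics, not a design; they should be consolidated by hand rather than created verbatim.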
jcolebrand
(6376 rep)
Sep 12, 2013, 07:44 PM
• Last activity: Sep 12, 2013, 09:20 PM
Showing page 1 of 12 total questions