Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
7
votes
2
answers
375
views
Does OPTIMIZE FOR UNKNOWN do anything for table-valued variables/parameters?
I have a query that is both prone to parameter sensitivity and is suffering from its table-valued parameter. I'm lazy and just want to solve this with query hints. When I'm lazy, I can solve parameter sensitivity with `OPTION (OPTIMIZE FOR UNKNOWN)`. This gives me three very tightly related questions:
1. Does `OPTION (OPTIMIZE FOR UNKNOWN)` have any impact on table-valued **variables**?
2. Does `OPTION (OPTIMIZE FOR UNKNOWN)` have any impact on table-valued **parameters**?
3. What query hints other than `OPTION (RECOMPILE)` can be used to influence cardinality estimates for table-valued **parameters**?
Both my own experiments and the documentation suggest that this is a lost cause. Table-valued parameters are subject to parameter sniffing, but all that gets sniffed is their cardinality, so `OPTION (OPTIMIZE FOR UNKNOWN)` ought to do nothing.
As for the question of using other table hints, I know that I can change the estimate for operations on the table-valued parameter (e.g. `OPTION (USE HINT ('ASSUME_MIN_SELECTIVITY_FOR_FILTER_ESTIMATES'))`), but I do not think that hints can influence the cardinality estimate for the parameter itself.
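A minimal sketch of the scenario under discussion, with hypothetical object names (`IdList`, `GetOrders`, `Orders` are illustrative, not from the question):

```sql
CREATE TYPE dbo.IdList AS TABLE (Id int NOT NULL PRIMARY KEY);
GO
CREATE PROCEDURE dbo.GetOrders
    @Ids dbo.IdList READONLY,
    @CustomerId int
AS
SELECT o.OrderId, o.CustomerId
FROM dbo.Orders AS o
JOIN @Ids AS i ON i.Id = o.OrderId
WHERE o.CustomerId = @CustomerId
-- OPTIMIZE FOR UNKNOWN addresses sniffing of scalar parameters like
-- @CustomerId; for @Ids only the row count is sniffed, so the hint
-- has nothing obvious to act on there.
OPTION (OPTIMIZE FOR UNKNOWN);
GO
```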
J. Mini
(1235 rep)
Apr 29, 2025, 06:19 PM
• Last activity: May 2, 2025, 06:35 AM
0
votes
3
answers
943
views
Why Do Primary Keys on Temp Tables Improve Performance Much More Than On Table Variables?
Many of my reports are essentially pivot tables. I take a set of big tables and transform each table individually so that a particular column (let's call it `CaseID`) that wasn't a primary key in the original tables will be a primary key in the output (e.g. use `ROW_NUMBER() OVER([...])` and filter for where that equals 1). Let's call each output a "subtable". I then join these subtables together on the primary key to make my final output.
Intuitively, telling the optimiser that `CaseID` is a primary key of each subtable (e.g. by storing the subtable in a temp table that is explicitly defined as having `CaseID` as a primary key) should give a massive performance boost and promote the usage of merge joins. In practice, I only see this benefit when the subtables are temp tables. When I do the same with a table variable, the performance that I get is no better than saying nothing about keys and just making the subtables CTEs. `OPTION (RECOMPILE)` makes no difference. My temp tables are not memory optimised.
I am aware that table variables do not hold statistics, but **it is as if giving a table variable a clustered primary key does nothing for performance, despite it being awesome for temp tables**. Why is this? Do I have a misconception about what "does not hold statistics" means?
I have seen this exact problem in multiple contexts, so I have not given any code example. It makes little sense to give a practical example when my problem has occurred so many times that it is clear that I am ignorant of a matter of theory.
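For concreteness, the two variants being compared look something like this (table and column names are illustrative):

```sql
-- Temp table variant: the clustered PK index also gets column
-- statistics, which the optimizer can use for cardinality estimates
-- and merge-join decisions.
CREATE TABLE #Subtable (CaseID int NOT NULL PRIMARY KEY CLUSTERED, Val int);

-- Table variable variant: the PK still provides an index and a
-- uniqueness guarantee, but no column statistics are created on it.
DECLARE @Subtable TABLE (CaseID int NOT NULL PRIMARY KEY CLUSTERED, Val int);
```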
J. Mini
(1235 rep)
Feb 12, 2024, 09:53 PM
• Last activity: Feb 21, 2024, 08:02 AM
0
votes
0
answers
61
views
Do Primary Keys On Table Variables Have No Statistics At All?
My confusion is down to an apparent contradiction in principles. Primary keys are indexes and those always have statistics, but table variables notoriously always don't have statistics. What follows is an experiment to determine which principle wins out. Run the following in a very quiet database. Let's assume SQL Server 2019.
USE tempdb;

SELECT COUNT(*) FROM sys.stats AS stat
CROSS APPLY sys.dm_db_stats_properties(stat.object_id, stat.stats_id) props;

DECLARE @FOO TABLE (BAR INT NOT NULL PRIMARY KEY CLUSTERED);
INSERT @FOO (BAR) VALUES (1), (2), (3);

SELECT COUNT(*) FROM sys.stats AS stat
CROSS APPLY sys.dm_db_stats_properties(stat.object_id, stat.stats_id) props;
And you will find that the same number is output twice, implying that your primary key has no statistics.
Now repeat this with a temp table.
USE tempdb;

SELECT COUNT(*) FROM sys.stats AS stat
CROSS APPLY sys.dm_db_stats_properties(stat.object_id, stat.stats_id) props;

CREATE TABLE #FOO_TEMP (BAR INT NOT NULL PRIMARY KEY CLUSTERED);
INSERT #FOO_TEMP (BAR) VALUES (1), (2), (3);

SELECT COUNT(*) FROM sys.stats AS stat
CROSS APPLY sys.dm_db_stats_properties(stat.object_id, stat.stats_id) props;
And you will find that the number has changed, implying that your primary key has statistics.
**Does this prove that primary keys on table variables have no statistics but those on temp tables do?**
J. Mini
(1235 rep)
Feb 17, 2024, 09:43 PM
4
votes
4
answers
2343
views
Effect on execution plans when a table variable has a primary key
Having read a great deal about the differences between temporary tables and table variables in SQL Server, I am experimenting with switching from mostly using temporary tables to mostly using table variables. (They seem to be a better fit for the types of queries I usually work with.)
In these queries, the tables hold unique identifiers that drive the lookup process. It's been my habit, when working with temporary tables, to include a `PRIMARY KEY` constraint so that the query optimizer is aware that it won't see any duplicates. However, given that the optimizer (in most circumstances, and for my queries) assumes that a table variable only holds a single row*, which is unique by definition, is the query optimizer going to make choices any differently if there's a `PRIMARY KEY` constraint?
\* Technically, it assumes there are no rows, but replaces the zero with a one. (Because the zero interacts very poorly with the rest of the estimation process.) But it also depends on whether the table variable is populated or not when the query is compiled. There is some background information here: https://dba.stackexchange.com/questions/16385/whats-the-difference-between-a-temp-table-and-table-variable-in-sql-server/16386#16386 .
I'm currently using SQL Server 2014, but I would be curious if the behavior changes in newer versions.
---
As has been pointed out, a `PRIMARY KEY` constraint comes with a clustered index that gives the query optimizer more choices on how to get data out of the table variable. I was aware of this and thinking about the rest of the query plan. But after attempting to clarify my question, I've decided that the question I was attempting to ask was too broad and probably particular to my extreme situation. (Nothing but navigational-type queries into half-a-trillion-row tables with an expectation of sub-second performance.) So I am going to leave my question as-is.
sam.bishop
(494 rep)
Feb 15, 2017, 08:39 PM
• Last activity: Feb 13, 2024, 04:45 AM
0
votes
1
answers
642
views
Why use a memory-optimized table variable instead of a temp table or normal table variable?
Table variables have a bad reputation, mostly due to cardinality estimates. However, this reputation was earned before the introduction of memory-optimized table variables. [Microsoft covers some use-cases here](https://learn.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/faster-temp-table-and-table-variable-by-using-memory-optimization?view=sql-server-ver16) , but I'm not sold on the benefits. A normal table variable will go into RAM very quickly, so I don't see how the general benefits of in-memory OLTP will apply for memory-optimized table variables. [This article](https://web.archive.org/web/20140111080518/http://blogs.technet.com/b/dataplatforminsider/archive/2014/01/07/sql-server-2014-in-memory-oltp-memory-optimized-table-types-and-table-variables.aspx) lists some benefits that apply, but its focus is clearly on listing differences rather than benefits.
In what circumstances would a memory-optimized table variable be preferred over a normal table variable or a temp table? To keep the playing ground even, **assume that we are working in a normal stored procedure that has no reference to any memory-optimized tables.** In other words, assume that my only usage of in-memory OLTP is for table variables. Furthermore, let's assume that we are on SQL Server 2022. Assuming a version like 2014 would make answers needlessly complicated.
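For reference, a memory-optimized table variable is declared via a memory-optimized table *type*, created once per database; a minimal sketch (type and column names are illustrative, and the database needs a `MEMORY_OPTIMIZED_DATA` filegroup first):

```sql
CREATE TYPE dbo.SessionIds AS TABLE
(
    Id int NOT NULL,
    INDEX ix_Id NONCLUSTERED (Id)  -- at least one index is required
)
WITH (MEMORY_OPTIMIZED = ON);
GO
-- Usage inside an ordinary stored procedure:
DECLARE @Ids dbo.SessionIds;
INSERT @Ids (Id) VALUES (1), (2), (3);
```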
J. Mini
(1235 rep)
Jan 13, 2024, 05:09 PM
• Last activity: Jan 16, 2024, 07:34 PM
491
votes
2
answers
262167
views
What's the difference between a temp table and table variable in SQL Server?
This seems to be an area with quite a few myths and conflicting views.
So what is the difference between a table variable and a local temporary table in SQL Server?
Martin Smith
(88051 rep)
Apr 11, 2012, 10:22 PM
• Last activity: May 22, 2023, 09:23 PM
3
votes
1
answers
588
views
Reasons for not Globally enabling Trace Flag 2453 - Cardinality on Table Variables
I've been doing some reading on the improvements that Trace Flag 2453 can give on the performance of Table Variables by maintaining statistics and was wondering what would be the reasons you would advise to not turn this on globally.
I understand there will be some overhead of recompiles if the number of rows being used substantially differs - but to me, this would appear similar to any other query, and be preferred behaviour if it is caching better query plans.
Are there other reasons why generating statistics on table variables, as happens for temp tables, would not be preferred behaviour?
Primarily I'm thinking about replacing a few places where we use XML to bring in data sets and use Table-Valued-Parameters instead. We do use temp tables for the most part.
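For context, the trace flag in question can be scoped globally or per query; a sketch (assuming a version where TF 2453 is supported, i.e. SQL Server 2012 SP2 and later):

```sql
-- Globally: affects the whole instance until restart (or TRACEOFF).
DBCC TRACEON (2453, -1);

-- Or per query, limiting the blast radius:
SELECT t.SomeCol          -- @TableVar and SomeCol are illustrative
FROM @TableVar AS t
OPTION (QUERYTRACEON 2453);  -- note: QUERYTRACEON requires sysadmin
                             -- unless applied via a plan guide
```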
crucible
(307 rep)
Sep 24, 2018, 10:47 PM
• Last activity: Oct 12, 2021, 08:18 AM
1
votes
1
answers
222
views
Inconsistent results between MySQL and MariaDB when using variables and order
I'm building an accounting table and I need to show the current balance on each transaction.
Everything is working fine in my local environment (Windows 10 + MySQL 5.7.19), but the same query gives me strange results on my VPS (Linux + MariaDB 10.2.31) and in SQL Fiddle (MySQL 5.6).
First the code, then the examples ([SQL Fiddle](http://sqlfiddle.com/#!9/1b3d75/4)) .
Tables:
CREATE TABLE `transaction_types` (
  `id` bigint(20) UNSIGNED NOT NULL,
  `name` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
  `slug` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
  `amount` decimal(13,4) NOT NULL DEFAULT 0.0000,
  `interests` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_bin DEFAULT NULL,
  `hourly` tinyint(1) NOT NULL DEFAULT 0,
  `salary` tinyint(1) NOT NULL DEFAULT 0,
  `income` tinyint(1) NOT NULL DEFAULT 1,
  `created_at` timestamp NULL DEFAULT NULL,
  `updated_at` timestamp NULL DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
INSERT INTO `transaction_types` (`id`, `name`, `slug`, `amount`, `interests`, `hourly`, `salary`, `income`, `created_at`, `updated_at`) VALUES
(1, 'Alquiler', 'alquiler', '43000.0000', NULL, 0, 0, 0, '2021-06-07 09:52:49', '2021-06-10 03:26:17'),
(2, 'Campus Nube', 'campus-nube', '13500.0000', NULL, 0, 0, 0, '2021-06-07 09:52:49', '2021-06-10 03:26:33'),
(3, 'Impuestos', 'impuestos', '8400.0000', NULL, 0, 0, 0, '2021-06-07 09:52:49', '2021-06-10 03:26:47'),
(4, 'Marketing', 'marketing', '0.0000', NULL, 0, 0, 0, '2021-06-07 09:52:49', '2021-06-07 09:52:49'),
(5, 'Otros', 'otros', '0.0000', NULL, 0, 0, 0, '2021-06-07 09:52:49', '2021-06-07 09:52:49'),
(6, 'Sueldo', 'sueldo', '750.0000', NULL, 1, 1, 0, '2021-06-07 09:52:49', '2021-06-10 03:28:49'),
(7, 'Videollamadas', 'videollamadas', '0.0000', NULL, 0, 0, 0, '2021-06-07 09:52:49', '2021-06-07 09:52:49'),
(8, 'Cuota', 'cuota', '0.0000', NULL, 0, 0, 1, '2021-06-07 09:52:49', '2021-06-07 09:52:49'),
(9, 'Derecho a examen', 'derecho-a-examen', '0.0000', NULL, 0, 0, 1, '2021-06-07 09:52:49', '2021-06-07 09:52:49'),
(10, 'Materiales', 'materiales', '0.0000', NULL, 0, 0, 1, '2021-06-07 09:52:49', '2021-06-07 09:52:49'),
(11, 'Matrícula', 'matricula', '0.0000', NULL, 0, 0, 1, '2021-06-07 09:52:49', '2021-06-07 09:52:49'),
(12, 'Otros', 'otros-1', '0.0000', NULL, 0, 0, 1, '2021-06-07 09:52:49', '2021-06-07 09:52:49'),
(13, 'Salida didáctica', 'salida-didactica', '0.0000', NULL, 0, 0, 1, '2021-06-07 09:52:49', '2021-06-07 09:52:49'),
(14, 'Sueldo Secretaria', 'sueldo-secretaria', '25780.0000', NULL, 0, 1, 0, '2021-06-10 03:29:21', '2021-06-10 03:29:21'),
(15, 'Gas natural', 'gas-natural', '400.0000', NULL, 0, 0, 0, '2021-06-29 20:40:16', '2021-06-29 20:40:16');
CREATE TABLE `transactions` (
  `id` bigint(20) UNSIGNED NOT NULL,
  `transaction_type_id` bigint(20) UNSIGNED NOT NULL,
  `payment_id` bigint(20) UNSIGNED DEFAULT NULL,
  `salary_id` bigint(20) UNSIGNED DEFAULT NULL,
  `division_id` bigint(20) UNSIGNED DEFAULT NULL,
  `user_id` bigint(20) UNSIGNED DEFAULT NULL,
  `amount` decimal(13,4) NOT NULL DEFAULT 0.0000,
  `discount` decimal(13,4) NOT NULL DEFAULT 0.0000,
  `interest` decimal(13,4) NOT NULL DEFAULT 0.0000,
  `total` decimal(13,4) NOT NULL DEFAULT 0.0000,
  `date` date NOT NULL,
  `comment` longtext COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `created_at` timestamp NULL DEFAULT NULL,
  `updated_at` timestamp NULL DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
INSERT INTO `transactions` (`id`, `transaction_type_id`, `payment_id`, `salary_id`, `division_id`, `user_id`, `amount`, `discount`, `interest`, `total`, `date`, `comment`, `created_at`, `updated_at`) VALUES
(6, 8, NULL, NULL, 96, 81, '3200.0000', '0.0000', '0.0000', '3200.00', '2021-05-31', 'Cuotas Mayo', '2021-06-10 03:23:38', '2021-06-10 03:23:38'),
(7, 8, NULL, NULL, 96, 80, '3200.0000', '0.0000', '0.0000', '3200.00', '2021-05-31', 'Cuotas Mayo', '2021-06-10 03:23:38', '2021-06-10 03:23:38'),
(8, 8, NULL, NULL, 96, 96, '3200.0000', '0.0000', '0.0000', '3200.00', '2021-05-31', 'Cuotas Mayo', '2021-06-10 03:23:38', '2021-06-10 03:23:38'),
(9, 8, NULL, NULL, 96, 120, '3200.0000', '0.0000', '0.0000', '3200.00', '2021-05-31', 'Cuotas Mayo', '2021-06-10 03:23:38', '2021-06-10 03:23:38'),
(10, 8, NULL, NULL, 96, 31, '3200.0000', '0.0000', '0.0000', '3200.00', '2021-05-31', 'Cuotas Mayo', '2021-06-10 03:23:38', '2021-06-10 03:23:38'),
(11, 8, NULL, NULL, 96, 25, '3200.0000', '0.0000', '0.0000', '3200.00', '2021-05-31', 'Cuotas Mayo', '2021-06-10 03:23:38', '2021-06-10 03:23:38'),
(12, 8, NULL, NULL, 96, 35, '3200.0000', '0.0000', '0.0000', '3200.00', '2021-05-31', 'Cuotas Mayo', '2021-06-10 03:23:38', '2021-06-10 03:23:38'),
(13, 8, NULL, NULL, 96, 46, '3200.0000', '0.0000', '0.0000', '3200.00', '2021-05-31', 'Cuotas Mayo', '2021-06-10 03:23:38', '2021-06-10 03:23:38'),
(14, 8, NULL, NULL, 96, 202, '3200.0000', '0.0000', '0.0000', '3200.00', '2021-05-31', 'Cuotas Mayo', '2021-06-10 03:23:38', '2021-06-10 03:23:38'),
(15, 8, NULL, NULL, 96, 40, '3200.0000', '0.0000', '0.0000', '3200.00', '2021-05-31', 'Cuotas Mayo', '2021-06-10 03:23:38', '2021-06-10 03:23:38'),
(19, 8, NULL, NULL, 95, 111, '5000.0000', '700.0000', '900.0000', '5200.00', '2021-06-06', 'pago cuotas', '2021-06-18 20:50:08', '2021-06-18 20:50:08'),
(20, 8, NULL, NULL, 95, 68, '5000.0000', '700.0000', '900.0000', '5200.00', '2021-06-06', 'pago cuotas', '2021-06-18 20:50:08', '2021-06-18 20:50:08'),
(21, 7, NULL, NULL, NULL, NULL, '2000.0000', '0.0000', '0.0000', '2000.00', '2021-06-04', 'zoom', '2021-06-18 20:50:51', '2021-06-18 20:50:51'),
(36, 14, NULL, 4, NULL, 32, '29780.0000', '0.0000', '0.0000', '29780.00', '2021-06-05', 'aaa', '2021-06-24 16:44:34', '2021-06-24 16:44:34'),
(37, 6, NULL, 5, NULL, 105, '30000.0000', '0.0000', '0.0000', '30000.00', '2021-06-01', NULL, '2021-06-24 16:45:25', '2021-06-24 16:45:25'),
(43, 8, 14, NULL, 95, 63, '5000.0000', '0.0000', '0.0000', '5000.00', '2021-06-01', NULL, '2021-06-24 18:45:19', '2021-06-24 18:45:19'),
(81, 4, NULL, NULL, NULL, NULL, '4000.0000', '0.0000', '0.0000', '4000.00', '2021-06-02', 'asasas', '2021-06-29 18:43:49', '2021-06-29 18:43:49'),
(83, 6, NULL, 15, NULL, 89, '18250.0000', '0.0000', '0.0000', '18250.00', '2021-06-01', NULL, '2021-06-29 19:04:20', '2021-06-29 19:04:20');
Query:
SET @balance = 0;
SET @temp = 0;

SELECT SUM(IF(`income`, +`total`, -`total`)) INTO @balance
FROM (
  SELECT `income`, `total`
  FROM `transactions`
  INNER JOIN `transaction_types`
    ON `transactions`.`transaction_type_id` = `transaction_types`.`id`
  ORDER BY `date` DESC, `transactions`.`created_at` DESC, `transactions`.`id` ASC
  LIMIT 9223372036854775807 OFFSET 0
) AS sub;
SELECT
  `transactions`.`id`,
  `transactions`.`date`,
  `transactions`.`total`,
  `transaction_types`.`name` AS typeName,
  `transaction_types`.`income`,
  @balance := (@balance - @temp) AS balance,
  @temp := IF(`income`, +`total`, -`total`) AS temp
FROM `transactions`
INNER JOIN `transaction_types`
  ON `transactions`.`transaction_type_id` = `transaction_types`.`id`
ORDER BY `date` DESC, `transactions`.`created_at` DESC, `transactions`.`id` ASC;
This is the expected output (local):
| id | date | income | total | balance | temp |
|---:|------------|-------:|-----------:|------------:|------------:|
| 19 | 2021-06-06 | 1 | 5200.0000 | -36630.0000 | 5200.0000 |
| 20 | 2021-06-06 | 1 | 5200.0000 | -41830.0000 | 5200.0000 |
| 36 | 2021-06-05 | 0 | 29780.0000 | -47030.0000 | -29780.0000 |
| 21 | 2021-06-04 | 0 | 2000.0000 | -17250.0000 | -2000.0000 |
| 81 | 2021-06-02 | 0 | 4000.0000 | -15250.0000 | -4000.0000 |
| 83 | 2021-06-01 | 0 | 18250.0000 | -11250.0000 | -18250.0000 |
| 43 | 2021-06-01 | 1 | 5000.0000 | 7000.0000 | 5000.0000 |
| 37 | 2021-06-01 | 0 | 30000.0000 | 2000.0000 | -30000.0000 |
| 6 | 2021-05-31 | 1 | 3200.0000 | 32000.0000 | 3200.0000 |
| 7 | 2021-05-31 | 1 | 3200.0000 | 28800.0000 | 3200.0000 |
| 8 | 2021-05-31 | 1 | 3200.0000 | 25600.0000 | 3200.0000 |
| 9 | 2021-05-31 | 1 | 3200.0000 | 22400.0000 | 3200.0000 |
| 10 | 2021-05-31 | 1 | 3200.0000 | 19200.0000 | 3200.0000 |
| 11 | 2021-05-31 | 1 | 3200.0000 | 16000.0000 | 3200.0000 |
| 12 | 2021-05-31 | 1 | 3200.0000 | 12800.0000 | 3200.0000 |
| 13 | 2021-05-31 | 1 | 3200.0000 | 9600.0000 | 3200.0000 |
| 14 | 2021-05-31 | 1 | 3200.0000 | 6400.0000 | 3200.0000 |
| 15 | 2021-05-31 | 1 | 3200.0000 | 3200.0000 | 3200.0000 |
This is what I'm getting (server):
| id | date | total | income | balance | temp |
|---:|------------|-----------:|-------:|------------:|------------:|
| 19 | 2021-06-06 | 5200.0000 | 1 | -14380.0000 | 5200.0000 |
| 20 | 2021-06-06 | 5200.0000 | 1 | -19580.0000 | 5200.0000 |
| 36 | 2021-06-05 | 29780.0000 | 0 | -29780.0000 | -29780.0000 |
| 21 | 2021-06-04 | 2000.0000 | 0 | 15620.0000 | -2000.0000 |
| 81 | 2021-06-02 | 4000.0000 | 0 | -36630.0000 | -4000.0000 |
| 83 | 2021-06-01 | 18250.0000 | 0 | -2630.0000 | -18250.0000 |
| 43 | 2021-06-01 | 5000.0000 | 1 | -24780.0000 | 5000.0000 |
| 37 | 2021-06-01 | 30000.0000 | 0 | -32630.0000 | -30000.0000 |
| 6 | 2021-05-31 | 3200.0000 | 1 | 17620.0000 | 3200.0000 |
| 7 | 2021-05-31 | 3200.0000 | 1 | 14420.0000 | 3200.0000 |
| 8 | 2021-05-31 | 3200.0000 | 1 | 11220.0000 | 3200.0000 |
| 9 | 2021-05-31 | 3200.0000 | 1 | 8020.0000 | 3200.0000 |
| 10 | 2021-05-31 | 3200.0000 | 1 | 4820.0000 | 3200.0000 |
| 11 | 2021-05-31 | 3200.0000 | 1 | 1620.0000 | 3200.0000 |
| 12 | 2021-05-31 | 3200.0000 | 1 | -1580.0000 | 3200.0000 |
| 13 | 2021-05-31 | 3200.0000 | 1 | -4780.0000 | 3200.0000 |
| 14 | 2021-05-31 | 3200.0000 | 1 | -7980.0000 | 3200.0000 |
| 15 | 2021-05-31 | 3200.0000 | 1 | -11180.0000 | 3200.0000 |
[SQL Fiddle](http://sqlfiddle.com/#!9/1b3d75/4) gives me the same results if I have the `typeName` column in the `SELECT`; if I remove it, the result changes: [SQL Fiddle](http://sqlfiddle.com/#!9/1b3d75/5) . Maybe it has something to do with the `JOIN`?
If you read the first table from bottom to top, you will see that the `temp` column (which is the signed `total`) is being added to the `balance`.
-36630.0000 is the total balance, and both tables have that number, but that number should be the first one.
If I delete some records, the query sometimes gives me the expected results and sometimes does not.
I have no idea what's going on.
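One plausible explanation is that the evaluation order of user variables in a `SELECT` is not guaranteed, so `@balance`/`@temp` can be computed in a different order on different servers. A hedged sketch of a deterministic alternative, assuming MySQL 8.0+ or MariaDB 10.2+ (window functions), which computes the same running balance without session variables:

```sql
-- Running balance: sum of the signed totals of the current row and
-- every row after it in the displayed order, matching the expected output.
SELECT t.id,
       t.date,
       t.total,
       ty.income,
       SUM(IF(ty.income, t.total, -t.total))
           OVER (ORDER BY t.date DESC, t.created_at DESC, t.id ASC
                 ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS balance
FROM transactions AS t
INNER JOIN transaction_types AS ty ON t.transaction_type_id = ty.id
ORDER BY t.date DESC, t.created_at DESC, t.id ASC;
```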
azeós
(113 rep)
Jul 1, 2021, 06:11 PM
• Last activity: Jul 1, 2021, 08:17 PM
21
votes
2
answers
14789
views
Why must TVPs be READONLY, and why can't parameters of other types be READONLY
According to this blog, parameters to a function or a stored procedure are essentially pass-by-value if they aren't `OUTPUT` parameters, and essentially treated as a safer version of pass-by-reference if they are `OUTPUT` parameters.
At first I thought the goal of forcing a TVP to be declared `READONLY` was to clearly signal to developers that the TVP can't be used as an `OUTPUT` parameter, but there must be more going on, because we can't declare a non-TVP as `READONLY`. For example, the following fails:
create procedure [dbo].[test]
@a int readonly
as
select @a
> Msg 346, Level 15, State 1, Procedure test
> The parameter "@a" can not be declared READONLY since it is not a table-valued parameter.

1. Since statistics aren't stored on TVPs, what is the rationale behind preventing DML operations?
2. Is it related to not wanting TVPs to be `OUTPUT` parameters for some reason?
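For context, a minimal TVP declaration and use (type and procedure names are illustrative):

```sql
CREATE TYPE dbo.IntList AS TABLE (Value int NOT NULL);
GO
CREATE PROCEDURE dbo.ShowList
    @List dbo.IntList READONLY   -- READONLY is mandatory for TVPs
AS
BEGIN
    SELECT Value FROM @List;
    -- Any DML against @List here (INSERT/UPDATE/DELETE) fails at
    -- compile time, because the parameter is READONLY.
END
GO
```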
Erik
(4833 rep)
Nov 11, 2015, 04:04 PM
• Last activity: Mar 17, 2021, 11:53 AM
0
votes
1
answers
294
views
Subquery with variable in where
I don't know why, but if I do the subquery in the first `LEFT JOIN` with the value of the variable `@id`, it does not return any results; it is as if the value of `@id` was always null.
**But why not do the `LEFT JOIN` directly?**
Simple: the table is large and the direct query takes forever to execute, so this way would be much faster.
SELECT DISTINCT
    @id := p.id_pen,
    p.id_env_pen,
    0 AS id_star,
    FALSE AS is_star,
    u.id_user AS sender_id,
    u.name_user AS sender_name,
    u2.id_user AS response_id,
    u2.name_user AS response_name,
    p.text_pen AS text_send,
    p2.text_pen AS text_response,
    DATE_FORMAT(p.created_at_pen, '%Y-%m-%d %H:%i:%s') AS send_datetime,
    DATE_FORMAT(p2.created_at_pen, '%Y-%m-%d %H:%i:%s') AS response_datetime,
    p.time_res_pen,
    p.state_pen,
    p.created_at_pen
FROM pendency AS p
LEFT JOIN (
    SELECT
        p2.id_pen,
        p2.text_pen,
        p2.created_at_pen,
        p2.id_pen_req_pen
    FROM pendency AS p2
    WHERE p2.id_pen_req_pen = @id
    # HERE, I tried = p.id_pen but it does not work either;
    # it does not recognise the alias from outside
) AS p2 ON p2.id_pen_req_pen = @id
INNER JOIN users AS u ON u.id_user = p.id_user_pen
LEFT JOIN users AS u2 ON u2.id_user = p.id_user_req_pen
WHERE (p.id_user_pen = '5' AND p.id_user_req_pen != -10)
   OR p.id_user_req_pen = '5'
CREATE TABLE statements:
CREATE TABLE IF NOT EXISTS `users` (
  `id_user` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `id_fs_user` VARCHAR(30) DEFAULT 'Não informado',
  `email_user` VARCHAR(105) DEFAULT 'Não informado',
  `pass_user` VARCHAR(35) DEFAULT 'Não informado',
  `name_user` VARCHAR(120) DEFAULT 'Não informado',
  `cpf_user` VARCHAR(15) DEFAULT 'Não informado',
  `type_user` VARCHAR(15) DEFAULT 'master',
  `work_week_user` INT DEFAULT 5,
  `is_block_user` BOOLEAN DEFAULT FALSE,
  `permission_id_user` INT NOT NULL DEFAULT 0,
  `attendant_user` BOOLEAN DEFAULT FALSE,
  `version_id_user` INT NOT NULL DEFAULT 0,
  `last_login_user` DATETIME,
  `last_move_user` DATETIME,
  `id_user_user` INT DEFAULT 0,
  `created_at_user` DATETIME DEFAULT CURRENT_TIMESTAMP,
  `safe_del_date_user` DATETIME DEFAULT CURRENT_TIMESTAMP,
  `safe_del_state_user` BOOLEAN DEFAULT FALSE,
  `safe_del_user_user` INT DEFAULT 0
);
CREATE TABLE IF NOT EXISTS `pendency` (
  `id_pen` INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
  `id_env_pen` INT NOT NULL,
  `text_pen` TEXT,
  `id_user_req_pen` INT DEFAULT 0,
  `id_pen_req_pen` INT DEFAULT 0,
  `time_res_pen` INT NOT NULL,
  `state_pen` BOOLEAN DEFAULT FALSE,
  `id_old_pen` INT NOT NULL,
  `id_user_pen` INT NOT NULL,
  `created_at_pen` DATETIME DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (`id_env_pen`) REFERENCES `env` (`id_env`) ON DELETE CASCADE
);
Woton Sampaio
(63 rep)
Feb 12, 2021, 02:31 AM
• Last activity: Feb 12, 2021, 05:18 AM
0
votes
1
answers
638
views
How to use query result as REPLACE parameters
I'm trying to:

1. Find a string inside a lot of content
2. Replace that specific part of the string with a new one, using the `REPLACE` function like this: `REPLACE(column_name, query#1, query#2)`

Is this possible?
I have already located the exact part of the content that I want to replace:

SELECT `post_title`,
       SUBSTRING(`post_content`, LOCATE('[flv:', `post_content`),
                 LOCATE(']', `post_content`) - LOCATE('[flv:', `post_content`) + 1) AS amazonlinks
FROM wp_posts_duplicatedfortestonly
WHERE `post_content` LIKE '%[flv:%'
and I have the new string ready in another table column: `Png_Link`.
Finally, I'm trying to merge all this using the `REPLACE` function, but it's giving me a lot of errors:
UPDATE wp_posts_duplicatedfortestonly
SET post_content = REPLACE(post_content,
    (SELECT SUBSTRING(`post_content`, LOCATE('[flv:', `post_content`),
                      LOCATE(']', `post_content`) - LOCATE('[flv:', `post_content`) + 1)
     FROM (SELECT post_content FROM wp_posts_duplicatedfortestonly) AS something
     WHERE `post_content` LIKE '%[flv:%'),
    (SELECT `Png_Link`
     FROM VideoPlayerUdpateLinksReady
     WHERE `Mp4_Link` != ''));
Something like this (only difference is i'm using 2 tables): https://www.db-fiddle.com/f/91EW2kJDc8aWJYYvw86vG9/0
Also tried with variables, without any luck:

SET @tblname := "wp_posts_duplicatedfortestonly";
SET @colname := "post_content";

SELECT @lookfor := SUBSTRING(`post_content`, LOCATE('[flv:', `post_content`),
                             LOCATE(']', `post_content`) - LOCATE('[flv:', `post_content`) + 1)
FROM @tblname WHERE `post_content` LIKE '%[flv:%';

SELECT @replacewith := `Png_Link` FROM VideoPlayerUdpateLinksReady WHERE `Mp4_Link` != '';

SET @qry = CONCAT('UPDATE ', @tblname, ' SET ', @colname, ' = REPLACE(', @colname, ',', @lookfor, ',', @replacewith, ')');
PREPARE QUERY FROM @qry;
EXECUTE QUERY;
What am I doing wrong? Is it possible to use SELECT expressions as REPLACE function parameters, or what other workaround do I have?
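The scalar-subquery approach breaks as soon as either subquery returns more than one row; the usual workaround is to compute the search string per row inside one `UPDATE`. A hedged sketch, assuming each post can be joined to its replacement link (the `post_id`/`ID` join key here is hypothetical and must be adjusted to the real schema):

```sql
UPDATE wp_posts_duplicatedfortestonly AS p
JOIN VideoPlayerUdpateLinksReady AS v
  ON v.post_id = p.ID              -- hypothetical key: adjust to your schema
SET p.post_content = REPLACE(
        p.post_content,
        -- recompute the [flv:...] tag for this specific row
        SUBSTRING(p.post_content, LOCATE('[flv:', p.post_content),
                  LOCATE(']', p.post_content) - LOCATE('[flv:', p.post_content) + 1),
        v.Png_Link)
WHERE p.post_content LIKE '%[flv:%';
```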
gallo2000sv
(27 rep)
Jul 30, 2019, 06:02 PM
• Last activity: Aug 1, 2019, 10:11 PM
8
votes
3
answers
1614
views
Using SPID in DB Tables (instead of Table Variable)
Transactional database used for booking things...
Our vendor was asked to replace #temptables with @tablevariables (because of heavy compile locks) but instead they replaced with an actual table that adds SPID as a column to ensure the stored procedure only acts on the applicable rows.
Do you see any risk in this method of operation? Before, all transactions were isolated within their own transaction... I worried we may end up locking this table a lot, but their opinion is that SQL Server uses row-level locking and this won't create more locks.
SQL Server Version: 2016 Enterprise - 13.0.5216.0
CREATE TABLE dbo.qryTransactions (
ID int IDENTITY (0,1) NOT NULL CONSTRAINT pk_qryTransactions PRIMARY KEY CLUSTERED,
spid int NOT NULL,
OrderID int,
ItemID int,
TimeTransactionStart datetime,
TimeTransactionEnd datetime,
...other fields
)
CREATE INDEX idx_qryTransactions_spidID ON qryTransactions (spid, ID) INCLUDE (ItemID, OrderID, TimeTransactionStart, TimeTransactionEnd)
outjet
(183 rep)
Jul 17, 2019, 04:39 PM
• Last activity: Jul 18, 2019, 01:48 PM
-1
votes
1
answers
284
views
use a table as a variable
I was wondering if there is a way to set a table name in SQL to be a variable.
I have a list of queries that hit the same table and didn't want to have to update them all, so I was hoping to declare the table name as a variable.
Below is an example: what I wanted to do is declare the table name once and use `@tablename` in the queries shown below.
/*at the field level */
SELECT
(
SELECT COUNT(DISTINCT PROVIDER_TIN) AS PROVIDER_TIN
from [Coloma].[dbo].Aetna_Medicare_2019 a
WHERE PROVIDER_TIN = '' OR PROVIDER_TIN IS NULL
) AS NULL_TINS,
(
SELECT COUNT(DISTINCT PROVIDER_ID) AS PROVIDER_ID
from [Coloma].[dbo].Aetna_Medicare_2019 a
WHERE PROVIDER_ID = '' OR PROVIDER_ID IS NULL
) AS NULL_IDS
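T-SQL does not allow a variable in place of a table name directly; the usual workaround is dynamic SQL via `sp_executesql`. A sketch using the table name from the question (in real code, validate or whitelist the name to avoid SQL injection):

```sql
DECLARE @tablename sysname = N'[Coloma].[dbo].[Aetna_Medicare_2019]';
DECLARE @sql nvarchar(max) = N'
SELECT
  (SELECT COUNT(DISTINCT PROVIDER_TIN) FROM ' + @tablename + N'
   WHERE PROVIDER_TIN = '''' OR PROVIDER_TIN IS NULL) AS NULL_TINS,
  (SELECT COUNT(DISTINCT PROVIDER_ID) FROM ' + @tablename + N'
   WHERE PROVIDER_ID = '''' OR PROVIDER_ID IS NULL) AS NULL_IDS;';

EXEC sp_executesql @sql;
```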
Newelle Horn
(21 rep)
Apr 11, 2019, 04:19 AM
• Last activity: Apr 11, 2019, 05:23 AM
7
votes
2
answers
1014
views
how to create a data type and make it available in all Databases?
If I create a stored procedure in the master database, and I want to execute it from any of my databases I just follow this link:
Making a Procedure Available in all Databases
that gives me a code example (shown as an image in the original post).
Just by following that example, I can call my procedure from any database.
But what if I create a table data type in master?
How can I use it in any of my databases?
use master
IF NOT EXISTS (select * from sys.types where name = 'theReplicatedTables')
CREATE TYPE theReplicatedTables AS TABLE
( OBJ_ID INT NOT NULL,
PRIMARY KEY CLUSTERED (OBJ_ID)
);
use APIA_Repl_Sub
go
declare @the_tables [dbo].[theReplicatedTables]
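User-defined types cannot be referenced across databases, so creating the type in master does not make it usable elsewhere. One approach sometimes used (a sketch, not something the question confirms) is to deploy the type into each database that needs it, and optionally into the model database so that newly created databases inherit it:

```sql
-- Deploy the type into an existing database (repeat per database).
USE APIA_Repl_Sub;
GO
IF NOT EXISTS (SELECT * FROM sys.types WHERE name = 'theReplicatedTables')
    CREATE TYPE dbo.theReplicatedTables AS TABLE
    (
        OBJ_ID INT NOT NULL,
        PRIMARY KEY CLUSTERED (OBJ_ID)
    );
GO
-- Creating the same type in the model database makes future
-- databases pick it up automatically at creation time.
```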


Marcello Miorelli
(17274 rep)
Apr 7, 2019, 11:08 PM
• Last activity: Apr 8, 2019, 12:16 AM
8
votes
1
answers
1597
views
How to name table-variable function unique constraint?
I am renaming some unique constraints to match our database objects naming convention. Strangely, there are several multi-line table valued function which returned table has unique constraints as follows: CREATE FUNCTION [dbo].[fn_name] (...) RETURNS @Result TABLE ( ID BIGINT PRIMARY KEY, ... RowNum...
I am renaming some unique constraints to match our database object naming convention. Strangely, there are several multi-statement table-valued functions whose returned tables have unique constraints, as follows:
CREATE FUNCTION [dbo].[fn_name] (...)
RETURNS @Result
TABLE
(
ID BIGINT PRIMARY KEY,
...
RowNum BIGINT UNIQUE
)
BEGIN
...
RETURN
END
GO
I have tried to name it like this, but it is not working:
CREATE FUNCTION [dbo].[fn_name] (...)
RETURNS @Result
TABLE
(
ID BIGINT PRIMARY KEY,
...
RowNum BIGINT
,CONSTRAINT UC_fn_name_RowNum UNIQUE([RowNum])
)
BEGIN
...
RETURN
END
GO
Is it possible to set the name of a unique constraint when it is part of table variable function definition?
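For context, named constraints are not allowed in table variable (or function return table) definitions, which is why the CONSTRAINT syntax above fails to parse. A possible workaround is the inline index syntax, which does accept a name; this is a sketch, assuming SQL Server 2016 or later (where inline UNIQUE indexes are supported), and is not verified against the question's version:

```sql
CREATE FUNCTION [dbo].[fn_name] (...)
RETURNS @Result
TABLE
(
    ID BIGINT PRIMARY KEY,
    RowNum BIGINT,
    INDEX UX_fn_name_RowNum UNIQUE (RowNum)  -- inline named index instead of a named constraint
)
BEGIN
    ...
    RETURN
END
GO
```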
gotqn
(4348 rep)
Dec 7, 2015, 01:56 PM
• Last activity: Mar 11, 2019, 08:27 AM
9
votes
1
answers
291
views
how come a table variable is improving the performance of a query in this circumstance?
for this specific case, that I will try to explain below, using a table variable is performing better than not using a table variable. I would like to know why, and if possible, get rid of the table variable. this is the query using the table variable: USE [BISource_UAT] GO set statistics io on SET...
For the specific case that I will try to explain below, using a table variable performs better than not using one.
I would like to know why and, if possible, get rid of the table variable.
This is the query using the table variable:
USE [BISource_UAT]
GO
set statistics io on
SET STATISTICS TIME ON
SET NOCOUNT ON;
DECLARE @OrderStartDate DATETIME = '15-feb-2015'
DECLARE @OrderEndDate DATETIME = '28-feb-2016'
DECLARE @tmp TABLE
(
strBxOrderNo VARCHAR(20)
,sintReturnId INT
)
INSERT INTO @tmp
SELECT strBxOrderNo
,sintReturnId
FROM TABLEBACKUPS.dbo.tblBReturnHistory rh
WHERE rh.sintReturnStatusId in ( 3 )
AND rh.dtmAdded >= @OrderStartDate
AND rh.dtmAdded 0
AND o.sdtmOrdCreated >= @OrderStartDate
this produces the following stats:
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time:
CPU time = 78 ms, elapsed time = 86 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Table '#BF0B2154'. Scan count 0, logical reads 1957, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'tblBReturnHistory'. Scan count 1, logical reads 13, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 16 ms, elapsed time = 9 ms.
Table 'tblBGiftVoucherPromotion'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'tblBGiftVoucher'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'tblBOrderPaymentGiftVoucher'. Scan count 0, logical reads 452, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'tblBOrderItem'. Scan count 0, logical reads 904, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'tblBOrderPayment'. Scan count 186, logical reads 1649, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'tblBAccountParticipant'. Scan count 0, logical reads 7112, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'tblBOrder'. Scan count 3557, logical reads 14267, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'tblBOrderItemReturn'. Scan count 1951, logical reads 5865, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'tblBReturn'. Scan count 0, logical reads 3902, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table '#BF0B2154'. Scan count 1, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 125 ms, elapsed time = 138 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Using *showplan_text on*, I would like to show the query plan:
first part of the query - populating the table variable
second part of the query:
using the table variable and joining the other tables:
This is the XML plan of the query using table variable.
now this is the same query NOT using a table variable:
USE [BISource_UAT]
GO
set statistics io on
SET STATISTICS TIME ON
SET NOCOUNT ON;
DECLARE @OrderStartDate DATETIME = '15-feb-2015'
DECLARE @OrderEndDate DATETIME = '28-feb-2016'
SELECT
op.lngPaymentID
,op.strBxOrderNo
,op.sintPaymentTypeID
,op.strCurrencyCode
,op.strBCCurrencyCode
,op.decPaymentAmount
,op.decBCPaymentAmount
,ap.strAccountCode
,o.sintMarketID
,o.sintOrderChannelID
,o.sintOrderTypeID
,CASE WHEN opgv.lngpaymentID IS NULL
THEN NULL -- Not a Voucher = Null
WHEN gvp.strIssuedBxOrderNo IS NULL
THEN 0 ELSE 1 END AS [IsPromoVoucher]
-- Is a Voucher - check type
,o.sdtmOrdCreated
FROM TABLEBACKUPS.dbo.tblBReturnHistory rh
INNER JOIN TABLEBACKUPS.dbo.tblBReturn r
ON r.sintReturnId = rh.sintReturnId
AND r.strBxOrderNo = rh.strBxOrderNo
INNER JOIN bocss2.dbo.tblBOrder o
ON o.strBxOrderNo = r.strBxOrderNo
AND o.sdtmOrdCreated >= @OrderStartDate
INNER JOIN Bocss2.dbo.tblBOrderPayment op
ON op.strBxOrderNo = o.strBxOrderNo
INNER JOIN TABLEBACKUPS.dbo.tblBOrderItemReturn AS oir
ON r.sintReturnId = oir.sintReturnID
AND r.strBxOrderNo = oir.strBxOrderNo
AND oir.decReturnFinalAmount > 0
INNER JOIN Bocss2.dbo.tblBOrderItem AS i
ON i.strBxOrderNo = oir.strBxOrderNo
AND i.sintOrderSeqNo = oir.sintOrderSeqNo
INNER JOIN TABLEBACKUPS.dbo.tblBAccountParticipant ap
ON o.lngAccountParticipantID = ap.lngParticipantID
LEFT OUTER JOIN TABLEBACKUPS.dbo.tblBOrderPaymentGiftVoucher opgv
ON op.lngPaymentID = opgv.lngPaymentID
LEFT OUTER JOIN TABLEBACKUPS.dbo.tblBGiftVoucher gv
ON opgv.strVoucherNumber = gv.strVoucherNumber
LEFT OUTER JOIN TABLEBACKUPS.dbo.tblBGiftVoucherPromotion gvp
ON gvp.strIssuedBxOrderNo = gv.strIssuedBxOrderNo
WHERE rh.sintReturnStatusId in ( 3 )
AND rh.dtmAdded >= @OrderStartDate
AND rh.dtmAdded < @OrderEndDate
> query with table variable
> query without table variable
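A common explanation for results like this is the fixed cardinality guess for table variables (one row on this version of SQL Server), which can accidentally yield a better plan shape than the estimates the optimizer derives for the full join. One way to test that theory, sketched here with a hypothetical column list and elided joins, is to force a statement-level recompile so the table variable's actual row count is visible to the optimizer:

```sql
SELECT op.lngPaymentID, op.strBxOrderNo  -- ...remaining columns as in the original query
FROM @tmp t
INNER JOIN TABLEBACKUPS.dbo.tblBReturn r
    ON r.sintReturnId = t.sintReturnId
   AND r.strBxOrderNo = t.strBxOrderNo
-- ...remaining joins as in the original query
OPTION (RECOMPILE);  -- lets the optimizer see @tmp's real cardinality
```

If the plan gets worse with RECOMPILE, the one-row guess was doing the work; if it stays good, the benefit likely comes from materializing the small intermediate result itself.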




Marcello Miorelli
(17274 rep)
Apr 11, 2016, 05:11 PM
• Last activity: May 23, 2018, 10:51 PM
3
votes
2
answers
70465
views
How to insert into TABLE Variable?
I want to store 2 coordinate points (latitude, longitude) in a table variable. I have tried: declare @coordinates table(latitude1 decimal(12,9), longitude1 decimal(12,9), latitude2 decimal(12,9), longitude2 decimal(12,9)) select latitude, longitude into @coordinates from loc.locations where place_na...
I want to store 2 coordinate points (latitude, longitude) in a table variable.
I have tried:
declare @coordinates table(latitude1 decimal(12,9),
longitude1 decimal(12,9),
latitude2 decimal(12,9),
longitude2 decimal(12,9))
select latitude,
longitude into @coordinates
from loc.locations
where place_name IN ('Delhi', 'Mumbai')
select @coordinates
It's showing error:
> Msg 102, Level 15, State 1, Line 2
> Incorrect syntax near '@coordinates'.
The result of the select query:
select latitude,
longitude
from loc.locations
where place_name IN ('Delhi', 'Mumbai')
is:
latitude longitude
28.666670000 77.216670000
19.014410000 72.847940000
How can I store the values in a table variable?
I ran the query
SELECT @@VERSION
and got the result:
> Microsoft SQL Server 2016 (RTM) - 13.0.1601.5 (X64) Apr 29 2016 23:23:58 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows 10 Enterprise 6.3 (Build 16299: )
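SELECT ... INTO always creates a new table, so it cannot target an already declared table variable; table variables need INSERT INTO ... SELECT, and the variable is then queried like a table rather than with SELECT @coordinates. A minimal sketch against the question's two-column result set (collapsing the four columns to two; spreading both cities across one row would need a pivot or a separate update):

```sql
DECLARE @coordinates TABLE (
    latitude  decimal(12,9),
    longitude decimal(12,9)
);

INSERT INTO @coordinates (latitude, longitude)
SELECT latitude, longitude
FROM loc.locations
WHERE place_name IN ('Delhi', 'Mumbai');

SELECT * FROM @coordinates;  -- not SELECT @coordinates
```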
Gour Gopal
(255 rep)
Jan 19, 2018, 08:29 AM
• Last activity: Jan 19, 2018, 10:35 AM
0
votes
1
answers
33
views
What do the table aliases mean when retrieving view definitions?
I'm working on a small script to transfer views over from one database to another. I can get the view definition by using select view_definition from information_schema.views where table_name = 'actor_info', however the result is SELECT a.actor_id, a.first_name, a.last_name, group_concat(DISTINCT ((...
I'm working on a small script to transfer views over from one database to another. I can get the view definition by using select view_definition from information_schema.views where table_name = 'actor_info'; however, the result is
SELECT a.actor_id,
a.first_name,
a.last_name,
group_concat(DISTINCT (((c.name)::text || ': '::text)
|| ( SELECT group_concat((f.title)::text) AS group_concat
FROM ((film f
JOIN film_category fc_1 ON ((f.film_id = fc_1.film_id)))
JOIN film_actor fa_1 ON ((f.film_id = fa_1.film_id)))
WHERE ((fc_1.category_id = c.category_id) AND (fa_1.actor_id = a.actor_id))
GROUP BY fa_1.actor_id))) AS film_info
FROM (((actor a
LEFT JOIN film_actor fa ON ((a.actor_id = fa.actor_id)))
LEFT JOIN film_category fc ON ((fa.film_id = fc.film_id)))
LEFT JOIN category c ON ((fc.category_id = c.category_id)))
GROUP BY a.actor_id, a.first_name, a.last_name;
I'm looking for a way to figure out which tables a and c refer to. Is there a SQL statement that can be used for this?
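The aliases themselves are declared in the view's FROM clause (here a is actor and c is category, visible in the definition above). If the goal is a queryable list of the tables a view depends on, information_schema has a standard view for that; a sketch (it lists the underlying tables, but does not map individual aliases to them):

```sql
SELECT table_schema, table_name
FROM information_schema.view_table_usage
WHERE view_name = 'actor_info';
```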
daddy_raptor
(1 rep)
Jan 17, 2018, 07:33 PM
• Last activity: Jan 17, 2018, 08:03 PM
3
votes
1
answers
2199
views
Should I use tempdb or memory optimized table variable?
I would like to change some of my stored procs to use memory optimized tables. Before I do this, could you help me to clarify some things? # parallel execution Here I can read, the tempdb doesn’t support it for insert-operation (https://dba.stackexchange.com/questions/16385/whats-the-difference-betw...
I would like to change some of my stored procs to use memory optimized tables.
Before I do this, could you help me to clarify some things?
# parallel execution
Here I can read that tempdb doesn't support parallelism for insert operations (https://dba.stackexchange.com/questions/16385/whats-the-difference-between-a-temp-table-and-table-variable-in-sql-server) .
But what about data retrieval, like SELECT?
# memory underrun
With memory-optimized tables, could it happen that I get an error if the server runs low on RAM?
In this thread (https://www.brentozar.com/archive/2014/06/temp-tables-table-variables-memory-optimized-table-variables/) I can read about this.
# performance boost
Do memory-optimized tables provide the boost, or not (https://dba.stackexchange.com/questions/16385/whats-the-difference-between-a-temp-table-and-table-variable-in-sql-server) ?
This thread (https://www.brentozar.com/archive/2014/04/memory-ssd-tempdb-temp-table-sql-2014/) describes that tempdb (starting with SQL 2014) can also reduce disk I/O.
Should I use tempdb or memory-optimized table variables?
I'm using Microsoft SQL Server 2014 (12.0.5000.0) Enterprise 64-bit. The SQL queries are a bit complex: e.g. there is a query that uses 2 views, and each of these views depends on other views.
The result of the whole query is not big (a couple of thousand rows).
Some of the tables on which the views depend have 40-50k rows.
I already noticed that rewriting a view as a table-valued function increases performance, especially if the original view is embedded in another view and used multiple times.
But what about the 3 points above? To me, the links seem somewhat conflicting.
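For reference, a memory-optimized table variable requires a pre-created memory-optimized table type, and the database must already have a MEMORY_OPTIMIZED_DATA filegroup. A sketch with assumed names, not taken from the question's schema:

```sql
-- One-time setup per database (names are hypothetical).
CREATE TYPE dbo.OrderIdList AS TABLE
(
    OrderId INT NOT NULL,
    PRIMARY KEY NONCLUSTERED (OrderId)   -- an index is mandatory for memory-optimized types
)
WITH (MEMORY_OPTIMIZED = ON);
GO
-- Used like any table variable, but the rows live in memory, not in tempdb.
DECLARE @ids dbo.OrderIdList;
INSERT INTO @ids (OrderId) VALUES (1), (2);
```

Unlike a regular table variable, this removes tempdb I/O entirely, at the cost of the memory-pressure concern raised in the second point above.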
deemon
(33 rep)
Feb 21, 2017, 09:28 AM
• Last activity: Dec 9, 2017, 11:06 PM
2
votes
1
answers
5484
views
temp tables in tempdb are not cleaned up by the system
My question is similar to the one asked here: https://dba.stackexchange.com/questions/103633/sql-server-tempdb-rogue-temporary-tables# Since I don't have enough points to comment and the answer there doesn't address my concern, I can only ask a new question here. Using SQL Search Tool in Visual Stud...
My question is similar to the one asked here: https://dba.stackexchange.com/questions/103633/sql-server-tempdb-rogue-temporary-tables#
Since I don't have enough points to comment and the answer there doesn't address my concern, I can only ask a new question here.
Using the SQL Search tool in Visual Studio 2017, I can see the columns in the table and confirm that the temp table named #BBC835DE is indeed from a table variable, which is related to a stored procedure. I re-ran the procedure without any problem, but this table still hangs around. How do I drop tables like this and clean up tempdb? Thanks.
PS. Based on the results from the SQL Search tool, it seems that not all the tables named like #BBC835DE come from table variables; some come from cursors declared in stored procedures too.
Weihui Guo
(151 rep)
Aug 3, 2017, 01:11 PM
• Last activity: Aug 3, 2017, 02:10 PM
Showing page 1 of 20 total questions