
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

2 votes
2 answers
246 views
Is sql_variant compatible with First Normal Form (1NF)?
The 1NF rule says that we should not keep data of different types in a single column. Does that mean sql_variant is not compatible with first normal form and hence should not be used?
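For context, a minimal sketch of the situation the question describes: a single sql_variant column ends up holding values of different base types. The table and column names here are made up purely for illustration.

CREATE TABLE dbo.Settings
(
    SettingName  nvarchar(50) PRIMARY KEY,
    SettingValue sql_variant          -- one column, many base types
);

INSERT INTO dbo.Settings VALUES (N'MaxRetries', CAST(5 AS int));
INSERT INTO dbo.Settings VALUES (N'Greeting',   N'Hello');
INSERT INTO dbo.Settings VALUES (N'GoLiveDate', CAST('2018-07-27' AS date));

-- Each row reports a different base type for the same column
SELECT SettingName,
       SQL_VARIANT_PROPERTY(SettingValue, 'BaseType') AS BaseType
FROM dbo.Settings;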
igelr (2162 rep)
Jul 27, 2018, 03:20 PM • Last activity: Apr 19, 2025, 09:17 PM
8 votes
2 answers
1138 views
Collation conflict when comparing sql_variant with varchar literal
I found that the queries below return different results on two Azure Hyperscale databases that are running the same compatibility level, the same SET options, etc.
IF CAST(ServerProperty('Edition') AS nvarchar(128)) = 'SQL Azure' BEGIN
	SELECT 1
END
IF ServerProperty('Edition') = 'SQL Azure' BEGIN
	SELECT 2
END
IF 'SQL Azure' = ServerProperty('Edition') BEGIN
	SELECT 3
END
On one database it returns only 1; on the other database it returns 1, 2, and 3. I investigated the root cause and it seems to be caused by different collations of the databases. For the following queries:
SELECT SQL_VARIANT_PROPERTY(ServerProperty('Edition'), 'Collation')
SELECT name, collation_name, compatibility_level FROM sys.databases
For the database that returns only 1, the result is:
-----------------------------
SQL_Latin1_General_CP1_CI_AS

name         collation_name                   compatibility_level
------------ -------------------------------- -------------------
master       SQL_Latin1_General_CP1_CI_AS     140
my_database  SQL_Latin1_General_CP850_CI_AS   150
And the result from the database that returns 1, 2, 3 is:
-----------------------------
SQL_Latin1_General_CP1_CI_AS

name         collation_name                   compatibility_level
------------ -------------------------------- -------------------
master       SQL_Latin1_General_CP1_CI_AS     140
my_database  SQL_Latin1_General_CP1_CI_AS     150
So the simple comparison without the cast is comparing sql_variant with varchar (there is no difference when I use N'SQL Azure'), where the underlying nvarchar inside the sql_variant in one case has a different collation than the database I query and in the other case it matches. First of all, I would have assumed that comparing two strings with different collations would fail, like it fails when you try to join two columns with different collations, but that is apparently not the case here. Anyway, what's the best way to safely compare the output of a function that might be sql_variant with a varchar?
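One pattern that sidesteps both the sql_variant comparison rules and collation precedence is to cast the function's output to a concrete string type and, if needed, pin the collation explicitly. A sketch only; the collation name here is just an example:

IF CAST(ServerProperty('Edition') AS nvarchar(128))
       COLLATE SQL_Latin1_General_CP1_CI_AS = N'SQL Azure'
BEGIN
    SELECT 1;
END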
Lukas.Navratil (355 rep)
Oct 7, 2022, 09:20 AM • Last activity: Oct 8, 2022, 07:30 AM
0 votes
1 answer
2707 views
Passing a variable in OPENJSON WITH statement in SQL Server
The script below calls an API with a base currency and another currency as input and gets the foreign exchange (FX) value from its response. While reading the response from the API we have to specify '$.rates.GBP' in the OPENJSON statement to read the FX value (last statement of the script below). I am not able to pass this as a parameter to make SampleValue dynamic. I tried using dynamic SQL but did not succeed. Please help.
/*
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Ole Automation Procedures', 1;
GO
RECONFIGURE;
GO

EXEC sp_configure 'Ole Automation Procedures';
GO


*/
DECLARE   @Object	Int
		, @hr		int
		, @Base		char(3)
		, @Symbol   char(3)
		, @Url		varchar(1000)
		,@Param varchar(1000)
DECLARE   @json  as table(Json_Table nvarchar(max))

SET @Base   = 'USD'
SET @Symbol ='GBP'

SET @Url = 'https://api.ratesapi.io/api/latest?base=' + @Base + '&symbols=' + @Symbol;  --Your Web Service Url (invoked)
SET @Param = '$.rates.' + @Symbol;

select @Url;  --https://api.ratesapi.io/api/latest?base=USD&symbols=GBP 

Exec @hr=sp_OACreate 'MSXML2.ServerXMLHTTP.6.0', @Object OUT;
IF @hr <> 0 EXEC sp_OAGetErrorInfo @Object
Exec @hr=sp_OAMethod @Object, 'open', NULL,'get', @Url, 'false'
IF @hr <> 0 EXEC sp_OAGetErrorInfo @Object
Exec @hr=sp_OAMethod @Object, 'send'
IF @hr <> 0 EXEC sp_OAGetErrorInfo @Object
Exec @hr=sp_OAMethod @Object, 'responseText', @json OUTPUT
IF @hr <> 0 EXEC sp_OAGetErrorInfo @Object

INSERT into @json (Json_Table) exec sp_OAGetProperty @Object, 'responseText'

SELECT  *
FROM OPENJSON((select Json_table from @json), N'$')
WITH (   
      SampleValue nvarchar(max) '$.rates.GBP' ,   
	  SampleDate DATE '$.date'
	  ) ;

	/*DECLARE @SQL NVARCHAR(MAX) = 
	'SELECT  *  FROM OPENJSON((select Json_table from @json), N''$'') WITH (           SampleValue  nvarchar(max)  ''$.rates.GBP'',  SampleDate DATE ''$.date'' ) ;'

 select @SQL; 
 EXEC sp_executesql @SQL;

	  DECLARE @SQLstring NVARCHAR(MAX) = 
'SELECT  *
FROM OPENJSON((select Json_table from @json), N''$'') WITH (   
      SampleValue nvarchar(max) '+ ''''+ @param + ''''+ ' ,   
	  SampleDate DATE'  + ''''+ '$.date'  + '''' + ' ) ;'

	  select @SQLstring ; 
EXEC sp_executesql @SQLstring ;*/

EXEC sp_OADestroy @Object
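For what it's worth, one way the commented-out attempt could be made to work: as far as I know the paths inside the OPENJSON WITH clause must be string literals, so only the path needs to be concatenated, while the JSON text itself can be handed to sp_executesql as a parameter (the table variable @json is not visible inside the dynamic batch). A sketch built on the variables already declared above; @JsonText and @DynSql are names introduced here for the example:

DECLARE @JsonText nvarchar(max) = (SELECT Json_Table FROM @json);

DECLARE @DynSql nvarchar(max) = N'
SELECT *
FROM OPENJSON(@JsonText, N''$'')
WITH (
      SampleValue nvarchar(max) ''' + @Param + N''',
      SampleDate  DATE ''$.date''
     );';

EXEC sp_executesql @DynSql, N'@JsonText nvarchar(max)', @JsonText = @JsonText;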
Ruhi Afreen (1 rep)
Oct 15, 2020, 10:56 AM • Last activity: Oct 15, 2020, 06:22 PM
13 votes
2 answers
7456 views
SSIS 2012 Create environment variable fails
I'm working on a script to port an Environment from one server to another. I am running into an issue calling catalog.create_environment_variable wherein I get the error "The data type of the input value is not compatible with the data type of the 'String'." coming out of the proc "check_data_type_value." What's strange is that if I let the GUI script out the variables, that query would work:

DECLARE @var sql_variant = N'\\myserver\ssisdata'

EXEC [catalog].[create_environment_variable]
    @variable_name = N'FolderBase'
  , @sensitive = False
  , @description = N''
  , @environment_name = N'Development'
  , @folder_name = N'POC'
  , @value = @var
  , @data_type = N'String'
GO

However, taking the script approach below isn't working. The legwork I've done indicates this error message is usually resolved by using the nvarchar data type instead of varchar. However, that's not the case for my stuff. The failure is at the create_environment_variable call (line 108 of the original script, marked with a comment below). My assumption is that it's something wonky with the sql_variant but I have no idea *what* that thing is.

USE SSISDB;
GO

DECLARE @folder_id bigint
  , @folder_name nvarchar(128) = N'POC'
  , @environment_name nvarchar(128) = N'Development'
  , @environment_description nvarchar(1024)
  , @reference_id bigint
  , @variable_name nvarchar(128)
  , @data_type nvarchar(128)
  , @sensitive bit
  , @value sql_variant
  , @description nvarchar(1024);

IF NOT EXISTS
(
    SELECT * FROM catalog.folders AS F WHERE F.name = @folder_name
)
BEGIN
    EXECUTE catalog.create_folder
        @folder_name = @folder_name
      , @folder_id = @folder_id OUTPUT;

    PRINT CONCAT('Folder "', @folder_name, '" has been created with a folder_id of ', @folder_id)
END

IF NOT EXISTS
(
    SELECT * FROM catalog.environments AS E
    WHERE E.name = @environment_name
    AND E.folder_id = (SELECT F.folder_id FROM catalog.folders AS F WHERE F.name = @folder_name)
)
BEGIN
    PRINT CONCAT('Creating environment ', @environment_name);

    EXECUTE catalog.create_environment
        @folder_name = @folder_name
      , @environment_name = @environment_name
      , @environment_description = @environment_description;
END

DECLARE @EnvironmentVariables TABLE
(
    folder_name nvarchar(128)
  , environment_name nvarchar(128)
  , variable_name nvarchar(128)
  , description nvarchar(1024)
  , data_type nvarchar(128)
  , sensitive bit
  , value sql_variant
);

INSERT INTO @EnvironmentVariables
SELECT
    E.folder_name
  , E.environment_name
  , S.name
  , S.description
  , S.type
  , S.sensitive
  , S.value
FROM
(
    SELECT 'FolderBase','Root for ssis processing','String',CAST(0 AS bit),'\\myserver\ssisdata'
    UNION ALL
    SELECT 'AuditConnectionString','Conn to audit db','String',CAST(0 AS bit),'Data Source=SQLETL01;Initial Catalog=Audit;Provider=SQLNCLI11.1;Integrated Security=SSPI;Auto Translate=False;'
) AS S (name, description, type, sensitive, value)
CROSS APPLY
(
    SELECT
        E.name AS environment_name
      , F.name AS folder_name
    FROM catalog.folders AS F
    INNER JOIN catalog.environments AS E ON E.folder_id = F.folder_id
    WHERE F.name = @folder_name
    AND E.name = @environment_name
) E;

DECLARE Csr CURSOR FORWARD_ONLY STATIC FOR
SELECT
    EV.variable_name
  , EV.description
  , EV.data_type
  , EV.sensitive
  , EV.value
FROM @EnvironmentVariables AS EV;

OPEN Csr;
FETCH NEXT FROM Csr INTO @variable_name, @description, @data_type, @sensitive, @value;

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        -- THERE BE MONSTERS AHEAD
        -- The data type of the input value is not compatible with the data type of the 'String'.
        EXECUTE catalog.create_environment_variable
            @variable_name = @variable_name
          , @sensitive = @sensitive
          , @description = @description
          , @environment_name = @environment_name
          , @folder_name = @folder_name
          , @value = @value
          , @data_type = @data_type
    END TRY
    BEGIN CATCH
        SELECT
            @folder_name AS folder_name
          , @environment_name AS environment_name
          , @variable_name AS variable_name
          , @data_type AS data_type
          , @sensitive AS sensitive
          , @value AS value
          , @description AS description
          , ERROR_NUMBER() AS error_number       --returns the number of the error.
          , ERROR_SEVERITY() AS error_severity   --returns the severity.
          , ERROR_STATE() AS error_state         --returns the error state number.
          , ERROR_PROCEDURE() AS error_procedure --returns the name of the stored procedure or trigger where the error occurred.
          , ERROR_LINE() AS error_line           --returns the line number inside the routine that caused the error.
          , ERROR_MESSAGE() AS error_message;    --returns the complete text of the error message, including values supplied for any substitutable parameters.
    END CATCH

    FETCH NEXT FROM Csr INTO @variable_name, @description, @data_type, @sensitive, @value;
END

CLOSE Csr;
DEALLOCATE Csr;
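A quick way to see one possible culprit (purely a guess at what check_data_type_value objects to) is to compare the base type the sql_variant actually carries in the two code paths: the GUI-scripted call assigns an N'...' literal, while the derived table above uses plain string literals.

DECLARE @check sql_variant;

SET @check = '\\myserver\ssisdata';                          -- plain literal, as in the derived table
SELECT SQL_VARIANT_PROPERTY(@check, 'BaseType');             -- varchar

SET @check = CAST(N'\\myserver\ssisdata' AS nvarchar(4000)); -- forced to nvarchar, as in the GUI-scripted call
SELECT SQL_VARIANT_PROPERTY(@check, 'BaseType');             -- nvarchar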
billinkc (16143 rep)
Jul 17, 2013, 09:59 PM • Last activity: Nov 27, 2018, 02:00 AM
4 votes
4 answers
3272 views
PostgreSQL equivalent of SQL Server SQL_VARIANT_PROPERTY
Does PostgreSQL have an equivalent function for determining column metadata? Reference: SQL_VARIANT_PROPERTY. pg_typeof() appears to be close, but it does not show the precision and scale. Looking up the type in pg_type only shows the generic type info, not the column-specific info (e.g. SQL Fiddle).
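For column-level metadata (rather than a per-value inspection like SQL_VARIANT_PROPERTY), the precision and scale can be read from the catalogs. A sketch with placeholder table and column names:

-- information_schema view (standard, column-level metadata)
SELECT data_type,
       character_maximum_length,
       numeric_precision,
       numeric_scale
FROM   information_schema.columns
WHERE  table_name  = 'my_table'
AND    column_name = 'my_column';

-- Or via the system catalogs, letting format_type() render the type modifier, e.g. 'numeric(15,4)'
SELECT format_type(a.atttypid, a.atttypmod)
FROM   pg_attribute AS a
WHERE  a.attrelid = 'my_table'::regclass
AND    a.attname  = 'my_column';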
孔夫子 (4330 rep)
Nov 4, 2012, 11:52 PM • Last activity: Jul 6, 2018, 06:55 PM
-1 votes
2 answers
1396 views
"Variant" values structure and their use for multiple columns
**2017/07/25 note:** This was a quite complex question; re-reading it 4 years after I posed it, I believe it still has some value, but it is admittedly hard to understand. You probably shouldn't bother if you're not facing a similar problem. If you see a way to enhance it, please do edit it. For what it's worth, I think I ended up using my solution. The project this was part of was eventually aborted, though, so the design was never tested in real conditions. In retrospect, as far as I can recall, it would probably have been better to use other, simpler solutions, maybe not using a relational database to store the information; it was one of my first experiences in serious database design. --- ###note to the [2013/04/15 edit](https://dba.stackexchange.com/revisions/39731/4):### The question was originally only about the opportunity of using a single "variant" structure for multiple columns, but all the initial comments asked for clarifications about the structure itself, something I had hoped to discuss later. It is probably hard to separate the two matters, so now I want to discuss both the best implementation of a "variant values" structure and the convenience of using it for more than one column. I reworked the old question because it would get too messy otherwise. Previous readers do not need to re-read the question, although that may clarify some things. ---

Reasons

Sometimes you have columns that can legitimately hold values of any simple type at any time, but your DBMS does not support a variant type. This probably occurs only for metadata or other exotic uses of databases; let's not discuss the appropriateness of such things, and assume there can be legitimate reasons for them. Just note that I mean to use the system for a limited set of columns serving complex functionality, not as a convenience for OTLT or similar things. You can find more information on my specific case in the comments to this question and to this answer.

Clarifications

I want a column to be able to have *one single* value for each row, and I don't want ways to restrict the types accepted by individual rows: the value for every row can legitimately come from any of the types supported by the structure at all times. There are cases where you want, for example, another column to determine the type of the valid values, but in this question I'm considering only the unrestricted case. Note: most of the time the information you store cannot truly accept any type. For example, if you store metadata about another database, where each row describes a column from that database, that column obviously has a defined type; but if you don't need the type information for other uses it is not necessary to store a separate "Type" column. Unless you assign different security permissions, it would be exactly equivalent to set a value in a "Type" column or to directly choose a value from one of the supported types.

Structure example / proposal

This structure uses a table for the values' IDs (Values) that would be referenced by the column or columns. The various Values_ tables contain the actual values; I show a few tables here, and if you used the structure for many columns you might add more. One advantage of using one structure per column is that some columns may need fewer tables, for example because you are sure they will never need fractional numbers of great precision. Note: if you used one structure for multiple columns you would only allow types valid for all of those columns; the smaller number of tables in the one-structure-per-column case I just mentioned would come only from including only the *expected* types, but it wouldn't be a catastrophe to set one of the less-expected types by mistake in the single-structure case.
If this is not clear, it is not important. The referential constraints to Values.Type are there only to ensure that only one actual value can be assigned to each Values.ID. I was not sure about referencing columns that do not constitute a primary key, but it seems to be OK; I saw it used in several answers on this site. Values_Null indicates Null "values", irrespective of the type; sometimes you might need this, and you wouldn't care what type the column containing the Null actually had, since we only want to indicate values here. This table would normally contain only one or zero rows if you reuse the values (more on "reusing the values" under "Use" below).

CREATE TABLE ValueTypes (
ID INT PRIMARY KEY, 
Name VARCHAR (30)
)

INSERT INTO ValueTypes (ID, Name) VALUES (1, 'Int')
INSERT INTO ValueTypes (ID, Name) VALUES (2, 'VarChar255')
INSERT INTO ValueTypes (ID, Name) VALUES (3, 'Money')
INSERT INTO ValueTypes (ID, Name) VALUES (4, 'Boolean')
INSERT INTO ValueTypes (ID, Name) VALUES (5, 'Null')

CREATE TABLE Values (
ID INT PRIMARY KEY, 
Type INT NOT NULL,
Notes CHAR LARGE OBJECT, 
FOREIGN KEY (Type) REFERENCES ValueTypes (ID)
)

CREATE TABLE Values_Int (
ValueID INT PRIMARY KEY,
Type INT CHECK (Type=1),
Value INT,
FOREIGN KEY (ValueID, Type) REFERENCES Values (ID, Type)
)

CREATE TABLE Values_VarChar255 (
ValueID INT PRIMARY KEY,
Type INT CHECK (Type=2),
Value VARCHAR (255),
FOREIGN KEY (ValueID, Type) REFERENCES Values (ID, Type)
)

CREATE TABLE Values_Money (
ValueID INT PRIMARY KEY,
Type INT CHECK (Type=3),
Value DECIMAL (15,4),
FOREIGN KEY (ValueID, Type) REFERENCES Values (ID, Type)
)

CREATE TABLE Values_Boolean (
ValueID INT PRIMARY KEY,
Type INT CHECK (Type=4),
Value BOOLEAN, 
FOREIGN KEY (ValueID, Type) REFERENCES Values (ID, Type)
)

CREATE TABLE Values_Null (
ValueID INT PRIMARY KEY, 
Type INT CHECK (Type=5),
FOREIGN KEY (ValueID, Type) REFERENCES Values (ID, Type)
)



Alternative "variant" structures and reasons for my example

The alternative "variant" solutions I can think of are:
  • The most trivial: using a simple textual column for any type. I'm ruling this out because of the high chance of format mistakes, in addition to wasted space and poor performance.
  • Putting the columns for all the different types in a single table (see the sketch after this list). This would mean a lot of Nulls, thus wasted space, but maybe it would be acceptable? Colin 't Hart below advocated this.
  • Making the single Values_ tables reference directly the table that contains the "variant" column. This would of course mean one set of tables for each such column, confusion if more than one such column is needed in one table, and above all the impossibility of enforcing one single value per row, unless you also put a "Type" column alongside each of these columns.
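A minimal sketch of that single-table alternative, keeping the same type set as the example above and enforcing at most one non-null typed column per row (only an illustration, not a recommendation):

CREATE TABLE Values_SingleTable (
ID INT PRIMARY KEY,
Type INT NOT NULL,
Value_Int INT,
Value_VarChar255 VARCHAR (255),
Value_Money DECIMAL (15,4),
Value_Boolean BOOLEAN,
FOREIGN KEY (Type) REFERENCES ValueTypes (ID),
-- at most one typed column is non-null; a Null "value" (Type = 5) leaves them all NULL
CHECK (
    (CASE WHEN Value_Int        IS NOT NULL THEN 1 ELSE 0 END)
  + (CASE WHEN Value_VarChar255 IS NOT NULL THEN 1 ELSE 0 END)
  + (CASE WHEN Value_Money      IS NOT NULL THEN 1 ELSE 0 END)
  + (CASE WHEN Value_Boolean    IS NOT NULL THEN 1 ELSE 0 END)
  <= 1
)
)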

Use

I would delete old values when they're not referenced anymore, and thus forbid their direct external use in queries. When a needed value is already present, I would let the user either reference the existing record or insert a duplicate value (so in general the values should never change, only be deleted; although a feature that gives a precise meaning to a record, using appropriate additional flag columns, could perhaps be useful, though probably confusing).

Questions

  1. What could be the best "variant" structure in general?
  2. Would it be better to use one structure per column or one for all columns (meaning all with the same set of accepted values)?

    The downside I can see of using one single structure for multiple columns is the need to search among more values, and possibly the need for one more table in the middle and thus one more join, although this table can also be convenient when using one structure per column.
    I would like to get ideas about the point at which a large number of (indexed) values in a table starts to slow their search significantly and makes splitting them into more tables clearly convenient.

    The downside I can see of one structure per column is a lot more tables, meaning more confusion - maybe too much.


Thanks

Thank you if you read all this
gab (155 rep)
Apr 11, 2013, 08:29 AM • Last activity: Jul 25, 2017, 01:43 PM
6 votes
1 answer
181 views
Read-ahead reads and SQL-Variant fields
I have two tables containing exactly the same data. Both tables have a bigint primary key identity column, 60 columns, and 300,000 rows. The difference is that all of the columns of the second table have the sql_variant type. I am creating a temporary table and importing the data from the two tables into it. When data is extracted from the sql_variant columns it is cast to the corresponding SQL type. The data extraction from the first table takes 1 sec and from the second table 6 secs. Basically, the differences in the executions are in the estimations:

[screenshots: estimated data size for the two executions]

and in the read-ahead reads count:

[screenshots: read-ahead reads for the two executions]

I am wondering why SQL Server cannot load in advance the data read from the sql_variant fields (there are almost no read-ahead reads). Also, the storage size of the tables is almost the same:

[screenshot: table storage sizes]

Why does SQL Server think it should read 67 GB?

The column types are: 16 x BIGINT, 8 x DECIMAL(9,2), 36 x NVARCHAR(100). The DBCC DROPCLEANBUFFERS command is used each time before data extraction and population.

In order to test, you can download the sample data files from here. Then:

1. Execute Tech05_01_TableDefinitions.sql
2. Execute Tech05_02_TablePupulation.sql 3 times
3. Open the Tech05_03_TestingInsertionInTempTable.sql file and execute it one time like this:

DECLARE @TableStructured SYSNAME = '[dbo].[NormalDataTypes_60Cols]';
DECLARE @TableFromWhichDataIsCoppied SYSNAME = '[dbo].[SQLVariant01_60Cols]';

and one time like this:

DECLARE @TableStructured SYSNAME = '[dbo].[NormalDataTypes_60Cols]';
DECLARE @TableFromWhichDataIsCoppied SYSNAME = '[dbo].[NormalDataTypes_60Cols]';
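To make the description concrete, this is the kind of per-column cast extraction being measured, plus a quick look at what the variant columns report about themselves. The column names here are hypothetical; the table name comes from the sample scripts above.

-- What the sql_variant columns actually carry (hypothetical column names)
SELECT TOP (5)
       SQL_VARIANT_PROPERTY(sv.Col01, 'BaseType')  AS Col01_BaseType,
       SQL_VARIANT_PROPERTY(sv.Col01, 'MaxLength') AS Col01_MaxLength,
       DATALENGTH(sv.Col01)                        AS Col01_Bytes
FROM dbo.SQLVariant01_60Cols AS sv;

-- The cast-per-column extraction pattern described above
SELECT CAST(sv.Col01 AS bigint)        AS Col01,
       CAST(sv.Col17 AS decimal(9,2))  AS Col17,
       CAST(sv.Col25 AS nvarchar(100)) AS Col25
INTO   #Extracted
FROM   dbo.SQLVariant01_60Cols AS sv;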
gotqn (4348 rep)
Nov 14, 2016, 12:13 PM • Last activity: Nov 15, 2016, 06:00 AM
1 vote
1 answer
98 views
Report Configuration changes
I created a stored procedure, which I got from Erin Stellato's course on Pluralsight, to report as a nightly job whether there are any configuration changes on the server:

CREATE PROCEDURE dbo.usp_SysConfigReport1
(
    @RecentDate DATETIME,
    @OlderDate DATETIME
)
AS
BEGIN;
    IF @RecentDate IS NULL OR @OlderDate IS NULL
    BEGIN;
        RAISERROR('Input parameters cannot be NULL', 16, 1);
        RETURN;
    END;

    SELECT @@SERVERNAME ServerName,
           CaptureDate,
           [O].[Name],
           [O].[Value] AS "OlderValue",
           [O].[ValueInUse] AS "OlderValueInUse",
           [R].[Value] AS "RecentValue",
           [R].[ValueInUse] AS "RecentValueInUse"
    FROM [dbo].[ConfigData] O
    JOIN (SELECT [ConfigurationID], [Value], [ValueInUse]
          FROM [dbo].[ConfigData]
          WHERE [CaptureDate] = @RecentDate) R
      ON [O].[ConfigurationID] = [R].[ConfigurationID]
    WHERE [O].[CaptureDate] = @OlderDate
      AND (([R].[Value] <> [O].[Value]) OR ([R].[ValueInUse] <> [O].[ValueInUse]))
END

--Capture the capture dates into #temp table1
SELECT DISTINCT [CaptureDate]
INTO #CapturedDates
FROM [dbo].[ConfigData]
ORDER BY [CaptureDate]

--Create temp table2
CREATE TABLE #CapturedDates2
(
    ServerName NVARCHAR(20)
    ,[CaptureDate] DATETIMEOFFSET
    ,Name NVARCHAR(1000)
    ,[OlderValue] INT
    ,[OlderValueInUse] INT
    ,[RecentValue] INT
    ,RecentValueInUse INT
)

------Inserting into Temp2 giving the error message below
DECLARE @MINCapturedDate DATETIME
DECLARE @MAXCapturedDate DATETIME

SELECT @MINCapturedDate = (SELECT CAST(MIN([CaptureDate]) AS DATETIME) FROM #CapturedDates)
SELECT @MAXCapturedDate = (SELECT CAST(MAX([CaptureDate]) AS DATETIME) FROM #CapturedDates)

INSERT INTO #CapturedDates2
EXEC SYSDBA.dbo.usp_SysConfigReport1 @MINCapturedDate, @MAXCapturedDate

But it gives me the following error message:

> Msg 257, Level 16, State 3, Procedure usp_SysConfigReport1, Line 19
> Implicit conversion from data type sql_variant to int is not allowed.
> Use the CONVERT function to run this query.

EDIT: Include definition of ConfigData

CREATE TABLE [dbo].[ConfigData]
(
    [ConfigurationID] [INT] NOT NULL,
    [Name] [NVARCHAR](35) NOT NULL,
    [Value] [SQL_VARIANT] NULL,
    [ValueInUse] [SQL_VARIANT] NULL,
    [CaptureDate] [DATETIME] NULL
) ON [PRIMARY]
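The error points at the INSERT into the int columns of #CapturedDates2 receiving sql_variant values coming from ConfigData. A sketch of one way around it (an assumption, not tested against the full script): stage the values as sql_variant and convert explicitly only when reading them back.

CREATE TABLE #CapturedDates2
(
    ServerName NVARCHAR(128)
    ,[CaptureDate] DATETIME
    ,Name NVARCHAR(1000)
    ,[OlderValue] SQL_VARIANT
    ,[OlderValueInUse] SQL_VARIANT
    ,[RecentValue] SQL_VARIANT
    ,RecentValueInUse SQL_VARIANT
)

-- ...after the INSERT INTO #CapturedDates2 EXEC call, convert when reading:
SELECT ServerName,
       [CaptureDate],
       Name,
       CONVERT(INT, [OlderValue])  AS OlderValue,
       CONVERT(INT, [RecentValue]) AS RecentValue
FROM #CapturedDates2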
SQLBen (355 rep)
Oct 1, 2015, 12:25 PM • Last activity: Oct 1, 2015, 06:14 PM
8 votes
2 answers
53373 views
Whats SQL Server NVARCHAR(max) equivalent in MySQL?
Using SQL Server you just give the "MAX" parameter as the length of a text data type, but in MySQL there's no such thing. According to Ispirer:

> "n" is the maximum number of characters, optional
>
> Range: 1 ⇐ n ⇐ 21845 (65535 bytes is the maximum row size shared among all columns)

Does it mean that [SQL Server] "NVARCHAR(MAX)" == [MySQL] "NVARCHAR(N)"? Or do I have to use NVARCHAR(21845) as NVARCHAR(MAX) in MySQL?
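For comparison, a sketch of the two sides. The usual mapping for NVARCHAR(MAX) in MySQL is one of the large TEXT types rather than a maxed-out NVARCHAR(n), with LONGTEXT as the closest in capacity (a common mapping, not an exact equivalent):

-- SQL Server
CREATE TABLE docs_mssql (body NVARCHAR(MAX));   -- up to 2 GB of data

-- MySQL
CREATE TABLE docs_mysql (
    body LONGTEXT CHARACTER SET utf8mb4         -- up to 4 GB; TEXT / MEDIUMTEXT hold 64 KB / 16 MB
);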
Jonathan Solorzano (241 rep)
Apr 20, 2015, 01:52 AM • Last activity: Apr 20, 2015, 04:55 AM
6 votes
1 answer
5579 views
Why does implicit conversion from sql_variant (basetype decimal) not work well with float
Why is ResultFloat = 0 in the query below? Am I missing a point within the cast/convert, sql_variant docs?

declare @v sql_variant = convert(decimal(28,8), 20.0);

select sql_variant_property(@v, 'BaseType') as BaseType,                                    -- 'decimal'
       iif(convert(int, 10.0) < @v, 1, 0) as ResultInt,                                     -- 1
       iif(convert(decimal, 10.0) < @v, 1, 0) as ResultDecimal,                             -- 1
       iif(convert(float, 10.0) < @v, 1, 0) as ResultFloat,                                 -- 0 !
       iif(convert(float, 10.0) < convert(float, @v), 1, 0) as ResultFloatFloat,            -- 1
       iif(convert(float, 10.0) < convert(decimal(28,8), @v), 1, 0) as ResultFloatDecimal;  -- 1

SQL Server 2012
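A sketch of what I believe is happening (based on my reading of the documented sql_variant comparison rules, so treat it as an assumption to verify): when the non-variant side is not explicitly cast, it is converted to sql_variant, and two variants whose base types belong to different data type families are ordered by family, where float (approximate numeric) sits above decimal (exact numeric), which makes the float side "greater" regardless of the values.

declare @f sql_variant = convert(float, 10.0);
declare @d sql_variant = convert(decimal(28,8), 20.0);

select sql_variant_property(@f, 'BaseType') as LeftType,   -- float
       sql_variant_property(@d, 'BaseType') as RightType,  -- decimal
       iif(@f < @d, 1, 0) as VariantCompare;                -- 0, same as ResultFloat above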
crokusek (2110 rep)
Jan 14, 2014, 02:36 AM • Last activity: Jan 14, 2014, 10:38 AM
Showing page 1 of 10 total questions