
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

5 votes
1 answer
660 views
Reinitialize Table Values in SQL SSDT Unit Testing
I am creating SQL Server unit tests for various stored procedures. Unit-testing principles say it is good practice to set up a small database table, populate its values, tear the table down (truncate/delete) after each test, and set it up again for the next one. That way every unit test has a clean environment in which to validate procedures that insert, select, update, delete, etc. Does anyone know where or how to reinitialize the table values in SQL unit testing? Resources for unit testing in SQL SSDT (VS 2017) are still pretty new, so I think a lot of people are trying to figure this out. Feel free to show or add pictures below.

http://www.sqlservercentral.com/articles/Unit+Testing/155651/
http://www.erikhudzik.com/2017/08/23/writing-sql-server-unit-tests-using-visual-studio-nunit-and-sqltest/

Pictures in Visual Studio SSDT: *(screenshot)*

Also, I am trying to review this class in SQLDatabaseSetup.cs:

```csharp
using Microsoft.Data.Tools.Schema.Sql.UnitTesting;

[TestClass()]
public class SqlDatabaseSetup
{
    [AssemblyInitialize()]
    public static void InitializeAssembly(TestContext ctx)
    {
        // Set up the test database based on settings in the
        // configuration file.
        SqlDatabaseTestClass.TestService.DeployDatabaseProject();
        SqlDatabaseTestClass.TestService.GenerateData();
    }
}
```
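A minimal sketch of the per-test reset the asker describes, placed in an SSDT unit test's pre-test script. The dbo.Customer table, its columns, and the seed rows are hypothetical, not from the original post:

```sql
-- Per-test reset: tear down whatever the previous test left behind,
-- reset the identity seed, and insert a known baseline for assertions.
DELETE FROM dbo.Customer;
DBCC CHECKIDENT ('dbo.Customer', RESEED, 0);

INSERT INTO dbo.Customer (Name, IsActive)
VALUES (N'Alice', 1),
       (N'Bob',   0);
```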
user162241
Oct 24, 2018, 05:09 AM • Last activity: Aug 4, 2025, 09:09 AM
0 votes
1 answer
447 views
SSDT db-project compare throws "source schema drift detected" for members of sysadmin
We are using SSDT database projects: changes are made in the dev DB, then a compare is done between the dev DB and the VS DB project, and selected changes are updated into the VS project. Lately (for about a month) the update of the VS project has started to fail for me with the message "Source schema drift detected. Press Compare to refresh the comparison". There are no changes going on in the database at the time, of that I am sure (I tried against several databases, including very small ones that no one else is using). There are a couple of strange things that I cannot explain:

- We have three environments: dev, test and prod. The problem only happens when going against the dev or test environments; against prod it works. All three servers have the same version of SQL Server and Windows Server.
- We are several developers, but the problem only occurs for some. After further investigation, it seems that those who get the problem are those that have sysadmin permissions on the SQL Server.

There is a workaround that works for me: I can create a DacPac of the source DB and do the compare against that, and then the update goes fine. But this is more cumbersome to do. It is OK as long as it is only the sysadmins with the problem, but if all developers get this problem, the workaround would be a problem. Has anyone else seen this? Any suggestions?

Our environment: all developers use software installed on a Windows Server 2019: Visual Studio Professional 2019 version 16.11.16, SSDT 16.0.62205.05200. SQL Server is running on Windows Server 2016; the SQL Server version is 15.0.4312.2.
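For what it's worth, one way to see what the compare engine may be reacting to is to look at recent DDL events in the default trace (that Schema Compare's drift check reads these events is an assumption here, not something the question confirms). fn_trace_gettable and sys.traces are standard SQL Server features:

```sql
-- Recent DDL activity from the default trace, as the comparing login sees it.
-- Event classes: 46 = Object:Created, 47 = Object:Deleted, 164 = Object:Altered.
DECLARE @trace nvarchar(260) =
    (SELECT path FROM sys.traces WHERE is_default = 1);

SELECT StartTime, EventClass, DatabaseName, ObjectName, LoginName
FROM fn_trace_gettable(@trace, DEFAULT)
WHERE EventClass IN (46, 47, 164)
ORDER BY StartTime DESC;
```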
GHauan (615 rep)
Jul 14, 2023, 01:55 PM • Last activity: Aug 2, 2025, 07:03 PM
2 votes
1 answer
1355 views
Backup from dacpac file extracted using SSDT
I tried the method given in this answer by Ramankant Dadhichi, but deploying failed: https://dba.stackexchange.com/questions/244167/backup-a-database-from-azure-sql-managed-instance-and-restore-to-on-premise-sql?newreg=ec931412355c4730acfff21d2c7c78cd

I have my database in Azure SQL Managed Instance and extracted a dacpac using SSDT. But now, when I try to deploy the extracted file using SSMS, I get the following error:

> Could not deploy package.
> Error SQL0: The element [releaseengineer] cannot be deployed. This element contains state that cannot be recreated in the target database.
> Error SQL0: The element [Reporter] cannot be deployed. This element contains state that cannot be recreated in the target database. (Microsoft.SqlServer.Dac)
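The "state that cannot be recreated" wording typically points at principals carrying state the dacpac does not store, for example contained database users with passwords. Whether that applies to [releaseengineer] and [Reporter] is an assumption, but a quick check against the source database would look like this:

```sql
-- Contained users (authentication_type_desc = 'DATABASE') hold a password
-- that a dacpac cannot recreate on the target.
SELECT name, type_desc, authentication_type_desc
FROM sys.database_principals
WHERE name IN (N'releaseengineer', N'Reporter');
```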
Priya Sharma (21 rep)
Feb 10, 2020, 01:52 AM • Last activity: Aug 2, 2025, 01:04 PM
-1 votes
1 answer
313 views
Why does SQLPackage include my table-valued function in deployments when it hasn't changed?
I'm using SQLPackage in TFS to automate an SSDT project/DACPAC deployment with SQL Server 2014. I have a table-valued function that appears in the deployment report and script with every deployment, even though its source code and compiled definition rarely change. For example, I can do a new build and deployment with no source code changes, and my deployment report will still list it (the corresponding SQL script will have a definition matching what's already in the DB): *(screenshot of the deployment report)*

I would expect the deployment to have nothing in it. It's the only object for which this happens, in a project containing thousands of objects. Has anybody had an experience like this with SQLPackage/TFS/DACPAC before?
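One common cause of a single object being perpetually redeployed (offered as a hypothesis, not a confirmed diagnosis for this case) is that it was created with SET options that differ from the project defaults, so the model and the database never quite match:

```sql
-- Modules created with non-default SET options can show up as a diff
-- on every dacpac comparison even when their text is unchanged.
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS object_name,
       m.uses_ansi_nulls,
       m.uses_quoted_identifier
FROM sys.sql_modules AS m
WHERE m.uses_ansi_nulls = 0
   OR m.uses_quoted_identifier = 0;
```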
John Hurtubise (9 rep)
Apr 30, 2021, 12:55 AM • Last activity: Apr 30, 2025, 02:10 PM
0 votes
0 answers
50 views
DACPAC deployment hangs at Updating database (start)
We are currently in the middle of creating a proof of concept that uses DACPACs as our deployment/release method rather than our in-house deployment product. The two databases I am currently testing with are Azure SQL DB databases, our repo is stored in Azure DevOps, and we have been using DevOps pipelines to deploy.

The dacpac/sqlproj contains many tables, procedures, etc., and unfortunately must include many very large post-deployment scripts. These scripts manage the data in lookup tables we use in the app (a bunch of inserts that feed a merge into the target table). I mention this because I suspect they may be involved, but I'm not sure.

When I initially tested this, it did deploy successfully to one of the databases (I did not try the other at that point), but that version had only one large post-deployment script. I have since added the rest in, and now the dacpac publish just does not seem to do anything after reaching "Updating database (start)" when running the release in DevOps. There have been a couple of times where the release seemed to be cancelled by Azure (a request to deprovision the agent), and none of the logs are available for the step that hangs. I have also resorted to attempting to publish in VS via SSDT, but that also just hangs for hours. Today I started trying to use the command-line tool to deploy; it did actually start to refresh the target DB, but then hung very early into a procedure refresh. I have tried again multiple times, and every attempt has resulted in the hang at "Updating database".

sp_who2 shows a SPID in the database with a CPU time of ~2000, but it is sleeping and AWAITING COMMAND, and this never seems to change. The Azure portal also shows a spike in resources when the dacpac publish starts, but usage then drops to 0 and stays there. I cannot seem to find any further information about this in particular.

Below is the command I am using, which is pretty much the same, if not exactly the same, as what is in the DevOps pipeline:

```
.\SqlPackage /Action:Publish /SourceFile:"C:\Users\ME\source\repos\REPORNAME\PROJECTNAME\bin\Debug\PROJECTNAME.dacpac" /TargetConnectionString:"Server=tcp:.database.windows.net,1433;Initial Catalog=;Persist Security Info=False;User ID=username;Password=;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
```

Does this seem like a resource contention issue? Can anyone point me to some resources, or have any idea what might be hanging up here?
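While the publish hangs, the standard DMVs can show whether the deploying session is blocked, waiting, or sleeping inside an open transaction. A diagnostic sketch, with nothing specific to the asker's environment assumed:

```sql
-- Find the deploying session and anything blocking it.
SELECT s.session_id,
       s.status,
       s.program_name,
       s.open_transaction_count,
       r.wait_type,
       r.blocking_session_id,
       t.text AS current_sql
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE s.is_user_process = 1;
```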
scarr030 (115 rep)
Apr 23, 2025, 03:58 PM
3 votes
1 answer
1364 views
How to create a detailed report of all actions when publishing with sqlpackage.exe?
When publishing SSDT projects from Visual Studio, I noticed that a "deploymentreport.txt" file is created on disk. sqlpackage.exe can be executed with the action "DeployReport", which creates a report of all actions that would have been performed during a publish. Is there any way to create this report DURING the publish, as a result/log? I don't want to create the report first and then call sqlpackage.exe again with the publish action to do the deployment; that would be more work than necessary.
Magier (4827 rep)
Feb 25, 2016, 09:28 AM • Last activity: Apr 1, 2025, 07:00 AM
11 votes
3 answers
7612 views
SSDT Drop and Recreate Tables when nothing has changed
We have a Visual Studio database project consisting of about 129 tables. It is the primary database for our internal web-based CRM/call-centre product, which is still under active development. We use the SSDT Publish from within VS to deploy to instances as changes are made. We develop locally on SQL Express (2016), have a LAB environment for performance and load tests running SQL 2014, a UAT environment running 2012, and finally deploy to production, which runs SQL 2016.

In all environments except production, the script generated on publish is very good and only contains the changes. The production script does a massive amount more work: it drops and recreates a lot more tables that I know have not changed (37 tables on the last deploy). Some of these tables have rows in the millions, and the whole publish takes upwards of 25 minutes. If I repeat the publish to production, it again drops and recreates the same 37 tables. The production DB does have replication, which I have to disable before deployments (unsure if that's a factor).

I don't understand why the production publish always wants to drop and recreate tables even though nothing has changed. I'm hoping to get some advice as to where to look to establish why SSDT thinks these need to be recreated. Using Visual Studio Professional 2017 v15.5.5 and SSDT 15.1.
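Since replication is the one factor unique to production, one avenue to explore (an assumption, not a confirmed cause) is whether the 37 rebuilt tables are exactly the ones carrying replication metadata that the project model knows nothing about:

```sql
-- Tables marked for replication in the production database.
SELECT s.name AS schema_name,
       t.name AS table_name,
       t.is_replicated,
       t.is_published,
       t.is_merge_published
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.is_replicated = 1
   OR t.is_published = 1
   OR t.is_merge_published = 1;
```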
OJay (371 rep)
Aug 15, 2018, 11:01 PM • Last activity: Sep 27, 2024, 07:58 AM
13 votes
3 answers
3235 views
Option to uncheck all or invert selection in SSDT?
In SQL Server Data Tools (SSDT), after comparing the schema, I want to apply only a few changes of mine (red-circled in the screenshot). But many other changes are listed in the window, and I have to manually uncheck every other item. Is there any option to uncheck all items or invert the selection? *(screenshot of the SSDT schema compare window)*
Arulkumar (1137 rep)
Sep 15, 2017, 11:20 AM • Last activity: Sep 27, 2024, 06:38 AM
2 votes
2 answers
1256 views
Error creating memory optimized filegroup in SSDT
I am attempting to add a memory-optimized filegroup to a SQL Server database project in SSDT. This is for SQL Server 2017, using Visual Studio 2017. However, compiling the project (pressing F5 to build) results in an error; the error does not occur when deploying (via Deploy). The filegroup is created as normal (with the SQLCMD variable as scripted by SSDT):

```sql
ALTER DATABASE [$(DatabaseName)]
    ADD FILEGROUP [MemoryOptimizedFilegroup] CONTAINS MEMORY_OPTIMIZED_DATA
```

However, this results in an error:

> The operation 'AUTO_CLOSE' is not supported with databases that have a MEMORY_OPTIMIZED_DATA filegroup.

However, the database settings show that auto close is indeed disabled: *(screenshot of the Auto Close setting)*

The deployment script generated by SSDT for some reason creates the database, creates the filegroup, and only afterwards sets AUTO_CLOSE off. It is possible this is what causes the error:

```sql
CREATE DATABASE [$(DatabaseName)]
    ON PRIMARY (NAME = [$(DatabaseName)], FILENAME = N'$(DefaultDataPath)$(DefaultFilePrefix)_Primary.mdf')
    LOG ON (NAME = [$(DatabaseName)_log], FILENAME = N'$(DefaultLogPath)$(DefaultFilePrefix)_Primary.ldf')
    COLLATE SQL_Latin1_General_CP1_CI_AS
GO
PRINT N'Creating [MemoryOptimizedFilegroup]...';
GO
ALTER DATABASE [$(DatabaseName)]
    ADD FILEGROUP [MemoryOptimizedFilegroup] CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE [$(DatabaseName)]
    ADD FILE (NAME = [MemoryOptimizedFilegroup_69323650],
              FILENAME = N'$(DefaultDataPath)$(DefaultFilePrefix)_MemoryOptimizedFilegroup_69323650.mdf')
    TO FILEGROUP [MemoryOptimizedFilegroup];
GO
USE [$(DatabaseName)];
GO
IF EXISTS (SELECT 1 FROM [master].[dbo].[sysdatabases] WHERE [name] = N'$(DatabaseName)')
BEGIN
    ALTER DATABASE [$(DatabaseName)]
        SET ANSI_NULLS ON, ANSI_PADDING ON, ANSI_WARNINGS ON, ARITHABORT ON,
            CONCAT_NULL_YIELDS_NULL ON, NUMERIC_ROUNDABORT OFF, QUOTED_IDENTIFIER ON,
            ANSI_NULL_DEFAULT ON, CURSOR_DEFAULT LOCAL, CURSOR_CLOSE_ON_COMMIT OFF,
            AUTO_CREATE_STATISTICS ON, AUTO_SHRINK OFF, AUTO_UPDATE_STATISTICS ON,
            RECURSIVE_TRIGGERS OFF
        WITH ROLLBACK IMMEDIATE;
    ALTER DATABASE [$(DatabaseName)]
        SET AUTO_CLOSE OFF
        WITH ROLLBACK IMMEDIATE;
END
```

I am not certain whether AUTO_CLOSE is enabled or disabled by default after CREATE DATABASE. If it is ON by default, it seems SSDT is generating the deployment script in the wrong order; if OFF by default, then I don't understand why the error occurs. Has anyone had success creating a memory-optimized filegroup in an SSDT project?
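On the asker's open question: new databases inherit AUTO_CLOSE (like other options) from the model database at CREATE DATABASE time, so checking model is a reasonable first step. YourDatabase below is a placeholder name:

```sql
-- New databases inherit AUTO_CLOSE from model, so compare both.
SELECT name, is_auto_close_on
FROM sys.databases
WHERE name IN (N'model', N'YourDatabase');
```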
Definite (145 rep)
Jan 23, 2020, 09:18 AM • Last activity: Jul 29, 2024, 10:38 AM
0 votes
1 answer
57 views
Why is SQLPackage suddenly including every scalar function in my SSDT project for ALTER in my deployment?
I have a SQL 2019 SSDT project with a large number of scalar and table-valued functions. Since yesterday, my Windows TFS (on-prem) deployment project's SQLPackage scripting step has been including every scalar function in the project (~300) in the deployment script (with ALTER). I spot-checked the first dozen: in every one, the current compiled definition in the target database is identical to the definition SQLPackage intends to execute. I checked the git repo and don't see any commits that change the project's sqlproj file. Has anyone seen anything like this before?
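To spot-check ~300 functions without eyeballing each one, hashing the live definitions gives something diff-able against the project files. A sketch, assuming SQL Server 2016 or later (earlier versions cap HASHBYTES input at 8,000 bytes):

```sql
-- Hash each scalar function's current definition in the target database.
SELECT SCHEMA_NAME(o.schema_id) AS schema_name,
       o.name                   AS function_name,
       HASHBYTES('SHA2_256', CAST(m.definition AS varbinary(max))) AS definition_hash
FROM sys.objects AS o
JOIN sys.sql_modules AS m ON m.object_id = o.object_id
WHERE o.type = 'FN'  -- scalar functions
ORDER BY schema_name, function_name;
```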
John Hurtubise (9 rep)
Jun 10, 2024, 07:18 PM • Last activity: Jun 14, 2024, 12:12 PM
19 votes
1 answer
10292 views
SQL Server - Adding non-nullable column to existing table - SSDT Publishing
Due to business logic, we need a new column in a table, and it is critical to ensure it is always populated, so it should be added to the table as NOT NULL. Unlike previous questions that explain how to do this *manually*, this needs to be managed by the SSDT publish. I have been banging my head against the wall for a while over this simple-sounding task due to some realizations:

1. A default value is not appropriate, and it cannot be a computed column. Perhaps it is a foreign key column; for others we cannot use a fake value like 0 or -1 because those values might have significance (e.g. numeric data).
2. Adding the column in a pre-deployment script will fail the publish when it automatically tries to create the same column a second time (even if the pre-deployment script is written to be idempotent). This one is really aggravating, as I can otherwise think of an easy solution.
3. Altering the column to NOT NULL in a post-deployment script will be reverted each time the SSDT schema refresh occurs (so at the very least our codebase will mismatch between source control and what is actually on the server).
4. Adding the column as nullable now, with the intention of changing it to NOT NULL in the future, does not work across multiple branches/forks in source control, as the target systems will not necessarily all have the table in the same state next time they are upgraded (not that this is a good approach anyway, IMO).

The approach I have heard from others is to directly update the table definition (so the schema refresh is consistent), write a pre-deployment script that *moves* the entire contents of the table to a temporary table with the new column-population logic included, then move the rows back in a post-deployment script. This seems risky as all hell, though, and still pisses off the Publish Preview when it detects a NOT NULL column being added to a table with existing data (since that validation runs before the pre-deployment scripting).

How should I go about adding a new, non-nullable column without risking orphaned data, or moving data back and forth on every publish with lengthy migration scripts? Thanks.
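For reference, here are the mechanics of the temporary-default pattern often suggested for this situation. The dbo.Orders table and the backfill rule are hypothetical, and, as the asker's point 3 notes, anything done outside the project model risks being reverted on the next publish:

```sql
-- Add the column NOT NULL with a named, temporary default so existing
-- rows are backfilled during the ALTER.
ALTER TABLE dbo.Orders
    ADD Region varchar(16) NOT NULL
        CONSTRAINT DF_Orders_Region_Tmp DEFAULT ('UNKNOWN');

-- Post-deployment script: apply the real per-row values, then drop the
-- crutch so future inserts must supply a value explicitly.
UPDATE dbo.Orders SET Region = 'EU' WHERE CountryCode IN ('DE', 'FR');
ALTER TABLE dbo.Orders DROP CONSTRAINT DF_Orders_Region_Tmp;
```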
Elaskanator (761 rep)
Jun 1, 2018, 08:07 PM • Last activity: Apr 19, 2024, 11:35 AM
13 votes
4 answers
7850 views
How do I get SSMS to use the relative path of the current script with :r in sqlcmd mode like SSDT does?
If I have foo.sql and bar.sql in the same folder, foo.sql can reference bar.sql when run from SSDT in SQLCMD mode with :r ".\bar.sql". However, SSMS won't find it. Procmon shows SSMS is looking in %systemroot%\syswow64: *(annotated Procmon screenshot)*

How do I tell SSMS to look in the folder that the current script is saved to, without explicitly declaring the path?
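For context, the usual partial workaround still declares the folder once (which is exactly what the asker wants to avoid): a SQLCMD variable at the top of the script. The path below is a placeholder:

```sql
-- SQLCMD mode: set the folder once, then resolve includes against it.
:setvar ScriptDir "C:\src\my-scripts"
:r $(ScriptDir)\bar.sql
```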
Justin Dearing (2727 rep)
Feb 6, 2015, 03:24 PM • Last activity: Apr 12, 2024, 09:47 PM
0 votes
0 answers
437 views
SSDT building error saying "unresolved reference to Login"
We retrieved source code from a SQL Server 2016 database and made a Visual Studio 2015 SSDT (SQL Server Data Tools) project. The source code contains a file `Security\foglight5.sql` with the below content:

```sql
CREATE USER [foglight5] FOR LOGIN [foglight5];
```

We want to deploy the source code to a testing instance of SQL Server so that we have a development environment with the same DDL as the server. However, when trying to publish the source code to a testing instance, Visual Studio requires building first, and when building the project we got an error saying "Error SQL71501: User: [foglight5] has an unresolved reference to Login [foglight5]."

**Our Question:** How do we resolve the above error and proceed with our testing deployment?
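Error SQL71501 means nothing in the project defines the login the user references. One direction (a sketch, not a verified fix for this project) is to add a login script to the project so the reference resolves at build time; the file name and password are placeholders:

```sql
-- Hypothetical Security\foglight5_login.sql added to the SSDT project.
CREATE LOGIN [foglight5] WITH PASSWORD = N'placeholder-only';
```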
James (149 rep)
Oct 25, 2023, 11:31 PM • Last activity: Apr 2, 2024, 05:30 PM
1 vote
0 answers
3674 views
SSIS and Script Component - WebReference problem
Within my SSIS package's data flow I have a script component, and whenever I add a Web reference to this component and close the editor, the script component gives me:

> "Validation error. Data Flow Task Script Component [1]: The binary code for the script is not found. Please open the script in the designer by clicking Edit Script button and make sure it builds successfully."

I can successfully build from the editor, but the error remains. How can I solve this problem? PS: I am using SSDT for VS 2012.
bsaglamtimur (11 rep)
Mar 26, 2014, 12:32 PM • Last activity: Mar 29, 2024, 07:40 AM
0 votes
1 answer
66 views
SSDT Project Will Not Publish or Generate Preview When Source and Target Partition Function Not Matched
In an SSDT project there is a partition function defined in a file as follows:

```sql
CREATE PARTITION FUNCTION [pfSourceID](varchar(50))
AS RANGE RIGHT FOR VALUES ('', 'COKE', 'DRPEPPER', 'MOUNTAINDEW', 'PEPSI')
GO
```

There is a scheme to go along with it, defined in another file:

```sql
CREATE PARTITION SCHEME psSourceID
AS PARTITION pfSourceID ALL TO ([PRIMARY]);
```

Tables are defined in files using the following construct:

```sql
CREATE TABLE [dbo].[MyTable]
(
    [MyTableID] INT NOT NULL,
    [SourceID] VARCHAR(50) NOT NULL,
    CONSTRAINT [PK_MyTable] PRIMARY KEY NONCLUSTERED ([MyTableID] ASC, [SourceID] ASC) ON psSourceID(SourceID),
    CONSTRAINT [CIX_MyTableByPartition] UNIQUE CLUSTERED ([SourceID] ASC, [MyTableID] ASC) ON psSourceID(SourceID)
) ON psSourceID(SourceID);
```

All is well, and this works out of the gate when no partitions have been defined on the target. The problem occurs once the target and source partition functions don't match (there are partitions at the target that do not match the SSDT project definition, such as after adding a new partition in SSDT). When the partition functions are unmatched, the deployment pipeline just spins, and inside Visual Studio "Generate Preview Script" also goes into a spin. Maybe there is a need to determine data motion; however, all SQL traces indicate very little SQL activity by the account running the deployment. While monitoring the deployment, there were a handful of very quick queries by the account running the publish and a "DxSomething" app, then silence, until the client process was halted non-gracefully and manually after ~2 hours.

There are two ways to get around the issue:

1. Make sure the PARTITION FUNCTION in the SSDT project matches exactly what is at the target.
2. (and/or) Check "Ignore partition schemes."

From what I researched, when all data is aligned to an existing partition, using a SPLIT to add a new "BRAND" to the scheme should be relatively fast. The hope was to control this via SSDT; however, if 1 or 2 above are utilized, then it would not make sense to use SSDT for it :/

UPDATE: A possible solution would be to split in a post-deployment script:

```sql
IF NOT EXISTS (SELECT * FROM sys.partition_range_values WHERE value = 'NEWDRINK')
BEGIN
    ALTER PARTITION SCHEME psSourceID NEXT USED [PRIMARY];
    ALTER PARTITION FUNCTION pfSourceID() SPLIT RANGE ('NEWDRINK');
END
```

Then, during the next release, the function defined in SSDT would be shored up with the latest. This will be tried tomorrow, but it still seems backwards or off a bit.
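Whichever route is taken, it helps to see exactly how the live boundaries differ from the project's function definition before publishing. This uses only the standard partition catalog views:

```sql
-- Current boundary values of pfSourceID on the target, in boundary order.
SELECT pf.name AS function_name,
       prv.boundary_id,
       prv.value
FROM sys.partition_functions AS pf
JOIN sys.partition_range_values AS prv
     ON prv.function_id = pf.function_id
WHERE pf.name = N'pfSourceID'
ORDER BY prv.boundary_id;
```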
Ross Bush (683 rep)
Feb 29, 2024, 09:15 PM • Last activity: Mar 1, 2024, 03:00 PM
4 votes
2 answers
4744 views
Exclude certain schema along with unnamed constraints in SSDT
### Task

1. Automate database deployment (SSDT/dacpac deployment with CI/CD).
2. The database is a 3rd-party database.
3. It also includes our own customized tables/SPs/functions/views in separate schemas.
4. We should exclude 3rd-party objects while deploying the database project (dacpac) to production.
5. Thanks to Ed Elliott for the AgileSqlClub.DeploymentFilterContributor. I used the dll to filter out the schema successfully.

### Problem

1. The 3rd-party schema objects (tables) are defined with unnamed constraints (default/primary key) when creating the tables. Example:
```sql
CREATE TABLE [3rdParty].[MainTable]
    (ID INT IDENTITY(1,1) NOT NULL,
    CreateDate DATETIME DEFAULT(GETDATE()))  -- There is no name given to the default constraint
```
2. When I generate the script for deployment using sqlpackage.exe, I see the following statements in the generated script. I generated the script using:

> "C:\Program Files\Microsoft SQL Server\150\DAC\bin\sqlpackage.exe" /action:script /sourcefile:C:\Users\User123\source\repos\DBProject\DBProject\bin\Debug\DBProject.dacpac /TargetConnectionString:"Data Source=MyServer; Initial Catalog=MSSQLDatabase; Trusted_Connection=True" /p:AdditionalDeploymentContributorPaths="C:\Program Files\Microsoft SQL Server\150\DAC\bin\AgileSqlClub.SqlPackageFilter.dll" /p:AdditionalDeploymentContributors=AgileSqlClub.DeploymentFilterContributor /p:AdditionalDeploymentContributorArguments="SqlPackageFilter=IgnoreSchema(3rdParty)" /outputpath:"c:\temp\script_AfterDLL.sql"

Script output:
```sql
/*
Deployment script for MyDatabase

This code was generated by a tool.
Changes to this file may cause incorrect behavior and will be lost if
the code is regenerated.
*/
...
...
GO
PRINT N'Dropping unnamed constraint on [3rdParty].[MainTable]...';
GO
ALTER TABLE [3rdParty].[MainTable] DROP CONSTRAINT [DF__MainTabl__Crea__59463169];
...
...
... (towards the end of the script)
ALTER TABLE [3rdParty].[MainTable_2] WITH CHECK CHECK CONSTRAINT [fk_518_t_44_t_9];
```
3. I cannot alter the 3rd-party schema due to company restrictions.
4. There are many lines of **unnamed constraint** drops and WITH CHECK CHECK constraints generated in the script.

### Questions

1. How can I remove the lines that **DROP unnamed constraints** on 3rd-party schemas? Even though the dll excludes the 3rd-party schema, these unnamed constraints are still scripted/deployed, and they are not added back either.
2. How can I skip/remove generating WITH CHECK CHECK CONSTRAINT statements on 3rd-party schemas?

Also, I found another issue. The deployment will not succeed due to:

> Rows were detected. The schema update is terminating because data loss might occur.

### Output
```sql
/*
The column [3rdParty].[MainTable_1].[Col1] is being dropped, data loss could occur.

The column [3rdParty].[MainTable_1].[Col2] is being dropped, data loss could occur.

The column [3rdParty].[MainTable_1].[Col3] is being dropped, data loss could occur.

The column [3rdParty].[MainTable_1].[Col4] is being dropped, data loss could occur.
*/

IF EXISTS (select top 1 1 from [3rdParty].[MainTable_1])
    RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127) WITH NOWAIT

GO
```
I tried various combinations of parameters with no luck: /p:ExcludeObjectType=Defaults, /p:DropObjectsNotInSource=False, /p:DoNotDropObjectType=Defaults, etc.
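Since the churn centres on system-named constraints, an inventory of them is a useful first step whatever workaround ends up being used. This query uses only standard catalog views:

```sql
-- System-named (auto-generated) default constraints in the 3rdParty schema.
SELECT s.name  AS schema_name,
       t.name  AS table_name,
       c.name  AS column_name,
       dc.name AS constraint_name
FROM sys.default_constraints AS dc
JOIN sys.tables  AS t ON t.object_id = dc.parent_object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.columns AS c ON c.object_id = dc.parent_object_id
                     AND c.column_id = dc.parent_column_id
WHERE s.name = N'3rdParty'
  AND dc.is_system_named = 1;
```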
Santhoshkumar KB (581 rep)
Jun 8, 2020, 06:53 AM • Last activity: Dec 5, 2023, 03:07 PM
1 vote
1 answer
323 views
Installing SQL Server Tools
In order to set up an SSAS system, I have to install Visual Studio (SQL Server Data Tools - Business Intelligence), but when adding this to the instance, it says it has failed the rule "Same Architecture installation" and that I have to match the new installation's CPU architecture with the CPU architecture of the instance. The instance I'm trying to link to is Microsoft SQL Server 2014 - 12.0.2269.0 (X64) Developer Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1), running on Windows 7. The SSDT installation is Visual Studio 2013. I'm not sure how I can achieve this.
LOUPIN.LUPINS (41 rep)
Nov 4, 2015, 11:41 PM • Last activity: Jul 22, 2023, 02:07 PM
2 votes
2 answers
2684 views
DACPAC drops existing users and permissions from database when upgrading through SSMS
I'm developing my database through a database project in Visual Studio, and whenever I need to upgrade the schema of my database I like to use SSMS to do this with a DACPAC. But whenever I use my DACPAC to upgrade the schema, the existing security users and their permissions are dropped from the database, and I need them to remain. Is there any way to configure the DACPAC not to drop users and their permissions whenever I upgrade my database using "Upgrade Data-tier Application" in SSMS? Thanks in advance.
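Whatever the deployment-side fix turns out to be, snapshotting the current users and permissions before each upgrade gives something to restore from. A sketch over the standard permission catalog views:

```sql
-- Current database users and the permissions granted to them.
SELECT dp.name                    AS principal_name,
       perm.class_desc,
       perm.permission_name,
       perm.state_desc,
       OBJECT_NAME(perm.major_id) AS object_name  -- NULL for database-level grants
FROM sys.database_permissions AS perm
JOIN sys.database_principals  AS dp
     ON dp.principal_id = perm.grantee_principal_id
WHERE dp.type IN ('S', 'U', 'G');  -- SQL users, Windows users/groups
```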
Emil Skovmand (21 rep)
Aug 17, 2022, 09:25 AM • Last activity: Jul 3, 2023, 11:58 AM
3 votes
1 answer
2171 views
How Can I Ignore Partition Schemes in my VS SSDT Database Project
We use SSDT in VS 2015 to manage our database project code. Recently, we've been implementing partitioning on some tables in databases that are managed with SSDT. I need to include the partition function and partition scheme definitions in my project if I want to declare a dependency on that scheme in my table definitions, but I'd prefer to ignore changes to the scheme and function so we can manage them outside of the project.

**The problem** I am having is that SSDT's publish profile appears not to honor my partition-scheme exclusion settings, even though they appear to be set correctly. When I go to Advanced options in the publish profile settings and click on the Ignore tab, I have "Ignore partition schemes" and "Ignore table options" checked in the top box, as well as "Exclude partition schemes" and "Exclude partition functions" checked in the bottom box. Unchecking the option for partition functions appears to behave properly: when checked, the function disappears from the generated deployment script. However, regardless of which options I check and uncheck with regard to the partition scheme, the scheme still appears in the publish script. This is a screenshot of my settings: *(screenshot)*

I recently upgraded from SSDT v14.0.61707.300 to v14.0.61712.050, but the behavior still exists (this is the latest release, 17.4, on the MS site): https://learn.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt

Can anyone explain what I'm doing wrong, or is this a confirmed bug? Is the partition scheme forced to exist in the publish script under certain circumstances that my project may be affected by? I'd rather not have to completely remove all partitioning definitions from the project to work around this single issue, as that would obscure the definitions.
Alf47 (981 rep)
Dec 22, 2017, 08:32 PM • Last activity: May 24, 2023, 08:07 AM
11 votes
1 answer
782 views
SSDT Schema Compare doesn't work while a BULK INSERT is in progress
I'm working on a large ETL and DW project where we use TFS/source control together with both SSIS and SSDT. Today I found out that while an SSIS package is performing a BULK INSERT into a database table, it is not possible to perform an SSDT Schema Compare against that database. This is unfortunate, as some of our packages take quite a long time to complete. We want to use the Schema Compare function to detect changes to the database structure in order to save them in our SSDT project for version control of the database.

Looking a little more into this, I found that the Schema Compare function in SSDT executes an SQL script that calls the OBJECTPROPERTY() system function on the tables in the database. Specifically in my case, any call to OBJECTPROPERTY(id, N'IsEncrypted') seems to be blocked when id refers to a table that is currently being bulk inserted. In Visual Studio, the SSDT Schema Compare simply times out after a while and claims that no differences have been detected.

Is there a workaround to this issue in SSDT, or should I perhaps try to file an MS Connect bug report? Alternatively, since the BULK INSERT happens from an SSIS package, is there perhaps some way to make this insertion without locking OBJECTPROPERTY calls on the table?

**Edit:** In SSIS OLE DB Destinations, we can remove the check mark from "Lock Table", which does what it says, but this might hurt performance in some situations. I am much more interested in a solution that allows the SSDT Schema Compare to do its job, even if some objects are locked.
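A two-session repro of the blocking the asker describes, sketched from their description; the table name, file path, and options are hypothetical:

```sql
-- Session 1: hold a bulk-load table lock inside an open transaction.
BEGIN TRAN;
BULK INSERT dbo.BigTable
FROM 'C:\data\big.csv'
WITH (TABLOCK);

-- Session 2: per the question, this metadata call blocks until session 1
-- finishes, which is what stalls the SSDT Schema Compare.
SELECT OBJECTPROPERTY(OBJECT_ID('dbo.BigTable'), 'IsEncrypted');
-- (Commit or roll back session 1 to release it.)
```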
Dan (397 rep)
Apr 28, 2015, 08:40 AM • Last activity: Apr 3, 2023, 11:03 AM