Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
5
votes
1
answers
1253
views
Handling SQL updates for production deployments
I've been researching this for quite a long time now (about two months) and haven't found a good approach, so I'm asking you experts out there in the hope of some enlightenment.
I run a pretty traditional LAMP web application in the cloud. The source code is version-controlled with Git, and so is the database schema (SQL files split between structure and default data).
Deploying a new environment, development or production, is quite a joy. Dev environments are deployed with Vagrant plus a nice shell provisioning script I wrote. Production environments are deployed using a subset of that Vagrant provisioning script, which is also very easy to do.
When the development cycle imposes a change to the SQL schema (structure or data), things start to break. In that case, developers change the core SQL scripts so all the changes get into the VCS, which is totally fine for brand-new environments.
The problem sits within already deployed production environments: for those, I first have to manually gather all the SQL patches made since the revision actually running on the production servers, and then apply them manually. This takes a long time and is a fragile, error-prone task.
Initially, I thought I could move the SQL changes out of the core SQL files into smaller incremental files, leaving the core SQL files as a starting point. When there is something to change in the SQL structure, I could just tell the devs to create a new SQL file with only the changes from that development cycle.
So, for instance:
- structure_begin.sql: the SQL schema as it is now.
|- structure_devcycle1.sql: first set of changes to the structure
|- structure_devcycle2.sql: second set of changes to the structure, and so forth...
Then, by using Git hooks, I can selectively and automatically deploy them to production (a sketch of this idea follows my questions below). But I don't know if this is a good approach. And here I can ask:
1. Has somebody out there solved this puzzle?
2. What are the best practices for release management and deploy automation for SQL changes in production environments?
3. Are there any open source tools that can help in this process? (I'm aware of Sqitch, but I'm not quite sure if it is the right one for the job.)
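For illustration, here is a minimal sketch of the incremental-script idea, assuming each structure_devcycleN.sql file is applied exactly once and recorded in a tracking table (MySQL syntax to match the LAMP stack; all names are hypothetical):

```sql
-- Tracking table: which incremental scripts have already been applied.
CREATE TABLE IF NOT EXISTS schema_migrations (
    script_name VARCHAR(255) NOT NULL PRIMARY KEY,
    applied_at  TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- The deploy hook applies structure_devcycle1.sql only when no row
-- exists for it, then records it; INSERT IGNORE keeps reruns harmless.
INSERT IGNORE INTO schema_migrations (script_name)
VALUES ('structure_devcycle1.sql');
```

This is essentially the bookkeeping that migration tools such as Sqitch (or Flyway and Liquibase) maintain for you, so evaluating one of them before rolling your own is probably worthwhile.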
denydias
(151 rep)
Nov 14, 2014, 11:29 PM
• Last activity: Jun 24, 2025, 07:44 AM
3
votes
1
answers
203
views
Deploy only a new schema as a separate database project without affecting existing production DB code
We have a production database (SQL Server 2019) with existing code (tables, procedures, functions, synonyms) that was deployed without a source repository. Our client wants to introduce a new schema with new code, but keep all the old code and data intact.
We plan to deploy the database changes via an Azure DevOps pipeline with a SQL Database Project in Visual Studio (the new approach for all new changes and projects).
How do we create and deploy a database project solution that contains only the new schema and its related code, without including or modifying the existing database objects?
We want to avoid touching the old schema/code to prevent any risk or downtime. The new schema should be deployed independently, coexisting with the old schema and code.
What tools handle such partial database deployments?
How do we manage this in source control and CI/CD pipelines, especially considering the old code is not in a repo?
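Not an authoritative recipe, but a minimal sketch of what a schema-only database project might contain (object names are hypothetical). The deployment-side safeguard is publishing with the DropObjectsNotInSource option set to false, so objects that exist in the database but not in the project are left alone:

```sql
-- NewSchema.sql: the project owns only this schema and its objects.
CREATE SCHEMA [reporting];
GO

-- DailySummary.sql: a hypothetical object in the new schema that
-- references nothing in the legacy dbo schema.
CREATE TABLE [reporting].[DailySummary]
(
    SummaryDate date NOT NULL PRIMARY KEY,
    TotalOrders int  NOT NULL
);
```

Reviewing the generated deployment script (or a deploy report) before release is still advisable, since it shows exactly which existing objects, if any, the publish would touch.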
adam.g
(465 rep)
Jun 9, 2025, 06:28 PM
• Last activity: Jun 17, 2025, 08:47 AM
1
votes
1
answers
223
views
What's the best way to deploy changes to several databases on multiple PostgreSQL database servers?
I was wondering if there is a better way to deploy changes to several Postgres databases on multiple servers. I have been using Ansible for this task: I have a playbook that basically copies the changes to the server, applies them to the databases, and collects the logs using `psql`:
psql -v ON_ERROR_STOP=1 -f my_changes.sql > log.out 2> log.err
But I have some problems with this approach with `INDEX` creations. Sometimes it takes several minutes or hours to create an index, and I need to use background jobs in Ansible with [async & poll](https://docs.ansible.com/ansible/latest/user_guide/playbooks_async.html) to let Ansible run long tasks without closing the SSH connection. Also, I think I am using Ansible wrong, since it is a configuration tool and this kind of task always reflects a change on the server...
I hope you guys can tell me better approaches, tools (I heard that [pgbarman](https://pgbarman.org/) can also deploy changes), or improvements to my current solution.
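As one illustration, a sketch of splitting the slow index builds out of the transactional change script (table and index names are hypothetical). `CREATE INDEX CONCURRENTLY` cannot run inside a transaction block, so it needs its own file:

```sql
-- my_changes.sql: fast, transactional DDL, run with ON_ERROR_STOP=1.
BEGIN;
ALTER TABLE orders ADD COLUMN shipped_at timestamptz;
COMMIT;

-- my_indexes.sql: run separately, outside any transaction, so the
-- potentially hours-long build avoids holding strong locks.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_shipped_at
    ON orders (shipped_at);
```

The long-running file would still be a good fit for Ansible's async/poll, but at least the lock-sensitive part finishes quickly.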
Javier Galarza
(81 rep)
Oct 30, 2021, 11:57 AM
• Last activity: Jun 14, 2025, 10:04 AM
0
votes
0
answers
50
views
DACPAC deployment hangs at Updating database (start)
We are currently in the middle of creating a proof of concept using DACPACs as our deployment/release method rather than our in-house deployment product. The two databases that I am currently using to test are Azure SQL DB databases, and our repo is stored in Azure DevOps. We have been using the pipelines in DevOps to deploy.
The dacpac/sqlproj contains many tables, procedures, etc, and unfortunately must include many very large post deployment scripts. These scripts are used to manage the data in lookup tables we use in the app [a bunch of inserts that are used in a merge for the target table]. I mention this because I suspect that these may be involved, but I'm not sure.
When I initially tested this, it did deploy to one of the databases successfully (I did not try the other at that point), but this only had one large post-deployment script. I have added the rest in, and now the dacpac publish just does not seem to do anything after getting to "Updating database (start)" when running the release in DevOps. There have been a couple of times where the release seemed to be cancelled by Azure (a request to deprovision the agent), and none of the logs are available for the step that hangs.
I have also resorted to attempting to publish in VS via SSDT, but that also just hangs for hours. Today, I started trying to use the command-line tool to deploy, and it did actually start to refresh the target DB, but then hung very very early into a procedure refresh. I have tried again multiple times, and every attempt has resulted in the hang at Updating database.
sp_Who2 shows a SPID in the database that has a CPU time of ~2000, but it is sleeping and AWAITING COMMAND. This seems to never change. The Azure Portal also shows a spike in resources when the dacpac publish starts, but then it drops to 0 and stays there. I cannot seem to find any further information about this in particular.
Below is the cmd I am using, which is pretty much the same, if not exactly the same, as what is in the DevOps pipeline.
.\SqlPackage /Action:Publish /SourceFile:"C:\Users\ME\source\repos\REPORNAME\PROJECTNAME\bin\Debug\PROJECTNAME.dacpac" /TargetConnectionString:"Server=tcp:.database.windows.net,1433;Initial Catalog=;Persist Security Info=False;User ID=username;Password=;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
Does this seem like a resource contention issue? Can anyone point me to some resources or have any idea what might be hanging up, here?
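Not an answer, but a diagnostic sketch that may help pin down the hang (it assumes VIEW DATABASE STATE permission on the Azure SQL database): list each session's current wait, blocker, and last statement:

```sql
-- What is each user session doing, what is it waiting on, and is it
-- blocked by another session?
SELECT s.session_id,
       s.status,
       r.wait_type,
       r.blocking_session_id,
       t.text AS last_sql
FROM sys.dm_exec_sessions s
LEFT JOIN sys.dm_exec_requests    r ON r.session_id = s.session_id
LEFT JOIN sys.dm_exec_connections c ON c.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(
        COALESCE(r.sql_handle, c.most_recent_sql_handle)) AS t
WHERE s.is_user_process = 1;
```

If the SqlPackage session shows no active request while another session holds locks on the lookup tables, the large post-deployment merges become a plausible suspect.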
scarr030
(115 rep)
Apr 23, 2025, 03:58 PM
0
votes
1
answers
685
views
Error while deploying package to catalog
I am getting the error below while deploying a package to the SSIS catalog, on the last step. I am sysadmin and have access to the views below. Can someone help, please? Let me know if you need more info. Any help is appreciated.
> The SELECT permission was denied on the object 'database_principals',
> database 'mssqlsystemresource', schema 'sys'. (Microsoft SQL Server,
> Error: 229)
Finny Thomas
(1 rep)
Dec 13, 2019, 03:36 PM
• Last activity: Apr 6, 2025, 08:00 PM
3
votes
1
answers
1364
views
How to create a detailed report of all actions when publishing with sqlpackage.exe?
When publishing SSDT projects from Visual Studio, I realized that a "deploymentreport.txt" file is created on disk.
sqlpackage.exe can be executed with the action "DeployReport", which creates a report of all the actions that would have been performed during publishing.
Is there any way to create this report DURING the publishing? As a result / log?
I don't want to create the report first and then call sqlpackage.exe with the publish action again afterwards to get a report and do the deployment. That would be much more work than necessary.
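If it helps, recent versions of SqlPackage let the Publish action itself write the script and report as side outputs, so a separate DeployReport run shouldn't be needed. A sketch (connection string elided; the parameter names are worth verifying against your SqlPackage version):

```
SqlPackage.exe /Action:Publish /SourceFile:MyProject.dacpac ^
    /TargetConnectionString:"..." ^
    /DeployScriptPath:publish_script.sql ^
    /DeployReportPath:deploymentreport.xml
```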
Magier
(4827 rep)
Feb 25, 2016, 09:28 AM
• Last activity: Apr 1, 2025, 07:00 AM
5
votes
1
answers
506
views
Are there best practices for SQL Server blue/green deployment?
Is it possible to spin up the new instance with the same structure and replicate all the data there, then perform some long-running blocking operations that do not actually make incompatible changes (for example, change `bigint` to `int` or `nvarchar` to `varchar` if the types were chosen incorrectly), sync the data, and then switch the connection in the application?
There will always be writes to the main database, so all the data will need to be synced to the new instance. And there could be some downtime, but not a big one; let's say ~10 minutes. What approach could be used for that?
The edition of SQL Server is Enterprise.
Our main use case would be to change the data type on the PK of the very large tables.
There are hundreds of millions of records in several pretty wide tables where this change needs to be made. The expected downtime is far more than 10 minutes; it is hours and hours, and we can't afford such downtime.
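For comparison with the replica-and-switch idea, here is one commonly described in-place pattern for changing a PK column's data type, sketched with hypothetical names; the long backfill runs online, and only the final swap needs a window:

```sql
-- 1. Add the replacement column alongside the old key.
ALTER TABLE dbo.BigTable ADD Id_new bigint NULL;
GO

-- 2. Backfill in small batches to keep blocking and log growth modest.
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.BigTable
    SET Id_new = Id
    WHERE Id_new IS NULL;

    IF @@ROWCOUNT = 0 BREAK;
END;
GO

-- 3. In a short window: catch up stragglers, make Id_new NOT NULL,
--    rebuild the PK and referencing FKs on it, then rename the columns.
```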
Dmitrij Kultasev
(191 rep)
Dec 6, 2024, 06:45 AM
• Last activity: Dec 9, 2024, 10:03 PM
0
votes
0
answers
7
views
Can the password for a user be customized during the initialization of a PXD deployment?
Could the password for the polardbx_root@'%' user be customized during the initialization of a PXD deployment in PolarDB?
I attempted to set the password using the following command:
SET PASSWORD FOR polardbx_root@'%' = PASSWORD('Deepwise123');
However, I encountered the following error:
ERROR 5107 (HY000): [18da9f3485402000][172.16.40.24:52684][polardbx]ERR-CODE: [PXC-5107][ERR_OPERATION_NOT_ALLOWED] Can not modify polardbx_root since it is reserved for system
Is there a way to customize this password, or is it strictly reserved for system use?
user24266786
(1 rep)
Nov 15, 2024, 03:19 AM
5
votes
2
answers
2951
views
DbUp and script directory
I'm using DbUp; starting from the example, I've created a C# program. The example suggests putting all scripts under the *Scripts* directory. OK, this is done, but as the number of scripts grows, that directory will become a big mess.
I would like to organize those directories by version number. For instance:
- 1.0.0 (directory)
- CS001 - baseline.sql
- 1.0.1 (directory)
- CS001 - alter x table.sql
- CS002 - create y procedure.sql
- 1.1.0 (directory)
- CS001 - create view z.sql
Obviously, the code would have to pick up the scripts in an ordered manner. How can I do that? Where can I find documentation and other examples?
BAD_SEED
(173 rep)
Nov 12, 2013, 02:19 PM
• Last activity: Nov 5, 2024, 05:47 PM
11
votes
3
answers
7612
views
SSDT Drop and Recreate Tables when nothing has changed
We have a Visual Studio Database Project consisting of about 129 Tables. It is the primary database for our Internal Web Based CRM/Call Centre product, which is still under active development.
We use the SSDT Publish from within VS to deploy to instances as changes are made. We develop locally via SQL Express (2016); we also have a LAB environment for performance and load tests running SQL 2014, a UAT environment running 2012, and finally production, which runs SQL 2016.
In all environments except production, the script generated on publish is very good and only applies the changes. The production script does a massive amount more work. It seems to drop and recreate a lot more tables that I know have not changed (37 tables on the last deploy). Some of these tables have rows in the millions, and the whole publish takes upwards of 25 minutes.
If I repeat the publish to production, it again drops and recreates 37 tables. The production DB does have replication which I have to disable before deployments (unsure if that's a factor).
I don't understand why the production publish always wants to drop and recreate tables even though nothing has changed. I'm hoping to get some advice as to where to look to establish why SSDT thinks these need to be re-created.
using Visual Studio Professional 2017 V 15.5.5 and SSDT 15.1
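One place to look is a deploy report generated against production, which lists every planned operation and often hints at why a table must be rebuilt (column differences, filegroup or partitioning mismatches, replication options, and so on). A sketch, with the connection string elided:

```
SqlPackage.exe /Action:DeployReport /SourceFile:MyDb.dacpac ^
    /TargetConnectionString:"..." ^
    /OutputPath:prod_deployreport.xml
```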
OJay
(371 rep)
Aug 15, 2018, 11:01 PM
• Last activity: Sep 27, 2024, 07:58 AM
0
votes
2
answers
83
views
Is it a constraint on database client-server deployment (Firebird, Postgresql, Oracle) that it should not be made over WiFi (due to unreliability)?
Database client-server connectivity can sometimes still be needed. I mean: there is a client-side DLL that sits on the client computer (or device; it can be a mobile phone as well). For Windows OS based clients:
- fbclient.dll, gds32.dll - Firebird
- libpq.dll - Postgresql
- oci.dll - Oracle
(Of course, these DLLs can have dependencies.) The usual scenario is: the application (it may be the fat client of a client-server application) loads the DB client library (fbclient.dll, oci.dll, etc.) into its process space and calls DLL functions. The DLL, in turn, establishes a connection with the server process (e.g. the Firebird server process on port 3050, the PostgreSQL server process on port 5432, the Oracle TNS Listener on port 1521) and keeps this connection open for the duration of the DB client-server session.
E.g. the user can authenticate, start a connection, and issue multiple SQL commands over that connection.
I understand that today, client-server DB communication usually happens in a wired environment exclusively: either the DB server and the web/application server sit on the same machine, or they are on different machines connected by a very fast and reliable wired network.
It was different in the early days of IT. That was the time when client-server architecture was quite common: client machines ran fat clients of the application, those clients used the database DLLs on their machines, and the DLLs communicated with the DB services on the server machine.
Such applications are still in use, and maybe there are some other scenarios where client-server DB connectivity can happen even today.
So, all that was background. **My question is: can database client-server connectivity be deployed over WiFi networks** *(or over networks that include a WiFi link, e.g. an employee laptop connects to a WiFi access point and that access point has a wired connection to the database server; this can be a pretty common setup both in the office and in home-office situations)*, **or is it common DBA understanding that no client-server connectivity should involve a WiFi link?**
My colleagues adhere to a quite strict rule that none of our deployments may involve client-server DB connectivity over WiFi. I have seen that such a deployment is possible, though. I cannot believe that such a constraint can really be in force. Are WiFi networks really so unreliable, even though they are quite popular?
TomR
(101 rep)
Aug 29, 2024, 06:26 AM
• Last activity: Sep 8, 2024, 03:30 AM
1
votes
0
answers
608
views
Railway - problems with connecting to postgres in production
**Django worked perfectly locally, but had errors in production (I was using the same postgres DB and configuration).**
**Django server, hosted on Railway gave me this:**
**Networking tab in Postgres settings on Railway:**
**Postgres logs locally:**
**Settings:** (WORKED PERFECTLY ON A LOCAL MACHINE WITH THIS IN-PROD DATABASE AND SAVED DATA)
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',  # tried django.db.backends.postgresql, not working
'NAME': os.getenv('DB_NAME'),
'USER': os.getenv('DB_USER'),
'PASSWORD': os.getenv('DB_PASSWORD'),
'HOST': os.getenv('DB_HOST'),
'PORT': os.getenv('DB_PORT'),
}
}
DATABASE_URL = os.getenv('DATABASE_URL')
**.env:** (everything in {{}} is a variable)
DATABASE_PRIVATE_URL={{DATABASE_PRIVATE_URL}} # not using this
DATABASE_URL={{DATABASE_URL}}
DB_HOST=viaduct.proxy.rlwy.net
DB_NAME=railway
DB_PASSWORD={{DB_PASSWORD}}
DB_PORT=19232
DB_USER=postgres
SECRET_KEY={{SECRET_KEY}}
**So I did:**
@my-MBP django_project % psql -h localhost -l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+-----------------+----------+---------+-------+-----------------------
postgres | | UTF8 | C | C |
template0 | | UTF8 | C | C | =c/ +
| | | | | =CTc/
template1 | | UTF8 | C | C | =c/ +
| | | | | =CTc/
(3 rows)
@my-MBP django_project % psql -h localhost -d postgres
psql (14.11 (Homebrew))
Type "help" for help.
postgres=#
**And got (On Railway):**
***Django server:***
***Postgres Database:***
**Doing this locally gives me the same error that was given in production**
@my-MBP django_project % psql -h /var/run/postgresql -d postgres
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
Again, before I ran `psql -h localhost -d postgres`, everything was working perfectly on a LOCAL Django server. But in production, with the same configuration and database (hosted), it was not working.
Mikheil
(11 rep)
May 8, 2024, 04:40 AM
• Last activity: May 8, 2024, 01:52 PM
1
votes
4
answers
1288
views
SSIS deployment failed - working with files
I've developed a simple SSIS package that will open a CSV file, select a number of columns, add two derived columns, and then copy the results to a file.
This works within Visual Studio (even if it takes ages), where I constructed it. The problem is that when I attempt to deploy it to our SQL Server, it fails.
It keeps saying:
"The specified parameter value string is too long. The string must be no more than 8000 characters."
But I have no idea what it is talking about, as none of my columns is over 1000 characters.
There isn't much info on this error, but is it possible that the number of rows (400,000+) is the source of the problem?
Exostrike
(183 rep)
Jul 26, 2018, 03:35 PM
• Last activity: Dec 8, 2023, 11:13 AM
0
votes
0
answers
37
views
Managed database or docker image with data-volume?
I've mostly used managed databases (AWS RDS) for production. I was fiddling with Docker and was wondering if it's a good idea to have a containerised Postgres database with a data volume?
I feel it may not be a good idea, maybe because I'm used to the convenience of a managed database in production, but I would like to know the community's opinion on this.
My docker-compose.yml looks like the following:
version: '3'
services:
rails-api:
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- .:/usr/src/app
- gem_cache:/gems
depends_on:
- database
networks:
- vilio-network
env_file:
- .env/production
redis:
image: redis
database:
image: postgres
ports:
- "5432:5432"
networks:
- vilio-network
env_file:
- .env/production
volumes:
- db_data:/var/lib/postgresql/data
networks:
vilio-network:
volumes:
db_data:
gem_cache:
Thank you
Indyarocks
(101 rep)
Sep 21, 2023, 08:34 PM
4
votes
3
answers
7779
views
Replication Blocks Code Deployment
We have a database (SQL Server 2008R2) that is being transactionally replicated, namely tables, sprocs, views, indexed views and UDFs only, to another server (SQL Server 2014) as part of a cautious (read: slow) upgrade process. However, we still need to deploy new code or changes to it in the meantime.
Whenever we try to deploy new/changed code to it, using DACPAC under Visual Studio, we get a slew of the following errors:
Error SQL72035: [dbo].[SomeTable] is replicated and cannot be modified.
Is there any way to avoid this? I was thinking that if we could script the drop and creation of the replication, we could put them in pre- and post-deploy scripts, respectively, but I'm not sure if a) that is a good approach, and b) it is common practice.
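For reference, a sketch of what the pre-deployment half of that idea could look like for a single replicated table (publication, article, and subscriber names are hypothetical); the post-deployment half would re-add the article and regenerate its snapshot:

```sql
-- Pre-deployment: remove the article so the DACPAC can alter the table.
EXEC sp_dropsubscription
    @publication = N'MyPublication',
    @article     = N'SomeTable',
    @subscriber  = N'all';

EXEC sp_droparticle
    @publication = N'MyPublication',
    @article     = N'SomeTable',
    @force_invalidate_snapshot = 1;

-- Post-deployment (separate script): sp_addarticle / sp_addsubscription
-- for the same table, then run the snapshot agent for the publication.
```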
alphadogg
(205 rep)
May 21, 2014, 12:22 PM
• Last activity: Jun 8, 2023, 06:46 PM
8
votes
3
answers
15416
views
Command to validate a postgresql configuration file?
I am generating PostgreSQL configuration files via Ansible, and putting them in `/etc/postgresql/XX/main/conf.d/whatever.conf`. I accidentally made a syntax error and broke my PostgreSQL, requiring manual fixing.
Is there any postgresql command to validate that a file is a valid postgresql.conf file?
`sudoers` files can be validated with `/usr/sbin/visudo -cf path/to/file`. Is there anything like that for PostgreSQL?
I'm currently running Ubuntu 18.04 & 20.04, with PostgreSQL 10, 12 etc (yes several different versions).
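One partial option I can offer (it needs a running cluster that already includes the file, so it cannot vet a config before first startup): the `pg_file_settings` view reports per-line errors found in the on-disk configuration files:

```sql
-- Any line in postgresql.conf or an included conf.d file that failed
-- to parse, or that sets an unknown parameter, shows up here.
SELECT sourcefile, sourceline, name, setting, error
FROM pg_file_settings
WHERE error IS NOT NULL;
```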
Amandasaurus
(1007 rep)
Jan 17, 2022, 03:03 PM
• Last activity: Mar 31, 2023, 11:23 AM
34
votes
2
answers
28356
views
Why is :r SQLCMD command marked as wrong in Post Deployment Script?
I have worked a couple of times with post-deployment scripts and always, intuitively, used the build action "PostDeploy", because that is what it is.
Now, for the first time, I am trying to follow the built-in instruction from the script's template to use the `:r somescript.sql` syntax.
Immediately this line gets marked as wrong:
> "SQL80001 wrong syntax next to ':'"
I found suggestions to set the PDS to the build action "None". This does not help; the error stays. What am I missing here?
Magier
(4827 rep)
Sep 7, 2015, 07:58 AM
• Last activity: Jan 17, 2023, 12:45 PM
0
votes
1
answers
76
views
How to reset a cloud Postgres DB as if it was just created before each demo deployment?
I was using Heroku for projects in development.
Heroku has a great option to reset a Postgres DB: https://devcenter.heroku.com/articles/managing-heroku-postgres-using-cli#pg-reset.
Now I'm using other cloud services for Postgres hosting which don't have such a feature.
Since the DB can change considerably during the development phases, I would like to automatically reset the DB during each deployment, before it is filled again by my new code.
How do I reset the DB (as if it were just created) if the hosting doesn't provide a CLI/API to do it, but only a web interface and, of course, the connection string?
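For illustration, the pattern I have seen most often when only a connection string is available, assuming the application's objects all live in the `public` schema and the connected role owns them (destructive by design):

```sql
-- Wipe everything in the public schema, then recreate it empty.
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;

-- Depending on the provider and Postgres version, privileges on the
-- new schema may need to be re-granted to the application role.
GRANT USAGE, CREATE ON SCHEMA public TO PUBLIC;
```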
Fred Hors
(101 rep)
Dec 23, 2022, 12:00 AM
• Last activity: Dec 23, 2022, 03:13 AM
2
votes
2
answers
2705
views
Disk block size for mongodb deployment
Why do the [MongoDB production notes](https://docs.mongodb.com/manual/administration/production-notes/) have no recommendations about the disk block size? AFAIU, disk block size would have a considerable impact on a throughput-intensive application. Does disk block size have any impact on MongoDB performance in practice?
ankshah
(143 rep)
Nov 16, 2018, 10:10 AM
• Last activity: Nov 20, 2022, 07:02 PM
5
votes
1
answers
15643
views
How to register CLR assembly as trusted in SSDT deployment
I have CLR assembly in SSDT and to deploy that it has to be trusted. What I understand there are 4 options how to do that ### First option, use TRUSTWORTHY ```sql EXEC sp_configure 'clr enabled', 1; RECONFIGURE; ALTER DATABASE SourceDatabase SET TRUSTWORTHY ON; ``` ### Second option, disable strict...
### First option, use TRUSTWORTHY
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
ALTER DATABASE SourceDatabase SET TRUSTWORTHY ON;
### Second option, disable strict security
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'clr strict security', 0;
RECONFIGURE;
### Third option, sign assembly with key or certificate
This seems complicated, and I have not managed it yet. I would appreciate instructions, because the workflow is not clear to me.
### Fourth option, use sp_add_trusted_assembly
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
declare @assembly varbinary(max) = 0x4D5A90000300000004000000FFFF0000... -- I have to manually copy that from the failed SQL publish file.
declare @hash varbinary(64) = HASHBYTES('SHA2_512', @assembly);
EXEC sys.sp_add_trusted_assembly @hash, N'Foo Assembly';
In the 4th option, I have to manually register the assembly as trusted, and only after that can I publish the assembly. Is it possible to somehow automate this process?
I am thinking about creating a pre-deployment script that can run the 4th-option code, but I don't know how to populate the @assembly variable from the assembly's .dll file.
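For what it's worth, a sketch of populating @assembly straight from disk inside the pre-deployment script; the path is hypothetical, the file must be readable by the SQL Server service account, and OPENROWSET(BULK ...) requires the ADMINISTER BULK OPERATIONS permission:

```sql
-- Load the .dll bytes from disk, hash them, and register the hash.
DECLARE @assembly varbinary(max) =
    (SELECT BulkColumn
     FROM OPENROWSET(BULK N'C:\Deploy\FooAssembly.dll', SINGLE_BLOB) AS f);

DECLARE @hash varbinary(64) = HASHBYTES('SHA2_512', @assembly);
EXEC sys.sp_add_trusted_assembly @hash, N'Foo Assembly';
```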
Alternatively, if it's possible to deploy the assembly as untrusted, I can make it trusted on the server with the following code (a post-deployment script):
-- Register all database assemblies as trusted
declare @name nvarchar(4000),
@content varbinary(max);
DECLARE appCursor CURSOR FAST_FORWARD FOR
SELECT [name], content
FROM SourceDatabase.sys.assembly_files
OPEN appCursor
FETCH NEXT FROM appCursor INTO @name, @content
WHILE @@FETCH_STATUS = 0
BEGIN
DECLARE @hash varbinary(64) = HASHBYTES('SHA2_512', @content);
EXEC sys.sp_add_trusted_assembly @hash, @name;
FETCH NEXT FROM appCursor INTO @name, @content
END
CLOSE appCursor
DEALLOCATE appCursor
GO
Any ideas? What's your approach?
Muflix
(1099 rep)
Oct 16, 2022, 04:31 PM
• Last activity: Oct 16, 2022, 05:41 PM
Showing page 1 of 20 total questions