Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
2 votes · 1 answer · 89 views
Minimal Permissions to Create an Application Database and Database Owner
I wish to write (or borrow if you have one) a script template for creating an application database and associated logins. The script is to be run in CI pipelines and on developer desktops.
Possibly influenced by postgres, but also by MS maybe-saying that assigning ownership to a disabled login is best practice, I wish to assign ownership of the database to some insignificant login created for the purpose:

```sql
Create Login appname_owner With Password = 'randomlygeneratedpassword';
Alter Login appname_owner Disable;
Create Database appname;
Alter Authorization On Database::appname To appname_owner;
```

**Question**: What is the minimal set of permissions that will allow a CI login to do this?

- I believe that `Create Any Database` and `Alter Any Login` are minimal for creating the login and database. [I can't say I'm happy with CI scripts needing `Alter Any Login`; it would be nicer to restrict alterable logins to some group or role defined for the purpose.]
- But I'm stumped on `Alter Authorization`. It seems I need `Grant Impersonate On Login::appname_owner To CILogin`, which the CILogin can't grant to itself, not even for a login it has just created.

How can I resolve this,

- without the CI scripts using an `sa` login, and
- preferably, without granting the CILogin impersonation rights on all logins on the server?
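For concreteness, the grants under discussion amount to something like this: a sketch only, to be issued by a higher-privileged principal, with `CILogin` as a hypothetical name for the CI pipeline's login.

```sql
-- Sketch: server-level grants for the CI login (names are hypothetical).
Grant Create Any Database To CILogin;  -- enough to run Create Database
Grant Alter Any Login To CILogin;      -- enough to create and disable appname_owner
-- The sticking point: this one cannot be granted by CILogin to itself,
-- even for a login it has just created.
Grant Impersonate On Login::appname_owner To CILogin;
```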
Chris F Carroll
(214 rep)
May 19, 2021, 11:08 AM
• Last activity: May 20, 2021, 02:33 AM
0 votes · 0 answers · 45 views
How can I start Oracle as a lightweight process?
I have OracleDB running in a CI pipeline in a docker image. With each test it is started and then shut down. It's purely there because the Django tests expect a real database.
However, it takes a couple of minutes each time to start and shut down. Is there a way to make it start really quickly and use less memory?
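The cycle in question looks roughly like this (a sketch: the image name and the readiness check are assumptions, not taken from the actual pipeline):

```bash
# Start a fresh Oracle container for the test run (hypothetical image name).
docker run -d --name oracle-ci -p 1521:1521 gvenzl/oracle-xe:21-slim

# Startup is the slow part: most Oracle images log this line once the DB is up.
until docker logs oracle-ci 2>&1 | grep -q 'DATABASE IS READY TO USE'; do
  sleep 5
done

# ...run the Django test suite against localhost:1521 here...

# Tear down after the test.
docker stop oracle-ci && docker rm oracle-ci
```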
Rol
(111 rep)
Apr 9, 2021, 09:44 AM
• Last activity: Apr 9, 2021, 10:52 AM
1 vote · 0 answers · 217 views
How to improve Oracle Standard Edition's performance for testing?
There's a great post on StackOverflow about improving Postgres performance for testing.
> https://stackoverflow.com/questions/9407442/optimise-postgresql-for-fast-testing/9407940#9407940
However, there aren't any resources on doing the same for OracleDB. I don't have a license for Enterprise Edition, which has features like 'In-Memory' columnar storage that would almost certainly improve performance.
> https://docs.oracle.com/en/database/oracle/oracle-database/19/inmem/intro-to-in-memory-column-store.html
I'm really limited in what I can try in Standard Edition. It's running in a Docker container in a CI pipeline. I've tried putting the tablespace on a RAM disk, but that doesn't improve performance at all. I've tried fiddling with FILESYSTEMIO_OPTIONS, but no performance change.
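For reference, those two attempts amount to something like the following (a sketch; the RAM-disk path and sizes are assumptions):

```sql
-- Attempt 1: enable asynchronous and direct I/O (takes effect after restart).
ALTER SYSTEM SET FILESYSTEMIO_OPTIONS = SETALL SCOPE = SPFILE;

-- Attempt 2: put the test tablespace on a RAM disk (path is an assumption).
CREATE TABLESPACE test_data
  DATAFILE '/dev/shm/test_data.dbf' SIZE 512M
  AUTOEXTEND ON;
```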
Would anyone know of some more obvious things I can do in OracleDB in a CI environment?
Rol
(111 rep)
Feb 13, 2021, 07:12 PM
0 votes · 1 answer · 153 views
Deploying Oracle ADW Changes from development environment to production
I'm currently running an Oracle Autonomous Data Warehouse (Oracle's cloud-based database offering) on version 19c. I have a development database on the Always Free Tier and my production database is a paid instance (with backups etc). I will likely also be making a staging instance soon.
When making changes to the database schema, we have been working as follows:
1. Change the schema on the development environment using SQL statements (`CREATE`, `ALTER`, etc.)
2. Test the development environment manually
3. Change the schema on the production environment using identical SQL statements
4. Test the production environment to make sure nothing is broken
I am aware that this is not the way it should be done; we need to implement some CI/CD to streamline these changes. But we've only recently come out of our piloting phase, where it was a rush to get the product working, and I'm very keen to firm up our new systems.
How I'm imagining it'd go (changes in bold):
1. Change the schema on the development environment using SQL statements (`CREATE`, `ALTER`, etc.)
2. Test the development environment **with automated tests**
3. **Apply some tool to apply the same changes to my production system**
4. Test the production environment to make sure nothing is broken
I have seen that such tools exist for other SQL databases, such as MySQL, PostgreSQL etc., but from my searches on Oracle-specific tools, I'm not sure where to find one that can propagate the schema changes from our development database to a production database.
If you know of any tools that I can look at to do this functionality then I'd appreciate a pointer on where to look!
I'm aware this is similar to other questions, the ones I've looked at are:
- https://dba.stackexchange.com/questions/61688/altering-db-schema-in-production-environment
- https://dba.stackexchange.com/questions/28478/upgrade-production-database-during-software-development-lifecycle
Both of these questions were asked several years ago and so I'm not sure if new things have come up recently.
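On the tooling front, migration-based tools do cover Oracle: Liquibase and Flyway both support it, and newer releases of Oracle's SQLcl embed Liquibase. As a hedged illustration of what step 3 could look like with a Liquibase "formatted SQL" changelog (table and column names are hypothetical):

```sql
--liquibase formatted sql

--changeset ash:add-order-status
ALTER TABLE orders ADD (status VARCHAR2(20) DEFAULT 'NEW');
--rollback ALTER TABLE orders DROP COLUMN status;
```

Running `liquibase update` against each environment then applies the same versioned changes to development, staging and production in turn.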
Ash Oldershaw
(121 rep)
Jun 30, 2020, 01:12 PM
• Last activity: Jun 30, 2020, 03:14 PM
4 votes · 1 answer · 1155 views
Missing master.dacpac when publishing a DACPAC in Azure DevOps
**The problem**

I have been trying to get a build pipeline to work in Azure DevOps to demo how we could build and test our databases.

I've come to the point where I want to publish my DACPAC, but it gives me this error:

[screenshot: publish dacpac error]

This throws me off, because the master.dacpac is in the same directory as the dacpac that I'm trying to deploy.

**Let's first explain how I set everything up.**

I have two stages, build and test.

**The build stage**

The build runs on a hosted agent with this pool:

```yaml
vmImage: vs2017-win2016
```

The build stage runs fine, and in the end it copies all the files using the CopyFiles task:

```yaml
- task: CopyFiles@2
  displayName: "Copy files to artifact directory"
  inputs:
    Contents: |
      **\$(database)-Tests\bin\$(configuration)\*.*
      **\*.publish.xml
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
    FlattenFolders: true
    OverWrite: true
```

This results in a directory containing all the dacpac files and publish profiles.

The stage then publishes the artifact:

```yaml
- task: PublishBuildArtifacts@1
  displayName: 'Publish artifact'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: $(artifactname)
    publishLocation: 'Container'
```

This all works as it's supposed to.

**The testing stage**

During the testing stage I want to run a docker container that uses the latest SQL Server on Linux image.

The testing stage uses this pool:

```yaml
vmImage: ubuntu-18.04
```

I set up these variables in the stage:

```yaml
variables:
  dockerimage: 'mcr.microsoft.com/mssql/server:2017-latest'
  dockerpublishport: 1433
  dockername: sql1
  dockersqlpw: ''
  testresultpath: $(Build.Repository.LocalPath)/build
  dacpacfile: $(System.ArtifactsDirectory)/$(artifactname)/$(database)-Tests.dacpac
  publishfile: $(System.ArtifactsDirectory)/$(artifactname)/$(database)-Tests.publish.xml
```

The first step is to download the artifact that was created during the build stage:

```yaml
- task: DownloadBuildArtifacts@0
  displayName: 'Download Artifacts'
  inputs:
    buildType: 'current'
    downloadType: 'specific'
    downloadPath: '$(System.ArtifactsDirectory)'
```

Because you have to use sqlpackage to deploy the dacpac, I also created a task to download that executable:

```yaml
- task: Bash@3
  displayName: 'Install sqlpackage'
  inputs:
    targetType: 'inline'
    script: |
      echo 'Creating sqlpackage dir'
      sudo mkdir $(Build.Repository.LocalPath)/sqlpackage
      # Install sqlpackage
      echo 'Downloading sqlpackage'
      sudo wget -q -O $(Build.Repository.LocalPath)/sqlpackage/sqlpackage.zip https://go.microsoft.com/fwlink/?linkid=2108814
      echo 'Extracting sqlpackage.zip'
      sudo unzip -qq $(Build.Repository.LocalPath)/sqlpackage/sqlpackage.zip -d $(Build.Repository.LocalPath)/sqlpackage
      echo 'Changing sqlpackage permissions'
      sudo chmod +x $(Build.Repository.LocalPath)/sqlpackage/sqlpackage
```

I then start the container with the `docker run` command:

```yaml
- task: Bash@3
  displayName: 'Start SQL Server Container'
  inputs:
    targetType: 'inline'
    script: 'docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=$(dockersqlpw)" -p $(dockerpublishport):1433 --name $(dockername) -d $(dockerimage)'
```

Up to here everything is running fine. The task that gives me trouble is the actual publish of the dacpac:

```yaml
- task: Bash@3
  displayName: 'Publish DACPAC'
  inputs:
    targetType: 'inline'
    script: |
      echo 'Current Directory'
      pwd
      echo 'Changing directory to artifact dir'
      cd $(System.ArtifactsDirectory)/$(artifactname)
      echo 'Current Directory'
      pwd
      echo 'Get files in directory'
      ls -la
      echo '$(Build.Repository.LocalPath)/sqlpackage/sqlpackage /a:Publish /tsn:$(dockerip) /tdn:$(database) /tu:sa /tp:"$(dockersqlpw)" /sf:$(dacpacfile) /pr:$(publishfile) /p:IncludeCompositeObjects=true'
      sudo $(Build.Repository.LocalPath)/sqlpackage/sqlpackage /a:Publish /tsn:$(dockerip) /tdn:$(database) /tu:sa /tp:"$(dockersqlpw)" /sf:$(dacpacfile) /pr:$(publishfile) /p:IncludeCompositeObjects=true
```

The task executes the following:

1. return the current directory
2. change to the artifact directory and show that I'm actually in it
3. show the files to be sure that they're all there
4. run the command to publish the dacpac

**What have I already done to fix it**

The first thing I checked was of course whether the dacpac was there; that's what the debug output in the task is for, to show all the files.

The next thing I did was to make sure that the project file did not have the path bug where it specifies the direct path but uses the `$(dacpacroot)` variable.

Besides those tests, I also did a whole bunch of testing with permissions, making sure that I ran the command with `sudo`, and even tried putting the actual master.dacpac in my solution to reference it locally, but nothing worked.

I've come to the point that I have tried every little thing I could think of to make this work.

Does anybody know what I can do to fix this?
SQLStad
(366 rep)
Nov 5, 2019, 08:11 AM
• Last activity: Dec 29, 2019, 06:03 AM
-1 votes · 1 answer · 579 views
Automated testing of SQL scripts
I'd like to run some simple tests on SQL scripts in CI to just ensure the scripts are valid SQL (PostgreSQL). While I could run Postgres in Docker or even use an in-memory DB such as Sqlite (it's close enough), that'd require creating the tables etc. first and the people writing these scripts are too lazy to put in all that effort.
I've done a lot of Googling and to my dismay, I wasn't able to find a single tool to do only syntax validation and ensure parsability. Does anyone know of a tool/framework, in any language, that can do this?
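For what it's worth, one class of tool that matches this description is an ecpg-based syntax checker such as pgsanity, which parses PostgreSQL syntax without a server, schema or data (a sketch; the directory name is an assumption):

```bash
# pgsanity wraps ecpg (PostgreSQL's embedded-SQL preprocessor) as a
# parse-only check: no running server or pre-created tables needed.
pip install pgsanity
find ./sql-scripts -name '*.sql' -print0 | xargs -0 pgsanity
```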
lfk
(129 rep)
Nov 12, 2019, 07:45 AM
• Last activity: Nov 12, 2019, 10:52 PM
0 votes · 0 answers · 248 views
Add SQL Server Level Login in master Database using Visual Studio 2019
I'm trying to add a server login in Azure SQL Server using a Visual Studio 2019 SQL Project with SSDT, and deploy it using Azure DevOps by building a CICD pipeline. The rest of the database objects (e.g. tables, sprocs, external data sources, external tables, etc.) are created successfully, except the database logins.
My database project already has a security folder that includes login scripts (since I'm importing from an existing database). The problem is that, in order to add a server login, you must add it to the master database, and my CICD pipeline is set to a non-master database.
How do I add a server login to an Azure SQL database using VS2019 and CICD via Azure DevOps?
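One commonly cited alternative, sketched here in case it fits: contained database users live entirely inside the user database, so SSDT can deploy them without ever touching master (the user name and password are placeholders):

```sql
-- Contained database user: no server login in master required.
-- Run in the user database; supported on Azure SQL Database.
CREATE USER app_user WITH PASSWORD = 'placeholder-P@ssw0rd!';
ALTER ROLE db_datareader ADD MEMBER app_user;
```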
Wulung Triyanto
(101 rep)
Jul 31, 2019, 06:10 PM
5 votes · 1 answer · 1218 views
How can we manage cross-database dependencies across environments?
I've put off asking this question for some time, since it's tricky to encapsulate our situation and challenges without a wall of text, but the situation is getting worse, so I'll do my best. I'm looking for some assistance in improving the way we develop and manage our application database and developer environments, particularly with regard to database dependencies across environments.
### About us
We're a medium-sized company with a fair amount of legacy code. To give a sense of what our current application database looks like, here are some ballpark numbers: 50 GB, 450 tables, 200 views and 400 stored procedures. In addition, our production server runs approximately 15 databases, most of which need or are needed by our application database.
To clarify: when I say "need", I'm referring to database objects that will not compile, or will compile but not run, without the dependency. Examples of these objects are server objects like Linked Servers and Replication Subscriptions, or database objects like Stored Procedures and Views.
Over the past year, we've made significant improvements to the way we develop and deploy our database. Improvements so far include the introduction of dedicated developer environments, version control of (nearly) all database code, automated deployment from Git (trigger-based) and the transition to a SQL Server cluster.
### The issue
What we're struggling with, and which I cannot seem to find suitable resources for, is how to deal with dependencies from our application database to other databases. These dependencies fall into two separate challenges:
**1. Databases on the same server**
At the moment of speaking, our application database depends upon 5 databases on the same server. These are databases with separate repositories, deployment pipelines, libraries and web projects. While bootstrapping developer environments, we must take care to create these in a particular order to successfully apply DDL and DML scripts, lest we face dependency errors. This process alone is causing a lot of headache. In fact, it is causing so much headache that some of our developers have simply given up on local developer environments and do all of their development in a shared database.
**2. Databases on a remote server only accessible to production**
In our production environment, we import data from a handful of remote SQL Server instances. Some of this data is imported using stored procedures that reference remote servers using Linked Server objects. In order for the stored procedure to run, the Linked Server object needs to exist. In order for the Linked Server object to "successfully" exist, the remote server it references must be reachable. The remote servers are only accessible from our production server (and rightly so), but this causes our stored procedures not to compile properly during deployment.
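To make the linked-server coupling concrete, here is a sketch of the usual indirection layer (server, database and object names are hypothetical): stored procedures reference a synonym, and each environment points the synonym at a source it can actually reach.

```sql
-- Production: the synonym targets the real remote object via the linked server.
CREATE SYNONYM dbo.RemoteOrders
    FOR [ProdLinkedServer].[SalesDb].[dbo].[Orders];

-- A developer environment could instead point it at a local stub table:
-- CREATE SYNONYM dbo.RemoteOrders FOR [dbo].[Orders_Stub];
```

Procedures that select from `dbo.RemoteOrders` then compile in every environment, regardless of whether the linked server exists there.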
In the book "Continuous Delivery", author Dave Farley emphasizes that in true Continuous Integration, every single resource required to assemble and run a project should reside in its repository. Additionally, he specifies that every environment should be identical (with the exception of configurations such as credentials and connection strings). Our application does not satisfy these principles, and I'm not even sure if it's feasible to do so.
**Our tools**
- Database server: Microsoft SQL Server 2017
- Build server: Visual Studio Team Services
- Build tools: Redgate DLM Automation Suite

It feels like I'm missing out on some core architectural principles here. What can we do to alleviate these issues? Suggestions for relevant literature are also welcome.
krystah
(747 rep)
Jun 13, 2018, 01:28 PM
• Last activity: Jun 23, 2018, 12:48 PM