Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
2
votes
1
answers
667
views
Split SSIS project into multiple files to avoid merge hell
Since SSIS files do not merge nicely, I was wondering how to split up a big SSIS package so that multiple devs can work on it simultaneously with a minimal chance of merge conflicts.
In SSIS 2016 I found these options:
- ***package parts***, but apparently they don't share connection managers. I don't want to have 100 different connection managers.
- ***subpackages***, but this doesn't look very clean, and I also wonder whether that is what they are intended for. The debugger also goes crazy opening the subpackages while running. Are there any other drawbacks I should know of?
I can't be the only person with this problem. Is there another way to achieve this?
Sam Segers
(129 rep)
Jan 19, 2017, 10:47 AM
• Last activity: Aug 5, 2025, 06:05 AM
0
votes
1
answers
171
views
How to route traffic to an Azure SQL server via a jump server to avoid whitelisting the dev machine?
I have to connect to an Azure SQL server (e.g. mydatabase.azuredb.windows.net:1433). In order to do so, I have to ask my admin to whitelist my machine's IP, and it works fine that day. The next day my machine restarts and I get another IP address, so I have to ask and wait for whitelisting again, and the cycle continues. We work over a VPN and have an on-premise Windows server behind the VPN that has a static IP.
Is it possible to route Azure SQL server traffic from my machine via this server?
AsitK
(101 rep)
Sep 11, 2023, 03:48 PM
• Last activity: Jul 10, 2025, 12:04 AM
2
votes
2
answers
729
views
Is there a best practice for resolving dependencies of SQL objects for build during development?
I have a greenfield project involving a SQL database that is under development. There is no production data involved yet. There is a rather long list of specifications which I expect can be implemented with a collection of SQL tables, views, functions, and procedures. The overview of the inner loop of the development process is basically the following:
1. drop and recreate the database
2. create the tables, views, etc
3. import test data
4. run tests
5. make changes to code
6. repeat beginning at (1)
The development has already involved a number of refactorings. The results of these refactorings are good. However, they usually precipitate a period of [dependency hell](https://en.wikipedia.org/wiki/Dependency_hell#:~:text=Dependency%20hell%20is%20a%20colloquial,versions%20of%20other%20software%20packages.) in step (2), during which I have to manually re-resolve the order in which the files defining the objects are executed. That file order is currently hard-coded as a list of file names in a script. Each file has exactly one object defined in it.
The process of manually resolving dependencies works, but is labor-intensive. For C and .Net projects in the past I have used [make](https://en.wikipedia.org/wiki/Make_(software)#:~:text=In%20software%20development%2C%20Make%20is,to%20derive%20the%20target%20program.) and [MSBuild](https://en.wikipedia.org/wiki/MSBuild) to manage dependencies, respectively. I can almost imagine how to manage SQL dependencies with those tools, but I haven't seen it done and I'd rather not re-invent the wheel if there is a method that is already proven. A search of stackoverflow for [makefile](https://stackoverflow.com/questions/tagged/makefile+sql) and [MSBuild](https://stackoverflow.com/questions/tagged/msbuild+sql) with SQL didn't produce any results that were obviously on-point.
Is there a best practice for resolving dependencies of SQL objects for build during development?
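The question doesn't name the engine, but if it is SQL Server (an assumption), one proven building block is to read the dependency graph the engine already tracks and topologically sort it, rather than maintaining the file order by hand. A minimal sketch against a scratch build of the database:

    -- Sketch: object-to-object dependencies recorded by SQL Server; a build
    -- script can topologically sort this to derive the execution order.
    SELECT
        OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
        OBJECT_NAME(d.referencing_id)        AS referencing_object,
        d.referenced_schema_name,
        d.referenced_entity_name
    FROM sys.sql_expression_dependencies AS d
    WHERE d.referenced_id IS NOT NULL
    ORDER BY referencing_schema, referencing_object;

For what it's worth, SSDT database projects (.sqlproj/DACPAC) resolve object build order automatically during compilation, which may cover the MSBuild angle without reinventing the wheel.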
alx9r
(449 rep)
Feb 25, 2023, 05:59 PM
• Last activity: Apr 5, 2025, 03:47 PM
0
votes
1
answers
252
views
Is there a version of System.Data.SqlClient that does not support Encryption for Connections?
I am looking at an IIS server that hosts tens of applications. Some of them are pretty old. All of them use connection strings that are stored in machine.config in C:\Windows\Microsoft.NET\Framework64.
All connection strings are pointing to SQL Server 2016, and are using System.Data.SqlClient for connections:
providerName="System.Data.SqlClient"
The goal is to migrate the data to a new Azure SQL Managed Instance and replace the connection strings so they point to the new server.
In Azure SQL Managed Instance, encryption for connections is mandatory, you can't turn it off. You can only select minimal TLS version (1.0, 1.1 or 1.2).
MI connection strings contain `Encrypt=True;TrustServerCertificate=False`. I've tried setting `Encrypt=False` in SSMS as a test, but when querying `sys.dm_exec_connections`, `encrypt_option` was still showing `TRUE`.
Do all versions of System.Data.SqlClient support encryption for connections, or are there some old versions that do not support it?
I am asking because I want to prevent a situation where, after migration, some app stops working because its client driver does not support encryption.
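Not an answer to the driver-version question itself, but while the apps still point at the SQL Server 2016 instance, the DMVs can inventory which client libraries connect and whether their sessions already negotiate encryption. A small sketch:

    -- Sketch: which hosts/drivers are connecting and whether encrypted.
    SELECT
        s.host_name,
        s.program_name,
        s.client_interface_name,   -- e.g. ".Net SqlClient Data Provider"
        c.encrypt_option,
        COUNT(*) AS connection_count
    FROM sys.dm_exec_connections AS c
    JOIN sys.dm_exec_sessions AS s ON s.session_id = c.session_id
    GROUP BY s.host_name, s.program_name, s.client_interface_name, c.encrypt_option;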
Aleksey Vitsko
(6195 rep)
Mar 13, 2024, 01:29 PM
• Last activity: Mar 15, 2024, 01:32 PM
0
votes
1
answers
84
views
How can I get started with developing on PostgreSQL?
Is there any tutorial on how to build, debug, or add features to the PostgreSQL database source code, or even develop and debug extensions?
I am interested in adding some features to PostgreSQL logical replication. I have a few years of experience writing C++, so I need a warm-up.
Stavros Koureas
(170 rep)
Jun 13, 2023, 07:43 AM
• Last activity: Jun 14, 2023, 07:23 AM
2
votes
2
answers
660
views
Is database development stages the same as software development?
In software development, there is a series of stages that a project goes through - Dev, Test, UAT, Staging, Demo and Production.
This is what I feel is correct, after reading and researching on the Internet.
Database development
1. Dev the database in a separate DB (with the same version as prod and using only test data)
2. Test the dev database (with the same version as prod and using only test data)
3. Roll out to the prod database
My questions are:
1. Am I correct on the above database development stages?
2. Is there an equivalent of UAT/Staging/Demo in database development?
3. If point 1 is correct, how do people create/work on the dev/test database and ultimately push it to the prod database?
P.S: I am new to database so please go easy on me! Thank you!
SunnyBoiz
(153 rep)
Aug 9, 2022, 02:12 AM
• Last activity: Aug 23, 2022, 05:13 AM
-2
votes
1
answers
2651
views
Row_number to not increment if values on three columns are the same from the row before
Suppose I have a dataset that looks like the screenshot below and I need to add a count column for the rows based off the `orderid`, `DocumentNo_`, and `CurrPackageTrackingNo` columns with the following logic:
* Count starts at 1
* If `orderid` is different from the `orderid` on the previous row, then the count should reset to 1.
* If the `orderid` and `DocumentNo_` are the same as the previous row but `CurrPackageTrackingNo` is different, then the count should increment by 1.
* If the `orderid`, `DocumentNo_`, and `CurrPackageTrackingNo` are all the same as the previous row, then the count should remain at 1.
So far I'm able to get the logic of the first three bullet points using the query:

select *
,row_number() over (partition by orderid order by packageNo_) as ranking
from tblsample
order by PackageNo_
But for rows 15-17 in the screenshot below, the ranking column keeps incrementing when it's supposed to stay at 1 for those rows. Can someone give me some insight into how I can fulfill that logic?
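Not a full answer, but one direction that may clarify the intent: if tracking numbers within an order do not alternate back and forth (an assumption about the data), `DENSE_RANK()` gives identical (`orderid`, `DocumentNo_`, `CurrPackageTrackingNo`) combinations the same value and only increments when the tracking number changes, resetting per `orderid`:

    -- Sketch: dense_rank keeps ties equal and increments on a new tracking
    -- number within an order; it ranks by value, not by PackageNo_ sequence.
    select *
        , dense_rank() over (partition by orderid
                             order by DocumentNo_, CurrPackageTrackingNo) as ranking
    from tblsample
    order by PackageNo_;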

Keith Rivera
(617 rep)
May 9, 2022, 08:28 AM
• Last activity: May 9, 2022, 08:49 AM
6
votes
3
answers
549
views
Recommended SQL Server transaction logs settings for ephemeral databases?
I'm looking for recommendations on how should I set up transaction logs for databases that are used by a development team. These databases are ephemeral in the sense that we never care about data recovery in case of hardware/software failure. Instead, every time a developer starts a task, they will create a new database and fill it with data from scratch anyway, so they'd do so as well in case of hardware failure. Another use case is automated testing, where a new database is created for each test run.
Right now, due to the usage patterns of developers (testing different types of queries, frequent data bulk-loading), the logs grow a lot and are more of a hindrance than help. We've seen cases where the logs started taking half a terabyte after only an hour of the developer's work, forcing them to manually truncate the logs. As we don't want to manually truncate the logs during automated tests, we need to allocate larger machines for them. I suspect that more I/O is required, slowing down operation.
Any recommendations I could find in both the SQL Server documentation and other materials are for production server and focus on data recovery, which is exactly the opposite of what I'm looking for.
What are some good practices for configuring SQL Server transaction logs when data recovery is irrelevant and ease of operation, resource usage, and raw speed are the bigger concerns?
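For what it's worth, the usual levers in this scenario are the simple recovery model, delayed durability, and pre-sizing the log so it never autogrows during bulk loads. A minimal sketch, assuming a hypothetical database `DevDb` with a log file whose logical name is `DevDb_log`:

    -- Sketch: throwaway-database settings where recoverability doesn't matter.
    ALTER DATABASE DevDb SET RECOVERY SIMPLE;             -- log truncates at checkpoints
    ALTER DATABASE DevDb SET DELAYED_DURABILITY = FORCED; -- trade durability for less log flushing (2014+)
    ALTER DATABASE DevDb MODIFY FILE
        (NAME = DevDb_log, SIZE = 8GB, FILEGROWTH = 1GB); -- pre-size to avoid autogrow stalls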
liori
(289 rep)
Feb 25, 2021, 12:10 PM
• Last activity: Feb 25, 2021, 01:05 PM
4
votes
2
answers
3260
views
What would be the right permissions to allow everything else but overwriting or creating a database?
I have a test environment where the AD group `mycompany\developers` is currently `sysadmin`.
Recently I have had some issues, especially regarding people restoring databases. My concerns are:
1. sometimes the database is involved in replication
2. the permissions are overwritten - the correct way is to save the current permissions before the restore and re-apply them after the restore
3. have you checked the currently available disk space on all drives, especially those used in this restore?
4. there might be other people using or working on that specific database, and restoring over it without communication can cause someone else's work to be lost
5. in the dev environment the databases can be in simple recovery mode - have you shrunk the log and changed the database to simple recovery mode?
6. in this particular case, the full backup for every user database is going to run automatically every night, but anyway, have you checked for scheduled backups?
7. have you deleted from the server the backup that you used to do the restore?
8. the folders and drives for data and logs are different in live and test - are you sure you moved the files to the right places?
9. have you had a look at orphaned users and logins?
I particularly have concerns when developers create new databases, even in the dev environment. Why? Because they start by creating the database in dev and then just ask me to copy it to live. Have you done any capacity planning for this database - how big it should be and how much it will grow in a month?
1. have you isolated or considered different filegroups for demanding objects/indexes?
2. have you checked the currently available disk space on all drives, especially those used in this DB creation?
3. how big is the transaction log and why? Initially I like to set it to 1.5 times the biggest clustered index.
4. what are the autogrowth settings?
5. what are the current permissions?
I have realised some people are not so easy to deal with, so I would rather reduce their power over that server.
I don't want to do their jobs, but I would like to restrict them from creating new databases or overwriting existing ones, whether by restoring or by attaching files, etc.
They should still be able to create and run jobs, create/alter any object within user databases, create logins, grant permissions, etc.
Basically the question is:
How to limit the power of a current sysadmin?
I thought about not granting sysadmin but a set of server permissions (excluding those to create/alter databases) that would allow them to do everything else.
what would that set of server permissions be?
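Not a complete policy, but the general shape of one answer (a sketch using the group name from the question): take the group out of `sysadmin`, grant back the broad server permissions they still need, and explicitly deny database creation/alteration. Note that `DENY` has no effect while the login is still a member of `sysadmin`, and job or per-database rights (e.g. `db_owner` in user databases, the SQLAgent roles in `msdb`) would be granted separately.

    -- Sketch: replace sysadmin membership with explicit grants and denies.
    ALTER SERVER ROLE sysadmin DROP MEMBER [mycompany\developers];

    GRANT ALTER ANY LOGIN, ALTER ANY LINKED SERVER,
          VIEW SERVER STATE, ALTER TRACE TO [mycompany\developers];

    -- no creating new databases (or restoring ones that don't exist yet),
    -- and no ALTER DATABASE at server scope
    DENY CREATE ANY DATABASE TO [mycompany\developers];
    DENY ALTER ANY DATABASE TO [mycompany\developers];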
Marcello Miorelli
(17274 rep)
Nov 21, 2017, 11:01 AM
• Last activity: Nov 26, 2019, 10:09 AM
1
votes
1
answers
1619
views
How to Set Up a Shadow Production Database for PostgreSQL?
I'm working on a Django project and I'd like to avoid the common pitfall of using SQLite for development and PostgreSQL for production referenced in Two Scoops of Django .
I'd like to take this a step further and use a "shadow production database" (shadow prod DB) for PostgreSQL. How can I get this set up? I think that once the shadow prod DB is set up and configured, swapping out the database settings from "production" to "development" (shadow prod) will be trivial in the Django application's `settings.py` file.
However, when it comes to setting up the framework where the "shadow prod DB" is actually recording all of the same transactions that are occurring in the production database I have no idea where to start. Any help is greatly appreciated.
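One way to keep a second PostgreSQL database continuously fed with production writes is built-in logical replication (PostgreSQL 10+). This is only a sketch with hypothetical names and connection values, and it assumes the schema already exists on the shadow side:

    -- On the production database: publish the tables to mirror.
    CREATE PUBLICATION shadow_pub FOR ALL TABLES;

    -- On the shadow database (schema must already exist there):
    CREATE SUBSCRIPTION shadow_sub
        CONNECTION 'host=prod.example.com dbname=myapp user=replicator password=secret'
        PUBLICATION shadow_pub;

For the laptop question, a periodically refreshed `pg_dump` of a subset is usually more practical than streaming everything once the database outgrows local disk.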
Further, what do I do if I want to get the shadow prod DB onto a laptop? What happens when the database size grows prohibitively large?
Thanks, Scott.
For posterity, I originally heard about the concept of a "shadow prod DB" on the podcast Scaling Postgres which linked to this post from Heap.
Scott Skiles
(111 rep)
Jan 23, 2019, 07:13 PM
• Last activity: Jan 24, 2019, 06:29 PM
4
votes
1
answers
920
views
How to develop a database (workflow)?
**tl;dr:** I know a few SQL semantics and I know how databases work (*basically*) but I never created one that had a considerable size. I lack of knowledge concerning workflow, SQL-code management and very basic SQL programming practices (write by hand or use e.g. pgAdminIII). I need some "*instruct...
**tl;dr:** I know a few SQL semantics and I know how databases work (*basically*), but I never created one that had a considerable size. I lack knowledge concerning workflow, SQL-code management and very basic SQL programming practices (write by hand or use e.g. pgAdminIII). I need some "*instructions*"...
----------
Okay, this sounds like a very basic question and in fact it is a *very basic question*.
Five years of computer science and I never really had to develop a database by myself and now I am here and have no idea how to do that. Now, as a software developer writing Java or C or whatever the workflow is pretty clear of course. Thinking about design, creating some files, write code, use version control and commit/push. Next day repeat.
My problem is that I can't get my head around that *incremental* nature of creating a database. What I mean is that I don't know how my SQL script should look like if I commit it to my repository and if I use e.g. something like FlywayDB.
At the beginning I'd have something like this e.g.
DROP TABLE IF EXISTS company_employee;
CREATE TABLE company_employee (
id BIGSERIAL PRIMARY KEY
);
and I need that `DROP TABLE IF EXISTS` because while I am developing I might change that table all the time and therefore drop it each time before I recreate it. But do I want to commit that script?
I mean if somebody accidentally runs it on my production server it would drop all my tables and re-create them. So one question I'm asking is e.g. how do my scripts look like that I am actually committing to my e.g. git repository?
Do I have different versions of that file? One that is e.g. a "*development*"-version and another is a "*I am confident it doesn't break anything*"-version that I commit?
Another option could be that I e.g. just create the tables in the first step:
-- V1__create-company-tables.sql
CREATE TABLE company (
id BIGSERIAL PRIMARY KEY
);
CREATE TABLE company_employee (
id BIGSERIAL PRIMARY KEY
);
and in a second file I add the foreign keys:
-- V2__adding-company-tables-foreign-keys.sql
ALTER TABLE company_employee
ADD COLUMN company_id BIGSERIAL,
ADD CONSTRAINT fk_company_employee_company FOREIGN KEY (company_id) REFERENCES company(id);
But that would mean that I'll never have everything in one file - just fragments of my database in different files `V1` to `Vn`.
Another thing I could do is to just use pgAdminIII and add everything I need, then reverse engineer my database, run it with FlywayDB and commit that *generated* file to my repository. But I don't like that idea since pgAdminIII uses deprecated stuff like `WITH ( OIDS=FALSE )`.
So, I would be glad if someone could put me on track somehow and give me a sketch of what the actual development process of a database looks like.
Should I use pgAdminIII or write my whole database by hand?
How do I version my `.sql` files? Where do I store them on my local machine?
pgAdminIII doesn't even seem to have something like a "Database Project" - it just offers access to modify my database, but there is no source code management. I am really confused about what my setup would/should/could look like.
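One pattern that fits Flyway specifically (a hedged sketch, hypothetical file and object names): keep tables in the numbered `V1__`/`V2__` migrations as above, and put objects you expect to rewrite constantly (views, functions) in repeatable `R__` migrations, which Flyway re-applies whenever the file's checksum changes - so the dev-time drop/recreate script never has to be committed for them:

    -- R__company_employee_overview.sql  (hypothetical repeatable migration)
    -- Re-applied by Flyway whenever this file changes; safe to run repeatedly.
    CREATE OR REPLACE VIEW company_employee_overview AS
    SELECT c.id AS company_id,
           e.id AS employee_id
    FROM company c
    JOIN company_employee e ON e.company_id = c.id;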
Stefan Falk
(165 rep)
Sep 23, 2015, 07:12 PM
• Last activity: Oct 31, 2018, 01:46 PM
8
votes
2
answers
1481
views
Do agile software development approaches apply to SQL?
I would like to learn if agile software development methods/principles/patterns are valid to SQL programming as well. If yes, where would be a good place to start learning about that? Are there any articles or books targeting agile development in SQL context?
Κύριε ἐλέησον
(165 rep)
Feb 28, 2012, 10:07 PM
• Last activity: Oct 29, 2018, 09:46 AM
5
votes
1
answers
1217
views
How can we manage cross-database dependencies across environments?
I've put off asking this question for some time since it's tricky to
encapsulate our situation and challenges without a wall of text, but the
situation is getting worse so I'll do my best. I'm looking for some assistance in improving the way we develop and manage our application database and developer environments, particularly in regards to database dependencies across environments.
### About us
We're a medium-sized company with a fair amount of legacy code. To give a sense of what our current application database looks like, here are some ballpark numbers: 50 GB, 450 tables, 200 views and 400 stored procedures. In addition, our production server runs approximately 15 databases, most of which need or are needed by our application database.
To clarify: When I say "need", I'm referring to database objects that will not
compile/will compile but not run without the dependency. Examples of these
objects are server objects like Linked Servers and Replication Subscriptions, or
database objects like Stored Procedures and Views.
Over the past year, we've made significant improvements to the way we develop
and deploy our database. Improvements so far include the introduction of
dedicated developer environments, version control of (nearly) all database code,
automated deployment from Git (trigger-based) and the transition to a SQL Server
cluster.
### The issue
What we're struggling with, and which I cannot seem to find suitable resources for, is how to deal with dependencies from our application database to other databases. These dependencies fall into two separate challenges:
**1. Databases on the same server**
At the moment of speaking, our application database depends upon 5 databases on the same server. These are databases with separate repositories, deployment pipelines, libraries and web projects. While bootstrapping developer environments, we must take care to create these in a particular order in order to successfully apply DDL and DML-scripts, lest we face dependency errors. This process alone is causing a lot of headache. In fact, it is causing so much headache that some of our developers have simply given up on local developer environments and do all of their development in a shared database.
**2. Databases on a remote server only accessible to production**
In our production environment, we import data from a handful of remote SQL Server instances. Some of this data is imported using stored procedures that reference remote servers using Linked Server objects. In order for the stored procedure to run, the Linked Server object needs to exist. In order for the Linked Server object to "successfully" exist, the remote server it references must be reachable. The remote servers are only accessible from our production server (and rightly so), but this causes our stored procedures not compile properly during deployment.
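One technique that helps with both challenges (a sketch with hypothetical names, not a full answer): reference every external object through a synonym, so the code in each repository compiles everywhere and only the synonym definition differs per environment - pointing at the linked server in production and at a local stub table or copy in developer environments.

    -- Production: synonym resolves to the real remote object via a linked server.
    CREATE SYNONYM dbo.RemoteCustomers
        FOR [RemoteSrv].[SalesDb].[dbo].[Customers];

    -- Developer environment: same synonym name, local stub table instead.
    -- CREATE SYNONYM dbo.RemoteCustomers FOR dbo.Customers_Stub;

Stored procedures then reference `dbo.RemoteCustomers` only, and the per-environment part shrinks to one `CREATE SYNONYM` script.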
In the book "Continuous Delivery", author Dave Farley emphasizes that in true Continuous Integration, every single resource required to assemble and run a project should reside in its repository. Additionally, he specifies that every environment should be identical (with the exception of configurations such as credentials and connection strings). Our application does not satisfy these principles, and I'm not even sure if it's feasible to do so.
**Our tools**
- Database server: Microsoft SQL Server 2017
- Build server: Visual Studio Team Services
- Build tools: Redgate DLM Automation Suite

It feels like I'm missing out on some core architectural principles here. What can we do to alleviate these issues? Suggestions for relevant literature are also welcome.
krystah
(747 rep)
Jun 13, 2018, 01:28 PM
• Last activity: Jun 23, 2018, 12:48 PM
3
votes
1
answers
1304
views
Tool to test RDBMS SQL dialect for ANSI SQL standard compliance
We are developing an interface to our internal data storage and manipulation tool that would allow people to use SQL dialect.
We decided that it would be perfect to make our SQL dialect as close to the standard as possible.
Currently I'm trying to find a tool that would test our implementation of query language and make a verdict about its compliance and list missing details.
Has anyone ever encountered anything like this?
kovalad
(51 rep)
Jul 10, 2017, 11:11 AM
• Last activity: Jul 18, 2017, 11:35 AM
3
votes
3
answers
195
views
Who should own application objects?
So I just started doing stuff resembling full-blown Oracle DBA work recently, and am still learning much of the basics of the administration part of my job. I have been tasked with bringing multiple application databases up to meet a certain set of professional standards (I'm being intentionally vague). One of these standards is that owners of objects supporting the application by and large should not also be regularly utilizing them. The major exception is the DBA, but then only for certain classes of objects that we would regularly be maintaining, such as indexes. Other than when they are doing installation and maintenance tasks, the application object owners should be disabled.
This finally brings me to my question: who should be owning these application objects. Should we just have a user who creates the objects and then is largely disabled for the rest of its life and the requisite usage privileges be given to this or that role? Is it fine for the DBA to own all of the application objects, or should they just own those objects that are being maintained on a daily/weekly basis? Obviously the exact answer will depend the organization's needs and what specifically this standard is asking, but what is the best practice here for application object ownership?
This question may be too broad, so let me know if I need to edit it to make it more specific. I just don't really know where to start with this or even what the right questions to ask are. Thanks for any help you can provide.
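A common shape for this in Oracle (a sketch with hypothetical names, not a statement of your specific standard): a dedicated schema owner that stays locked outside deployment windows, with runtime access granted through a role to separate application accounts.

    -- Sketch: locked schema-owner account plus a runtime role.
    CREATE USER app_owner IDENTIFIED BY "ChangeMe_123" ACCOUNT LOCK;
    GRANT CREATE TABLE, CREATE VIEW, CREATE PROCEDURE, CREATE SEQUENCE TO app_owner;

    CREATE ROLE app_runtime;
    GRANT SELECT, INSERT, UPDATE, DELETE ON app_owner.orders TO app_runtime;  -- repeat per table
    GRANT app_runtime TO app_service_account;  -- hypothetical application account

    -- Unlock only for installs/maintenance, then lock again:
    -- ALTER USER app_owner ACCOUNT UNLOCK;
    -- ALTER USER app_owner ACCOUNT LOCK;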
Jeff Bauersfeld
(335 rep)
Jun 2, 2017, 06:47 PM
• Last activity: Jun 3, 2017, 12:48 AM
0
votes
1
answers
1570
views
SQL Server - Data Warehouse large log file, how to safely shrink?
My DW database is about 20 GB in the data file and nearly 120 GB in the log file.
Data staleness is until midnight (so "yesterday's news") and multiple databases are replicated daily from production servers. I'm also transforming the data at this time to get more analytical/synthetic information in one place for various consumers (accounts, marketing, management reports), so even complete data loss means about a 30-minute setback for all stored procs to re-run and repopulate all tables again. That means, yes, that point-in-time recovery is not (and will not be) a requirement.
Obviously then, the schema itself is much more valuable than the data and that's what must be protected (and is). Nonetheless, the DB is in full recovery mode and backups are being run daily, in addition to ad-hoc ones when I expect to be "breaking things".
It's possible the snapshot isolation I've enabled on it had a big impact on the size as well, but ultimately it proved pointless, as I'm also using data across 12 databases (for now) and setting them all up the same way is impossible (various compatibility levels; I don't want to meddle with that). So I'm happy to switch it off again.
But, having said all that, I can't afford to have the database inaccessible for long, as a number of people use it to do their work, even if it's still in an early development phase. SSRS runs several dozen subscriptions off it as well (RS is on a different server, if that helps). It is also exposed to a BI tool (Tableau) for a number of web users, so if we don't disrupt that either, it would be good.
Obviously - again - large log files are due to a large number of queries being cancelled, erroring out and/or simply crashing. And since I'm the only user who has insert/update/delete permissions, and that work is mostly done outside of working hours anyway, transactions have already been committed for quite a while.
So, my question is: how do I safely shrink the database to something more compact? And definitely in a "rinse and repeat" pattern for the foreseeable future? There is no point wasting space backing up data I'm not really required to keep.
Additional info: this may also apply to the remaining 12 DBs (replicated from production servers).
The main reason for asking is that I may need the database to be more portable so that I can work on it offline, and since the other DBs are already quite large, space really counts. And I'm not really a DBA, so I'm not sure of the best course here.
EDIT:
After reading the linked articles and answers to a similar question, there is one additional question that bugs me: if I do a log backup (as only then will the log file stop growing) and then delete those backups (as I don't really need them - or do I?), will the log file start growing again? Do I understand correctly that I need to shrink and switch to the simple recovery model afterwards to prevent that?
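For completeness, the sequence for a database where log backups add no value usually looks like this (a sketch with hypothetical database and logical file names); once in simple recovery, no log backups are needed at all:

    USE DW;
    ALTER DATABASE DW SET RECOVERY SIMPLE;  -- log now truncates at checkpoints
    CHECKPOINT;
    DBCC SHRINKFILE (DW_log, 2048);         -- logical log file name, target size in MB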
AcePL
(349 rep)
Apr 24, 2017, 02:01 PM
• Last activity: Apr 24, 2017, 04:49 PM
1
votes
2
answers
1811
views
How to set up development servers SQL Server
My .NET development team has been massively restructured recently, and as a result of the restructure, we now have the ability to do things like set up source control and database backups, and development databases. However, none of us is really a DBA, and so we have no clue how to do this on the DB side.
We would like to have some way to make it so that developers can be working with real data for testing, but are unable to change the data on the production server. Right now, we just have one database, so if we need to test a change to the website, we have to make dummy records to test.
Basically, we know this is a bad setup, but we don't know any better, and we really want to change. I can't find anything online that gives an overview of how to actually set up a development environment which can be used for both application development and database development.
I've looked at https://dba.stackexchange.com/questions/115964/how-to-develop-a-database-workflow and https://dba.stackexchange.com/questions/12557/how-to-setup-local-database-development-process-for-small-web-team , but the first one doesn't address my problem, and the second one assumes a lot of knowledge that I don't have. I don't know anything about "normal" database development practices, rollout/rollback scripts, how to write tests for database development, schema comparison tools, etc. (These are all terms I've seen in my search, but I don't know where to go to learn about them.)
Does anyone know of a place where I can go to learn about this stuff? And/or can anyone give me some advice on how to set up the environment that I don't know how to describe?
My team is very averse to using TFS. I'm not sure why (I've never used it myself), but they like Git, and I know Git, so that is the source control we've decided on. My main question was actually _how_ to set up separate servers for the different environments, while having current data to work with in development.
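On the "current data in development" part specifically, the usual low-tech building block is restoring last night's production backup onto the dev server on a schedule. A hedged sketch with hypothetical paths and logical file names:

    -- Sketch: refresh a dev copy from a production backup (run on the dev server).
    RESTORE DATABASE MyAppDev
    FROM DISK = N'\\backupshare\MyApp_full.bak'
    WITH MOVE N'MyApp_Data' TO N'D:\Data\MyAppDev.mdf',
         MOVE N'MyApp_Log'  TO N'D:\Log\MyAppDev_log.ldf',
         REPLACE, STATS = 10;

If the production data contains anything sensitive, a scrubbing script run right after the restore is the usual companion step.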
Kristen Hammack
(187 rep)
Oct 14, 2016, 06:45 PM
• Last activity: Oct 19, 2016, 01:13 AM
2
votes
1
answers
586
views
Logical condition when declaring handler procedure
I have a procedure in MySQL that executes a handler for `SQLEXCEPTION` to read the errno code and insert it into a table.
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
GET DIAGNOSTICS CONDITION 1 @sqlstate = RETURNED_SQLSTATE,
@errno = MYSQL_ERRNO, @text = MESSAGE_TEXT;
I'm expecting something like this
DECLARE EXIT HANDLER FOR SQLEXCEPTION OR SQLWARNING
BEGIN
GET DIAGNOSTICS CONDITION 1 @sqlstate = RETURNED_SQLSTATE,
@errno = MYSQL_ERRNO, @text = MESSAGE_TEXT;
Is there a chance to use `OR` or `AND` logical operators in a SQL handler? Or, can I get some advice to get the `WARNING` and `ERROR` messages in one `DECLARE` line?
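MySQL doesn't accept `OR`/`AND` there, but a single `DECLARE ... HANDLER` can list several condition values separated by commas, which covers the one-`DECLARE`-line part. A minimal sketch (the `error_log` table is hypothetical):

    DELIMITER $$
    CREATE PROCEDURE demo_handler()
    BEGIN
        -- One handler, two conditions: fires on errors and on warnings.
        DECLARE EXIT HANDLER FOR SQLEXCEPTION, SQLWARNING
        BEGIN
            GET DIAGNOSTICS CONDITION 1
                @sqlstate = RETURNED_SQLSTATE,
                @errno    = MYSQL_ERRNO,
                @text     = MESSAGE_TEXT;
            INSERT INTO error_log (sqlstate, errno, message)  -- hypothetical table
            VALUES (@sqlstate, @errno, @text);
        END;

        -- ... statements that may raise warnings or errors go here ...
        SELECT 'done' AS status;
    END$$
    DELIMITER ;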
tachomi
(313 rep)
May 18, 2016, 10:06 PM
• Last activity: May 25, 2016, 05:22 PM
2
votes
1
answers
1256
views
Laravel MySQL relation difficulties
I have a "users" table: [![enter image description here][1]][2] And an organisations table ("orgs"): [![enter image description here][3]][4] And this is the pivot table: [![enter image description here][5]][5] I have some relations: On the "users" table, I have a function called "orgs()" which is a...
I have a "users" table:
And an organisations table ("orgs"):
And this is the pivot table:
I have some relations:
On the "users" table, I have a function called "orgs()" which is a belongsToMany relation to "orgs" (org_user,org_id,user_id).
On the "orgs" table, I have a function called "users()" which is a hasMany relation to "users" (org_user,org_id_user_id).
Now, in Laravel, when I do:
$orgUser = new OrgUser;
$orgUser->org_id=$org->id;
$orgUser->user_id=$user->id;
$orgUser->role_id=10;
$orgUser->save();
I get the following error:
SQLSTATE: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails (`eamorr`.`org_user`, CONSTRAINT `org_user_role_id_foreign` FOREIGN KEY (`role_id`) REFERENCES `roles` (`id`)) (SQL: insert into `org_user` (`org_id`, `user_id`, `role_id`, `updated_at`, `created_at`) values (3, 1, -1, 2015-07-22 10:29:04, 2015-07-22 10:29:04))
I would have thought I could just do something like `$org->users()->attach(...)` or similar, instead of having to reference the "org_user" table directly?
Does anyone have any experience with this?
Here's the User.php model:
namespace App;
use Illuminate\Auth\Authenticatable;
use Illuminate\Contracts\Auth\Authenticatable as AuthenticatableContract;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\SoftDeletes;
class User extends Model implements AuthenticatableContract {
protected $table = 'users';
public $timestamps = true;
use Authenticatable;
use SoftDeletes;
protected $dates = ['deleted_at'];
public function orgs()
{
return $this->belongsToMany('\App\Org', 'org_user', 'org_id', 'user_id')->withPivot('role_id');
}
public function files()
{
return $this->hasMany('\App\File', 'user_id');
}
public function clients()
{
return $this->hasMany('\App\User', 'user_user', 'owner_id', 'client_id');
}
public function tags()
{
return $this->hasMany('\App\UserTag', 'user_id');
}
}
And here's the Org.php model:
namespace App;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\SoftDeletes;
class Org extends Model {
protected $table = 'orgs';
public $timestamps = true;
use SoftDeletes;
protected $dates = ['deleted_at'];
public function files()
{
return $this->hasMany('\App\File', 'org_id');
}
public function locations()
{
return $this->hasMany('\App\Location', 'org_id');
}
public function tags()
{
return $this->hasMany('\App\OrgTag', 'org_id');
}
public function users()
{
return $this->hasMany('\App\User', 'org_user', 'org_id_user_id')->withPivot('role_id');
}
}
And here's the OrgUser.php model:
namespace App;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\SoftDeletes;
class OrgUser extends Model {
protected $table = 'org_user';
public $timestamps = true;
use SoftDeletes;
protected $dates = ['deleted_at'];
}
Eamorr
(125 rep)
Jul 22, 2015, 11:00 AM
• Last activity: Jul 22, 2015, 11:38 AM
3
votes
1
answers
1191
views
Migrate small sets of data from production to staging/sandbox
I have a development environment that consists of a production, staging and development servers that host SharePoint 2010 or 2013 as well as SQL Server 2008 R2. These each need to have relevant data, though there is limited bandwidth and hardware price restrictions that we must abide by.
Is there any way we can take a small portion of production data, and transfer it to our staging and development environments? And by small portion, I mean, say, the last month of data. So the storage impacts will be minimal as well as time consumption to do additional backups and/or transfer times. Our production database is a couple terabytes.
**Possible restrictions:**
- Limited bandwidth for transfers between domains (prod -> dev)
- Huge database sizes
- SharePoint: simply keeping the schema and having the last month of updated data will cause a lot of orphaned / corrupted data, so there has to be some kind of seeded data to keep integrity
The overall goal is to reduce the database size of development and staging servers, while being able to capture the most recent data.
*Preferred solutions: scripts or software.*
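For the "last month of data" building block, the simplest script-based approach is a filtered copy over a linked server (a sketch with hypothetical server, database, table and column names; SharePoint content databases would still need the seeded reference rows mentioned above to keep their integrity):

    -- Sketch: pull only the last month of rows from production into staging/dev.
    INSERT INTO StagingDb.dbo.Orders (OrderId, CustomerId, CreatedAt)
    SELECT OrderId, CustomerId, CreatedAt
    FROM [ProdServer].[ProdDb].[dbo].[Orders]
    WHERE CreatedAt >= DATEADD(MONTH, -1, GETUTCDATE());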
Christopher
(31 rep)
Jan 9, 2015, 11:36 PM
• Last activity: Jan 20, 2015, 05:45 PM
Showing page 1 of 20 total questions