
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

5 votes
1 answer
1253 views
Handling SQL updates for production deployments
I've been researching this for quite a long time now (about two months) and haven't found a good approach, so I'll ask you experts out there in the hope of some enlightenment. I run a pretty traditional LAMP web application in the cloud. The source code is version-controlled with Git, as is the database schema (split SQL files for structure and default data). Deploying a new environment, development or production, is quite a joy. Dev environments are deployed with Vagrant plus a nice shell provisioning script I wrote. Production environments are deployed using a subset of the Vagrant provisioning script, which is also very easy to do.

When the development cycle imposes a change to the SQL schema (structure or data), things start to break. In that case, developers change the core SQL scripts so all the changes get into the VCS, which is totally fine for brand-new environments. The problem lies with already-deployed production environments: for those, I first have to manually gather all the patches for the SQL changes since the revision actually running on the production servers, and then apply them manually. This takes a long time and is a fragile, error-prone task.

Initially, I thought I could move the SQL changes out of the core SQL files into smaller subsets, leaving the core SQL files as a starting point. When there is something to change in the SQL structure, I could just tell devs to create a new SQL file with only the changes from that development cycle. So, for instance:

- structure_begin.sql: the SQL schema as it is now.
- structure_devcycle1.sql: first set of changes to the structure
- structure_devcycle2.sql: second set of changes to the structure, and so forth...

Then, by using Git hooks, I could selectively and automatically deploy them on production. But I don't know if this is a good approach. And here I can ask:

1. Has somebody out there solved this puzzle?
2. What are the best practices for release management and deployment automation for SQL changes in production environments?
3. Are there any open source tools that can help with this process? (I'm aware of Sqitch, but I'm not quite sure it's the right one for the job.)
denydias (151 rep)
Nov 14, 2014, 11:29 PM • Last activity: Jun 24, 2025, 07:44 AM
0 votes
2 answers
750 views
Database Change Management
What are ways in which teams handle database change management? I have a team of 8 developers making database changes concurrently. Each developer is responsible for updating their changes in what we consider a "poor man's database change log" which is essentially a shared file referencing a script. Sometimes the scripts are database alters. Other times the script re-creates a view due to a change (which may have also been changed by another developer later on). It always happens that scripts are placed out-of-order, listed in duplicate, etc. My gut tells me we are doing this wrong. How can we do this better? If it matters, we are using Postgres + git.
Jordan Parmer (153 rep)
Feb 4, 2015, 06:08 AM • Last activity: Nov 19, 2017, 09:57 AM
0 votes
2 answers
229 views
How to transfer DDL changes between different stages?
What technical solutions are available to track DDL changes and transfer them between different stages (e.g. DEV -> QA -> PROD) on PostgreSQL databases? Is there a common mechanism for doing that?
JanDotNet (101 rep)
Jul 23, 2016, 07:39 PM • Last activity: Jul 25, 2016, 02:21 PM
1 vote
0 answers
49 views
How do I organize database initialization so it can be executed in a fixed order?
I am trying to update an established database system to leverage change management. The main point is to make the development and production environments follow the same process. I need help understanding how to group database elements into change sets, and which order to apply them in. In my current design, change sets are grouped into roles, schemas, objects, and grants. objects is a catch-all for tables, indexes, sequences, views, etc. Here is a sample directory layout:

- roles/DATE-admin
- roles/DATE-appA
- roles/DATE-appB
- schemas/DATE-schemaA
- schemas/DATE-schemaB
- objects/DATE-foo
- objects/DATE-bar
- grants/DATE

My current design applies any outstanding change set from roles, then schemas, then objects, then grants. The change sets which have not been applied from each group would be applied to the target database, and a helper script would be supplied to engineers to assist database setup. Liquibase tracks applied changes for me. (In production, roles would be managed via control panel, but they exist here for developer convenience.) Is there a better grouping and order of operations that I can use to future-proof this design? Is this rigid order of operations reliably scriptable?
Chris Betti (487 rep)
Jun 3, 2016, 12:00 AM • Last activity: Jun 5, 2016, 08:09 PM
0 votes
1 answer
1299 views
How to test mysql for changes
I'm an OCD person, and I think I accidentally dropped something in my MySQL server. The problem is I have around 50 databases, and I cannot manually check everything :( Is there a way to check for these kinds of changes? I tried looking at timestamps in /var/lib/mysql, but it didn't help; I was looking at directory timestamps. I also ran:

SELECT table_schema, table_name, update_time
FROM information_schema.tables
WHERE update_time > (NOW() - INTERVAL 1 DAY);

but it only returned tables changed in the last few seconds.
user21227
Jun 30, 2015, 06:27 PM • Last activity: Jun 30, 2015, 07:48 PM
2 votes
1 answer
60 views
SELECTs in Database Change Log and Source Control
We are overhauling the way we store our database in source control and keep a change log of it. I was reading the following article: http://thedailywtf.com/articles/Database-Changes-Done-Right, and in the short section "The Taxonomy of Database Scripts" it describes three types of scripts (QUERY, OBJECT, and CHANGE). I like the idea of generalizing scripts into these three categories, but I'm wondering about the QUERY type.

Questions:

- Why would someone want to put a SELECT statement into source control outside of an object?
- The database will change afterwards and make the QUERY script unusable; what then?
- The data may change, returning a different result set, which would defeat the purpose of source control; what then?
- Would the original result set have to be saved to solve the 2nd and 3rd issues?
- What is an example of a SELECT statement that might be put into source control?
- Wouldn't an INSERT statement work better, storing the results in a table as in baselines?

I just can't see the purpose of storing SELECT statements in source control. If there is a purpose, could someone please answer the above questions and maybe state the pros and cons of storing a SELECT statement in source control?
Gander7 (197 rep)
Nov 27, 2014, 01:39 AM • Last activity: May 19, 2015, 05:36 PM
7 votes
3 answers
1422 views
Best practices for change management with indexes
Our IT shop is just starting to build a group of DBAs. All of us (myself included) have come over from the application development/architecture world, so the DBA world is still fairly new to us. Along with building a DBA group, we are looking to build change management procedures and processes (hopefully based on best practices) for when we need to move changes. I've found the following post, which is helpful mostly for trigger, stored procedure, and/or DDL changes. But it doesn't necessarily address indexes or vendor databases.

We have a mix of both our own and vendor databases. In our case some of the vendors (though not all) are working with our company to build the database(s) and applications. We are in the process of performance testing our applications now before they "go live". Thus we are analyzing indexes (or the lack thereof) pretty heavily. As we come across indexes that we feel should be made, how do we best deal with change management with regard to these, both for our own databases as well as for any vendors? What do you do in your shop? I'm worried less about tools than about the process.

> **EDIT:** So far, I'm appreciating the feedback, comments, and answers for this question. I have noticed that some of the answers are a bit tool specific. I'm looking for more "agnostic" practices, if that can be had.
>
> However, if agnostic is not possible, then for tool sets, we use IBM DB2 LUW (and that actually on AIX) mostly. We have some DB2 on Windows and DB2 for i (IBM's i5/OS), but we are mostly AIX DB2. We do use source control, specifically Subversion.
>
> Again, looking for general best practices, but above is what we use that would be vendor specific.
>
> **EDIT:** ***Current Decision:*** We intend to track our reasoning as well as our changes. So we are going to open an issue in our issue-tracking software (which in our case is JIRA). Now we can add in documentation as to what priority the change has, data that backs up what the change should be, the change itself, and the results of the change from another environment where the change was tested.
>
> We then also intend to keep track of our changes in scripts in SVN (much like was suggested below). This way we can track what version of what exists where. This can be recorded in our JIRA issue (and in any other auditing software we use, i.e. pasted links). We can know with more certainty what change went to what environment and why. We can then also track whether the index was something we added beyond the vendor's implementation or ahead of their implementation, etc.
Chris Aldrich (4916 rep)
Mar 2, 2012, 03:23 PM • Last activity: Dec 31, 2014, 12:40 PM
1 vote
0 answers
64 views
SQL Server Comparison
I am merging two different versions of a database. I want to add some specific fields from the source database to the destination, so I used Visual Studio to do that. It does the job quite well, but only at the table level: it checks each table, and if the table is not available in the destination, it creates it. But my problem is this: in the old database, a specific table has a Name column of type NVARCHAR(50), while in the new database the same table has the Name column as NVARCHAR(100). Now I want to ignore the column-level comparison only for this specific table field. Visual Studio allows me to do this at the table level only, not at the column level. Can anyone tell me how this would be possible?
Kirk (111 rep)
Nov 4, 2014, 08:27 AM • Last activity: Nov 4, 2014, 01:27 PM
3 votes
2 answers
6363 views
Changing the SQL Server service account - any additional considerations?
I have inherited a few instances that are running under LocalSystem. I need to change this to a domain account as we are changing our backup strategy to go to a network share and then to tape and I need to be able to assign permissions. I know that the recommended way to change the service account is to use SQL Server Configuration Manager. However, is there anything else I need to be cognizant of? How can I be sure that directory permissions will suffice for the new service account? Will Configuration Manager assign proper permissions to folders that contain the data/log/binaries, regardless of where they are?
SomeGuy (2053 rep)
Mar 6, 2014, 08:59 PM • Last activity: May 10, 2014, 07:29 AM
1 vote
1 answer
97 views
Changes to main Database during Transactional Replication
Can you please advise how I can keep the slave database up to date while it's in transactional replication with the master? I mean, what if I change indexes, columns, tables, or something else in the master database? What should I do in this case? Should I make these changes to both databases simultaneously, or is the transactional article enough to handle it?
Johnny_D (245 rep)
Dec 11, 2013, 02:01 PM • Last activity: Dec 11, 2013, 05:46 PM
1 vote
1 answer
1078 views
When using a master plus multiple slaves, how do I change over from one server to a brand new one with minimal downtime?
In my particular case, I'm using Heroku Postgres (repackaged Amazon EC2 instances running PostgreSQL), but platform-agnostic answers are also good! [Heroku's instructions](https://devcenter.heroku.com/articles/fast-database-changeovers) describe the process for transferring over a *single* database (i.e. without slaves):

* You have an existing server, A.
* Create server B as a follower of A: `$ heroku addons:add heroku-postgresql:ronin --follow HEROKU_POSTGRESQL_AUBURN ... Attached as HEROKU_POSTGRESQL_BLUE`
* Wait for B to be up-to-date with A.
* Disable the application so no new writes are made to A.
* Wait for B to fully catch up.
* Sever the connection between B and A, so B is an independent db: `$ heroku pg:unfollow HEROKU_POSTGRESQL_BLUE`
* Reconfigure the app to use B as your database: `$ heroku pg:promote HEROKU_POSTGRESQL_BLUE`
* Re-enable the application.
* Destroy A: `$ heroku addons:remove HEROKU_POSTGRESQL_AUBURN`

This all works fine if you have a single database, but what happens when you have *slaves*? Heroku's docs only say that you'll have to re-create the slaves, without going into further depth. (N.b. once you've done the process above, you can't immediately create the new slaves. The server needs a few hours before it can be forked/followed.)

Now, if your app can handle ditching the slaves entirely, then you can do the following:

* Disable the slaves.
* Change over the master server.
* Create new slaves.
* Enable the new slaves.

But if reducing your server farm to just a single server is going to cause the app to crash and burn, you need a way to keep your slaves up and running throughout this whole process! My first thought was to create a server C, following B, even while B is still following A. When switching over, B becomes the new master and C the new slave. But, alas, Heroku (and presumably also PostgreSQL) doesn't support followers-following-followers.

**QUESTION:** How do you change over from one whole master/slave set to another set?
How do you change over your master without having to run without slaves for an extended period of time? And just in case it's not clear: I'm not asking about *failing over* to an existing slave. I want to upgrade all of the servers to a different-sized EC2 instance, and I want to maintain my master/slave setup throughout the process.
Noach (113 rep)
Oct 14, 2012, 01:53 PM • Last activity: Nov 29, 2013, 08:42 AM
4 votes
1 answer
241 views
What mechanisms do you use to notify third parties of future schema changes?
How do you notify third parties of future SQL Server schema changes other than through external documentation? I've started implementing a deprecation warning extended property on objects that will be going away in the future but wanted to see if anyone had success through another approach.
Michael (43 rep)
Aug 30, 2012, 08:18 AM • Last activity: Sep 24, 2012, 07:16 PM
5 votes
3 answers
4771 views
Storing Changes To Records
Everything I am talking about relates to relational databases, specifically MySQL. I have a number of tables in a database, and for a moderate number of them, I am going to want to store a history of each record's values when it changes. I have seen this done in a couple of different ways:

1. One Table/One Field - There is one table that stores the history for all the tables that need history storage. All changes are recorded in one field as a text data type.
2. Table Per Table/One Field - Same as above except each table has its own history table (i.e. Projects/ProjectsHistory, Issues/IssuesHistory, etc.).
3. Table Per Table/Field Per Field - Like the above in that each table has its own history table, but the history table has pretty much the same definition as the regular table plus additional history-related fields (updateDatetime, updateUserId, etc.).

What are some of the advantages and disadvantages of these different methods of storing record history? Are there other methods that I have not thought of?
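Option 3 is commonly implemented with an AFTER UPDATE trigger that copies the old row into the history table. A minimal sketch, using the stdlib `sqlite3` module for portability (MySQL triggers use the same OLD-row idea with slightly different syntax); the `projects` table and its columns are illustrative:

```python
import sqlite3

def create_with_history(conn: sqlite3.Connection) -> None:
    """Table Per Table/Field Per Field: projects_history mirrors projects,
    plus an update_datetime audit field. The trigger snapshots the OLD row
    on every UPDATE."""
    conn.executescript("""
        CREATE TABLE projects (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE projects_history (
            id INTEGER,
            name TEXT,
            update_datetime TEXT DEFAULT CURRENT_TIMESTAMP
        );
        CREATE TRIGGER projects_audit AFTER UPDATE ON projects
        BEGIN
            INSERT INTO projects_history (id, name) VALUES (OLD.id, OLD.name);
        END;
    """)
```

The main trade-off versus options 1 and 2 is schema maintenance: every ALTER on the base table must be mirrored on its history table, but in exchange the history stays queryable with ordinary SQL instead of being opaque text blobs.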
ryanzec (505 rep)
Mar 27, 2012, 05:40 PM • Last activity: Mar 27, 2012, 08:41 PM