
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
505 views
postgresql materialized views vs versioning
I ran into a pickle when dealing with a question from one of our devs concerning PostgreSQL and materialized views:
Can I refresh a materialized view with my application user? According to the docs (https://www.postgresql.org/docs/11/sql-creatematerializedview.html) it is simple enough, BUT you must be the owner of said view. We have isolated ACLs for the roles we run migrations under (DDL) and the "application" roles, which operate with limited privileges. So I can't create the view during a migration, because the application needs to refresh it once in a while, and I can't create it at application runtime, because the application role doesn't have the CREATE privilege. I think someone must have come across a similar issue, but search engines are not helping at all. My last hope is making the application user a little more privileged (with CREATE on the schema granted).
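One workaround that fits these constraints, sketched below with hypothetical schema, view, and role names: let the DDL/migration role own the view and a SECURITY DEFINER wrapper function, and grant the application role EXECUTE on the wrapper, so the application never needs ownership or CREATE on the schema.
{sql}
-- Assumes reporting.mv_stats already exists and is owned by the DDL/migration role.
-- SECURITY DEFINER makes the function run with its owner's privileges.
CREATE OR REPLACE FUNCTION reporting.refresh_mv_stats()
RETURNS void
LANGUAGE plpgsql
SECURITY DEFINER
SET search_path = reporting, pg_temp
AS $$
BEGIN
    REFRESH MATERIALIZED VIEW reporting.mv_stats;
END;
$$;

GRANT EXECUTE ON FUNCTION reporting.refresh_mv_stats() TO app_role;

-- The application role then runs: SELECT reporting.refresh_mv_stats();
Note that REFRESH MATERIALIZED VIEW CONCURRENTLY cannot be used inside such a function, because it cannot run inside a transaction block.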
krszp (11 rep)
Feb 12, 2019, 04:53 PM • Last activity: Apr 21, 2025, 03:04 AM
1 vote
1 answer
209 views
is there a way I can see if a specific patch has been applied to my sql server?
How can I see if this patch (https://dba.stackexchange.com/a/115799/22336) has been applied to my SQL Server instance? I know about @@VERSION, but that shows me the latest patch applied; what about the other patches applied before that one?
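For the build you are on, a sketch of the usual starting point; ProductUpdateLevel and ProductUpdateReference only exist on newer builds (SERVERPROPERTY simply returns NULL for properties it does not recognise):
{sql}
SELECT
    SERVERPROPERTY('ProductVersion')         AS product_version,   -- current build number
    SERVERPROPERTY('ProductLevel')           AS product_level,      -- RTM / SPn
    SERVERPROPERTY('ProductUpdateLevel')     AS update_level,       -- CU designation (newer builds only)
    SERVERPROPERTY('ProductUpdateReference') AS update_kb;          -- KB article of that update (newer builds only)
Because service packs and cumulative updates are cumulative, comparing the current build number against a build list (such as the one in the linked answer) tells you which earlier updates are included; patches that were installed and later superseded are typically only visible in Windows' installed-updates list or the SQL Server setup logs.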
Marcello Miorelli (17274 rep)
Jan 31, 2025, 05:22 PM • Last activity: Feb 1, 2025, 02:26 PM
1 vote
0 answers
482 views
Liquibase on Oracle SQLcl - table and index drops not being propagated
I've been using Oracle's SQLcl, specifically the lb command for Liquibase, for the last day or so to version my development, staging and production databases, which run on Oracle's Autonomous Data Warehouse (version 19). It works fine for the tutorial-style changes, like the ones Jeff Smith shows in his video on it, although the syntax of the commands has changed a bit. I hit a problem when I want to drop an index: if I just execute DROP INDEX EXAMPLE_IDX in my development environment and then run lb genschema, the index is not dropped on my staging environment when I run lb update -changelog controller.xml. I think this is because lb genschema generates a fresh Liquibase changelog every time, which overwrites the previous changelog. I see the same behaviour when trying to drop tables, and I'm not sure how I'm supposed to propagate DROP statements on tables and indexes with the lb tool. Here's what I'm doing (users/passwords etc. all changed). Open SQLcl to create the staging database's structure (v1):
{plain}
set cloudconfig wallets/dev.zip
connect user/password@dev_medium
lb genschema

set cloudconfig wallets/staging.zip
connect user/password@staging_medium
lb update -changelog controller.xml
Open SQL Developer on the dev database and delete the index.
{sql}
DROP INDEX INDEXNAME;
Open SQLcl to update the changelog and the staging database.
{plain}
set cloudconfig wallets/dev.zip
connect user/password@dev_medium
lb genschema

set cloudconfig wallets/staging.zip
connect user/password@staging_medium
lb update -changelog controller.xml
After these steps the dropped tables and indexes are still present on staging, which means the database doesn't match what it should be! If anyone has knowledge or experience of the lb tool or SQLcl, or can point me to some best practices, that would be amazing.
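One way to handle this, sketched here rather than taken from the SQLcl docs: lb genschema only snapshots the current state, so destructive changes are usually written by hand as incremental changesets, for example in Liquibase's formatted-SQL style (index and table names below are hypothetical), and applied by lb update on every environment.
{sql}
--liquibase formatted sql

--changeset ash:drop-example-idx
DROP INDEX EXAMPLE_IDX;
--rollback CREATE INDEX EXAMPLE_IDX ON EXAMPLE_TABLE (EXAMPLE_COL);
The file then needs an include entry in controller.xml (or in a changelog referenced by it) so it runs after the generated changesets.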
Ash Oldershaw (121 rep)
Jul 1, 2020, 01:36 PM • Last activity: Sep 26, 2023, 03:45 PM
3 votes
1 answer
1696 views
pg_dump - how to split it into directories and files?
I am looking for a way to dump a PostgreSQL 12 database model into a directory/file structure, instead of a single file, for versioning purposes. I found this old thread, which mentions this exact case and describes a --split flag to be used with pg_dump. However, this option does not exist in the current pg_dump for PostgreSQL 12. I also tried the --format=directory option, but that's not it. How do I achieve this effect?
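Stock pg_dump has no --split option, so the usual route is a small wrapper script that either splits the pg_dump output or reads definitions from the system catalogs and writes one file per object. A sketch of the catalog side, shown here for views (pg_get_functiondef and pg_get_indexdef cover functions and indexes similarly):
{sql}
SELECT n.nspname                   AS schema_name,
       c.relname                   AS view_name,
       pg_get_viewdef(c.oid, true) AS definition   -- the view's SELECT body
FROM   pg_class c
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  c.relkind = 'v'
AND    n.nspname NOT IN ('pg_catalog', 'information_schema');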
Leon Powałka (161 rep)
Mar 24, 2021, 12:07 PM • Last activity: Sep 11, 2023, 12:08 AM
1 vote
1 answer
1823 views
How to deal with fact table data that needs to be version controlled?
I have the following simplified sport_match 'fact' table:

| match_id | tournament_id | player_id_p1 | player_id_p2 | p1_final_score | p2_final_score |
|----------|---------------|--------------|--------------|----------------|----------------|
| 1 | 1 | 1 | 2 | 1 | 0 |
| 2 | 1 | 1 | 2 | 3 | 1 |
| 3 | 2 | 3 | 2 | 2 | 3 |
| 4 | 2 | 3 | 2 | 4 | 0 |

The table is updated from an API that issues INSERT, UPDATE and DELETE SQL instructions via text files. Occasionally there is a mistake in the scores and because I need to be able to run historical analyses from a specific point in time I need to capture the incorrect entry and the correct entry. For this reason I started to look at adopting a Slowly Changing Dimension Type 2 method and translating all the API instructions to INSERT. This would give me a table that looked like this:

| match_key | match_id | tournament_id | player_id_p1 | player_id_p2 | p1_final_score | p2_final_score | start_date | current_flag |
|-----------|----------|---------------|--------------|--------------|----------------|----------------|------------|--------------|
| 1 | 1 | 1 | 1 | 2 | 1 | 0 | 01/01/2000 00:00 | Y |
| 2 | 2 | 1 | 1 | 2 | 3 | 1 | 02/01/2000 00:00 | Y |
| 3 | 3 | 2 | 3 | 2 | 2 | 3 | 03/01/2000 00:00 | Y |
| 4 | 4 | 2 | 3 | 2 | 4 | 0 | 04/01/2000 00:00 | N |
| 5 | 4 | 2 | 3 | 2 | 4 | 1 | 04/01/2000 00:01 | Y |

However, I realised I was applying a 'dimension' principle to a 'fact' table. Is this a viable approach or should I be looking at a different design?
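For what it's worth, the Type 2 style table above still supports both current-state and as-of queries fairly directly; a sketch using the column names from the tables above (the timestamp literal syntax varies by DBMS):
{sql}
-- Current state: one row per match.
SELECT *
FROM sport_match
WHERE current_flag = 'Y';

-- As-of state: for each match_id, the newest version on or before the chosen point in time.
SELECT *
FROM (
    SELECT s.*,
           ROW_NUMBER() OVER (PARTITION BY match_id
                              ORDER BY start_date DESC) AS rn
    FROM sport_match s
    WHERE start_date <= TIMESTAMP '2000-01-04 00:00:30'
) AS v
WHERE rn = 1;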
Jossy (83 rep)
Jun 6, 2022, 08:28 PM • Last activity: Jun 6, 2022, 09:04 PM
0 votes
1 answer
43 views
Do I have the right Slowly Changing Dimensions type for my version controlled tennis match database?
I'm trying to version control my database using the principles of Slowly Changing Dimensions. I've opted to use Type 2 with a generation start and end column instead of datetimes. In a simplified example I have three tables:

**player:**

| player_key | player_id | country_id | start | end |
|------------|-----------|------------|-------|-----|
| 1 | 1 | 1 | 1 | 2 |
| 2 | 2 | 2 | 1 | |
| 3 | 1 | 3 | 2 | |

**tournament:**

| tournament_key | tournament_id | surface_id | start | end |
|----------------|---------------|------------|-------|-----|
| 1 | 1 | 1 | 1 | 2 |
| 2 | 1 | 2 | 2 | |

**tennis_match:**

| match_id | tournament_key | player_key_p1 | player_key_p2 | start | end |
|----------|----------------|---------------|---------------|-------|-----|
| 1 | 1 | 1 | 2 | 1 | |
| 2 | 1 | 1 | 2 | 1 | |
| 3 | 2 | 3 | 2 | 2 | |
| 4 | 2 | 3 | 2 | 2 | |

I now want to extract all the matches and their respective tournament and player data to run some analysis on it. If I run the following query:
SELECT 
    match_id,
    tournament_key,
    player_key_p1,
    player_key_p2,
    t.surface_id,
    p1.country_id AS p1_country_id,
    p2.country_id AS p2_country_id
FROM
    tennis_match AS m
        JOIN
    player AS p1 ON p1.player_key = m.player_key_p1
        JOIN
    player AS p2 ON p2.player_key = m.player_key_p2
        JOIN
    tournament AS t ON t.tournament_key = m.tournament_key
This gives me:

| match_id | tournament_key | player_key_p1 | player_key_p2 | surface_id | p1_country_id | p2_country_id |
|----------|----------------|---------------|---------------|------------|---------------|---------------|
| 1 | 1 | 1 | 2 | 1 | 1 | 2 |
| 2 | 1 | 1 | 2 | 1 | 1 | 2 |
| 3 | 2 | 3 | 2 | 2 | 3 | 2 |
| 4 | 2 | 3 | 2 | 2 | 3 | 2 |

The issue I'm facing is that the surface_id and p1_country_id change part way through the matches because, well, they changed part way through the matches. However, for the purposes of my analysis, at match_id = 4 I should be using the values of the latest versions of player and tournament:

| match_id | tournament_key | player_key_p1 | player_key_p2 | surface_id | p1_country_id | p2_country_id |
|----------|----------------|---------------|---------------|------------|---------------|---------------|
| 1 | 1 | 1 | 3 | 2 | 3 | 2 |
| 2 | 1 | 1 | 3 | 2 | 3 | 2 |
| 3 | 2 | 2 | 3 | 2 | 3 | 2 |
| 4 | 2 | 2 | 3 | 2 | 3 | 2 |

So I figure that to get the data in the format I need, I'm going to have to write some reasonably complex queries (for me). This has got me questioning whether I have the right structure. If I'd gone for a Type 4 approach then my queries on the non-history tables would be nice and simple. However, if I wanted to run an analysis from a point in the past I'd have to head to the history table, and I reckon I'd have the same challenge as I have now. Plus I'd have the added hassle of managing history tables and having to figure out a solution for deleted records. I did look at Type 6, but it looked like I would need to duplicate the version-controlled columns: one set for the current state and one for the historic state. As some of the version-controlled tables have hundreds of columns this didn't seem like the right approach either, so I didn't review it much further. Finally getting to my question... do I have the right data structure and just need to knuckle down on query writing, or could I implement a better design?
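For reference, one way to express the "use the latest versions" analysis against this structure, kept as a sketch (end is quoted because it is a reserved word in most dialects; the open version of a row is assumed to have end NULL): resolve each stored key back to its natural id, then join to that id's open version.
{sql}
SELECT m.match_id,
       t_cur.surface_id,
       p1_cur.country_id AS p1_country_id,
       p2_cur.country_id AS p2_country_id
FROM tennis_match AS m
JOIN tournament AS t_hist ON t_hist.tournament_key = m.tournament_key
JOIN tournament AS t_cur  ON t_cur.tournament_id = t_hist.tournament_id
                         AND t_cur."end" IS NULL
JOIN player AS p1_hist ON p1_hist.player_key = m.player_key_p1
JOIN player AS p1_cur  ON p1_cur.player_id = p1_hist.player_id
                      AND p1_cur."end" IS NULL
JOIN player AS p2_hist ON p2_hist.player_key = m.player_key_p2
JOIN player AS p2_cur  ON p2_cur.player_id = p2_hist.player_id
                      AND p2_cur."end" IS NULL;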
Jossy (83 rep)
Jun 4, 2022, 09:26 PM • Last activity: Jun 6, 2022, 07:42 PM
2 votes
1 answer
211 views
Version control for Oracle databases
I'm setting up version control for several databases. Some objects, such as functions, procedures and packages, appear in all the databases; that is, each such object has an identical instance in each database. In the current solution, all these objects are simply (re)deployed in every database separately. Surely there must be a better way to make the version control run more automatically by exploiting the fact that all these databases carry copies of the same objects. I will be very thankful for any clue!
hajduk (21 rep)
Feb 19, 2022, 12:10 PM • Last activity: Feb 20, 2022, 03:58 PM
2 votes
2 answers
7860 views
Can I use GIT directly within Oracle
I'm familiar with Git, I use it every day for web projects. But I'm trying to use it with an Oracle DB, mainly to track different procedures and packages. This Oracle DB lives on a different server, not my local machine, and I am not the DBA. Is it still possible for me to configure a solution to track this information, and if so, how? I wouldn't have the first clue as to what folder I would initialize as the repo.
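One common approach, sketched here with a hypothetical schema name: pull the source out of the data dictionary over an ordinary client connection, write each object to its own file in a local working copy, and commit those files, so no Git repository ever needs to live on the database server.
{sql}
-- One row per line of source; write each NAME/TYPE pair to its own file.
SELECT name, type, line, text
FROM   all_source
WHERE  owner = 'APP_OWNER'   -- hypothetical schema
AND    type IN ('PROCEDURE', 'FUNCTION', 'PACKAGE', 'PACKAGE BODY')
ORDER  BY name, type, line;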
Jonnny (123 rep)
Nov 17, 2016, 06:20 PM • Last activity: Nov 9, 2021, 07:02 AM
1 vote
3 answers
3491 views
DB2, How to save all stored procedures as files
How does one extract all stored procedures from a DB2 schema as individual files (without using Data Studio, so it can be scripted)? I need them all so I can upload them to source control. I have found a way to do it via the command line if one has access to the server, but I do not have server access.
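On Db2 for LUW, one catalog-based sketch (schema name hypothetical): for SQL-bodied routines the full CREATE text is stored in the catalog, so it can be read over a normal client connection and written out one file per routine by whatever scripting tool is available:
{sql}
SELECT routineschema,
       routinename,
       text                              -- CREATE text, populated for SQL routines
FROM   syscat.routines
WHERE  routineschema = 'APPSCHEMA'       -- hypothetical schema
AND    routinetype   = 'P'               -- 'P' = procedure
ORDER  BY routinename;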
Jake v1 (73 rep)
Jan 14, 2021, 01:58 PM • Last activity: Jul 11, 2021, 06:04 AM
6 votes
3 answers
2582 views
Building a branched versioning model for relational databases
I am a database designer, and in my current project I'm implementing the versioning capabilities required to concurrently edit rows of data in an RDBMS. The project requirements say that data editing sessions can go on for several hours or days before a commit is performed. Also, conflicts arising when different users modify the same data simultaneously should be handled, with the possibility of manual and semi-automatic resolution. In other words, the desired editing workflow is similar to the one used in document-oriented version control systems such as SVN or Git. Therefore, traditional OLTP approaches and conflict resolution strategies (MVCC, optimistic/pessimistic locks) don't satisfy my constraints. I have looked at existing tools that offer branched version history and a multiversion workflow:

- ArcSDE - ESRI's ArcGIS supports versioning for geodatabases through the ArcSDE data layer;
- Oracle Workspace Manager - a feature of Oracle Database providing a high degree of version isolation and data history management;
- SQL:2011 temporal features, including valid-time and transaction-time support.

SQL:2011 doesn't solve my problem, as it offers support for a "linear" history of edits, not the branched one I'm looking for. The solutions from ESRI and Oracle are good candidates, but I'm disappointed that both have vendor-specific interfaces for manipulating versions. It seems that at this moment nobody offers an industry-standard solution for branched versioning of relational data (as SQL:2011 does for temporal tables and linear version history). As a newcomer to database research, I want to understand:

- Is the relational database community interested in developing standard models of branched data versioning, and would any contribution or research in this area be valuable? (For example, standardization in the form of language improvements, as was done for the temporal features in SQL:2011.)
- Do developers and database designers lack database-independent open-source middleware (similar to ArcSDE) that supports branched version management of relational data, or would it be better to introduce such features in the RDBMS itself?

I think I can try to dig deeper and propose some standard model or sublanguage to deal with Git-like versioning, but I don't know where to start.
Nipheris (161 rep)
Aug 17, 2014, 11:14 PM • Last activity: Sep 23, 2020, 07:30 AM
1 vote
0 answers
197 views
Integrating Flyway and Github
I was wondering if any of you can share some experience regarding integrating Flyway and Git. We're currently developing a project, and each sprint we need to make database changes for new features, of course. Once every two weeks we merge the changes made in the dev DB into the prod DB (the code resides on different branches in Git and we merge every two weeks). We want to start using Flyway for DB version control. Right now the idea is to create SQL scripts on the dev branch when we need a change, and run migrate on dev for every change we need. Then, every two weeks when the merge happens, the SQL scripts will be merged to the prod branch and then run (using flyway migrate) via a pipeline. Does that sound like a good approach? Can anyone share some experience with it or suggest other approaches?
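For reference, a minimal sketch of what one such versioned migration might look like (the file name, table and column are hypothetical, and the ALTER syntax should be adjusted to your DBMS; Flyway orders files by the version prefix, which is why the same scripts can be merged to the prod branch and replayed unchanged):
{sql}
-- db/migration/V2020.01.12.01__add_feature_flag.sql   (hypothetical file name)
ALTER TABLE app_settings
    ADD COLUMN feature_flag_enabled BOOLEAN NOT NULL DEFAULT FALSE;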
browsingThrough91 (11 rep)
Jan 12, 2020, 06:59 AM
9 votes
4 answers
4674 views
How to version control PostgreSQL schema with comments?
I version control most of my work with Git: code, documentation, system configuration. I am able to do that because all my valuable work is stored as text files. I have also been writing and dealing with a lot of SQL schema for our Postgres database. The schema includes views and SQL functions, and we will be writing Postgres functions in the R programming language (via PL/R). I was trying to copy and paste the chunks of schema that I and my collaborators write, but I forget to do that. The copy-and-paste routine is repetitive and error-prone. The pg_dump / pg_restore method will not work because it loses comments. Ideally I would like some way to extract my current schema into a file or files, preserving the comments, so that I can do version control. What is the best practice for version controlling a schema with comments?
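If the dump route really does drop comments in a given setup, they can also be pulled straight from the catalogs and kept in version-controlled files next to the DDL; a sketch for table comments (columns, views and functions have analogous queries via col_description and obj_description):
{sql}
SELECT n.nspname                          AS schema_name,
       c.relname                          AS table_name,
       obj_description(c.oid, 'pg_class') AS table_comment
FROM   pg_class c
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  c.relkind = 'r'
AND    n.nspname NOT IN ('pg_catalog', 'information_schema')
AND    obj_description(c.oid, 'pg_class') IS NOT NULL;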
Aleksandr Levchuk (1227 rep)
Feb 27, 2011, 07:23 AM • Last activity: Jan 9, 2019, 12:37 AM
8 votes
3 answers
11751 views
How do I save functions to individual files in PostgreSQL?
I maintain a legacy application that uses a PostgreSQL database. The application is heavily dependent on stored procedures (aka functions). I want to save these functions to files named after the function name so I can then use a VCS (version control system). I know that I can save the code with the ALTER FUNCTION using PgAdmin but this only allows me to save one function at a time. I am looking for a way to save all the functions automatically. Is there any way to script this task?
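A sketch of the usual scripted route on modern PostgreSQL (11 or later; older releases filter on pg_proc.proisagg instead of prokind): the catalogs can return one row per function with its full CREATE statement and a suggested file name, and a small wrapper script then writes each row to its own file. Overloaded functions share a name, so the argument list may need to be folded into the file name too.
{sql}
SELECT format('%s.%s.sql', n.nspname, p.proname) AS suggested_filename,
       pg_get_functiondef(p.oid)                 AS ddl
FROM   pg_proc p
JOIN   pg_namespace n ON n.oid = p.pronamespace
WHERE  n.nspname NOT IN ('pg_catalog', 'information_schema')
AND    p.prokind = 'f';   -- plain functions only; aggregates need different handling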
Fernando (221 rep)
Sep 17, 2012, 11:51 PM • Last activity: Dec 19, 2018, 08:03 AM
0 votes
1 answer
382 views
MySQL MSI appears to be installing wrong (latest) MySQL version
For compatibility reasons with other software, I'm trying to install MySQL 5.6.40 on Windows 10. Any version from 5.7 onward is incompatible with my other programs. I've been trying to use the MSI installer, but no matter what version of 5.6 I try to install, the only server option I get is 8.0. The installer has the correct version in its name (e.g. mysql-installer-web-community-5.6.41.0.msi), but when I run it, only version 8.0.11 appears for installation. I've gone into the Add screen, and there are no other versions showing under Server. There doesn't seem to be any way to point to a different version during installation. I get the same problem whether I use the normal download site or the archive site. I would prefer to use the MSI installer rather than downloading the zip files, as I am not confident installing a database from a zip. I've searched to see whether this has been reported by others and haven't found any comments on it. I would appreciate any help.
Michelle (103 rep)
Aug 30, 2018, 02:07 AM • Last activity: Aug 30, 2018, 09:38 AM
4 votes
1 answer
3610 views
Howto create useful dacpac versioning along with SSDT deployment?
It took me almost a day of going through lots of articles and blogs to get check-in-driven continuous integration with SQL Server Database Projects (SSDT) working using TFS and msbuild. Now that this is working properly, I would like to introduce versioning. The database project's dacpac properties allow us to fill in a version in the format "x.x.x.x" that is published with the dacpac to SQL Server and can be queried there using: select * from msdb.dbo.sysdac_instances_internal. This is nice, but I wonder how to create something more useful and practical than a manually edited version number. Of course the version number will never change as long as it's up to developers to adjust it in the database project's properties... Remembering how the assembly version works in a C# project, for example, there is some magic increment possible by defining something like "1.2.*.*", including the build number. How can this be done?
Magier (4827 rep)
Mar 2, 2016, 01:08 PM • Last activity: Apr 20, 2018, 11:19 AM
1 vote
4 answers
1336 views
SQL Server Agent Job - Adding Version Number
I would like to know how to add a version number to a SQL Server Agent job without using the description field, and with it being an attribute of the job. A similar question was asked at StackOverflow, [Sql Server Agent Job - Adding Version Number](https://stackoverflow.com/questions/15411316/sql-server-agent-job-adding-version-number/48627860#48627860), but that question did not include the criterion of not using the description field (*the accepted answer implies the description field was a viable solution*). I have a script to update jobs, and I want to capture version information without overwriting any existing descriptions, while being able to search for the current version in a single field (*without combining existing/new comments with version info*). I can use [sp_update_job](https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-update-job-transact-sql) to update most fields, but the only one not in use that will take strings is @category_name, and it is limited to values in sp_help_category. (*Edit the following day:*) Categories can be added with [sp_add_category](https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-add-category-transact-sql), but that presents the value in the GUI drop-down as available for all jobs. Possible, but suboptimal. I could use [sp_update_jobstep](https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-update-jobstep-transact-sql), but this would be suboptimal as steps are parts of jobs. I don't see any reasonable solutions in there. I did consider creating a step named "Version 1.0.0" or similar, but that was wrong on many levels. **EDIT** After much research and testing it became clear that this was the optimal approach. You can **not** use sp_addextendedproperty to hold a version number in a job. Doing so would require changing the value of 'level1_object_type' to 'JOB', which is not an option. [Source](https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-addextendedproperty-transact-sql) I can use a table to list modifications. But that would not be an attribute of the job, and is subject to human error during insertion. Possibly I might use a table where a hash of the command field (@command) and/or schedule is used as a unique identifier. This would/should be unique to the job version; while not a direct attribute, it would be a derived attribute. The solution should apply to SQL Server 2008 R2 and later by preference, and SQL Server 2012 and later by requirement.
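A sketch of that derived-attribute idea, not a full solution: hash each step's command text so any change to a job's steps changes the hash. SHA2_256 needs SQL Server 2012 or later, and HASHBYTES is limited to 8,000-byte inputs before SQL Server 2016, so very long step commands need extra handling.
{sql}
SELECT j.name      AS job_name,
       s.step_id,
       HASHBYTES('SHA2_256',
                 CONVERT(varbinary(max), s.command)) AS step_command_hash
FROM msdb.dbo.sysjobs     AS j
JOIN msdb.dbo.sysjobsteps AS s ON s.job_id = j.job_id
ORDER BY j.name, s.step_id;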
James Jenkins (6318 rep)
Feb 5, 2018, 06:42 PM • Last activity: Feb 26, 2018, 12:18 PM
68 votes
5 answers
12342 views
How can a group track database schema changes?
What version control methodologies help teams of people track database schema changes?
Toby (1128 rep)
Jan 3, 2011, 08:46 PM • Last activity: Aug 16, 2017, 09:08 AM
33 votes
9 answers
39852 views
How do you version your Oracle database changes?
I am interested to know what methods other people are using to keep track of changes made to the database including table definition changes, new objects, packages changes, etc. Do you use flat files with an external version control system? Triggers? Other software?
Leigh Riffel (23884 rep)
Jan 5, 2011, 03:08 PM • Last activity: Jun 6, 2017, 04:29 AM
2 votes
3 answers
121 views
Profiling feature for SQL Server 2012
I'm using SQL Server 2012 to develop PHP/web applications. On my development machine I also have SQL Server. The PHP codebase is version controlled with git. The various applications we have (and there are more than 30) are in the process of being updated. As this happens, the database structure sometimes changes, and the data (even my local content) changes a lot too. Even as we develop new code, we need to work fixes and amendments into the old code as well. We use branches to develop this within git, and that works well. However, maintaining a set of databases to match these is a serious hassle. If a project connects to three databases and, for example, there are three branches, the development becomes confusing as well as a hassle to set up. So, on to my questions:

* Is there a database version of "profiling"?
* Is it possible to have parallel versions of the databases that can be switched to active/inactive, just as switching a branch separates what amounts to the same code?
* Switching a branch in git is a matter of seconds. Is there a tool to do the same within parallel versions of the current DB structure?

This would save me some serious headaches (and the associated danger of leaving a db-config pointing at the wrong database). Any advice, tips, or pointers on how to approach this problem would be appreciated.
elb98rm (133 rep)
Jan 14, 2014, 02:33 PM • Last activity: Dec 29, 2016, 03:05 PM
1 vote
0 answers
222 views
Unable to deploy single SSIS package into SSISDB when referencing parameters
I'm attempting to deploy a single package into an Integration Services Catalog using the project deployment model in SQL Server 2016. This is giving me some grief, as I've configured environment variables and am referencing these in project variables. It seems the package requires the project in order to validate its variables, and an error is received when attempting to deploy a single package. Any thoughts on whether this is possible? It is looking tougher to control promotions to Production when we have to deploy every package on every occasion.
ozMoses (21 rep)
Oct 20, 2016, 04:37 AM
Showing page 1 of 20 total questions