
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
434 views
Validating SQL Server Change Tracking Synchronization
I have some tables that I need to synchronize from one SQL Server database to another. I'm using SQL Server's "Change Tracking" feature. My program uses Change Tracking to synchronize the latest changes and then checks to verify that the changes were synchronized correctly. The way it goes about this is:

1. It fetches the current change tracking version at the source, together with some statistics about the source table that will later be used to verify that things synced correctly. The query looks like:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ; 
BEGIN TRANSACTION; 
WITH T AS (
    SELECT  [NumericValue] AS [NumericValue]
    FROM [SourceTable]
) 
SELECT 
    -- CONVERT requires a target type; FLOAT is assumed here
    AVG(CONVERT(FLOAT, [NumericValue])) AS Avg, 
    SUM(CONVERT(FLOAT, [NumericValue])) AS Sum,
    COUNT(*) AS Count, 
    CHANGE_TRACKING_CURRENT_VERSION() AS CTVersion 
FROM T; 
COMMIT TRANSACTION;
2. It fetches all changes between the last version it synchronized to and the source version fetched in step 1. The query to fetch the changes in the desired version range looks like:
SELECT SYS_CHANGE_OPERATION, SYS_CHANGE_VERSION, [SYSKEY_PrimaryKey], [PrimaryKey], [NumericValue]
FROM (
    SELECT
        T.*,
        CT.SYS_CHANGE_OPERATION,
        CASE WHEN CT.SYS_CHANGE_OPERATION = 'I' THEN CT.SYS_CHANGE_CREATION_VERSION ELSE CT.SYS_CHANGE_VERSION END AS SYS_CHANGE_VERSION,
        [CT].[PrimaryKey] AS [SYSKEY_PrimaryKey]
    FROM CHANGETABLE(CHANGES [SourceTable], @startVersion) AS CT
        LEFT JOIN [SourceTable] AS T ON T.[PrimaryKey] = CT.[PrimaryKey]
    
    WHERE CT.SYS_CHANGE_OPERATION = 'D' 
        OR (T.[PrimaryKey] = CT.[PrimaryKey]) 
        AND 
            CASE 
                WHEN CT.SYS_CHANGE_OPERATION = 'I' THEN 
                    CT.SYS_CHANGE_CREATION_VERSION 
                ELSE CT.SYS_CHANGE_VERSION 
            END <= @endVersion
) AS [SourceTable]
3. It gets the same statistics from the destination table that it did in step 1. The query looks like:
SELECT 
    AVG(CONVERT(FLOAT, [NumericValue])) AS Avg, 
    SUM(CONVERT(FLOAT, [NumericValue])) AS Sum,
    COUNT(*) AS Count
FROM [DestinationTable]
4. It compares the statistics from steps 1 and 3. If they match, then all is good; if not, then it knows there was a problem with the synchronization.

The problem I'm having is that it intermittently detects that the values don't match on tables that experience very rapid updates. Investigating, I have found no evidence of anything actually going wrong in the synchronization process, but the destination seems to lag slightly behind where it needs to be to match the source. When the source table updates slow down, the destination always catches up and matches the source.

It seems at this point that the problem stems from the version fetched in step 1 not actually matching the statistics that are fetched at the same time. This seems to indicate that the increment of the change tracking version is not performed atomically together with the actual changes to the table. Unfortunately, I haven't found anything in the documentation that explicitly confirms or denies this theory, which makes it difficult to know whether there's something else I need to be looking for. Additionally, if it turns out to be true that the change tracking version and the table state aren't guaranteed to be consistent, then I don't know how it would be possible for me to accurately perform validation after synchronization. Any help clarifying whether it's expected for the version number to lag behind the actual state of the table, or suggestions of techniques I could use to get around this problem, would be much appreciated.
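One documented mitigation is to read the statistics and the version inside a SNAPSHOT transaction, so that both come from the same transactionally consistent view; Microsoft's change tracking guidance recommends snapshot isolation for exactly this kind of consistent enumeration. A minimal sketch, assuming ALLOW_SNAPSHOT_ISOLATION can be enabled on the source database:

-- One-time setup (outside the transaction):
ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
-- The statistics and the version now reflect the same point-in-time snapshot
SELECT
    AVG(CONVERT(FLOAT, [NumericValue])) AS Avg,
    SUM(CONVERT(FLOAT, [NumericValue])) AS Sum,
    COUNT(*) AS Count,
    CHANGE_TRACKING_CURRENT_VERSION() AS CTVersion
FROM [SourceTable];
COMMIT TRANSACTION;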
Davide De Simone (11 rep)
Aug 7, 2020, 05:26 AM • Last activity: Mar 20, 2025, 12:07 AM
2 votes
2 answers
696 views
Post migration MySQL database validation
We are migrating from MySQL 5.5 Enterprise to MySQL 8.0 Enterprise Edition, version by version: from 5.5 to 5.6, then 5.6 to 5.7, and lastly from 5.7 to 8.0. There is also an OS migration from RHEL 5.4 to 8.4. That is all okay, but the client wants the data validated value by value. Normal data validations like row counts and checksums are not sufficient for the client, as the data is sensitive. Can you suggest any good tool, script, or query for value-to-value data validation?
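For a query-based check, one common pattern is to compute a per-row hash over every column on both servers and diff the two result sets; the sketch below assumes a hypothetical table orders with primary key id, and wraps each column in COALESCE with a sentinel so NULL hashes differently from an empty string. MySQL's mysqldbcompare utility (MySQL Utilities) is another option for row-level comparison.

-- Run on both the source and the target, then diff the output files.
-- Table and column names are illustrative.
SELECT id,
       MD5(CONCAT_WS('|',
           COALESCE(customer_id, '<NULL>'),
           COALESCE(amount,      '<NULL>'),
           COALESCE(created_at,  '<NULL>'))) AS row_hash
FROM orders
ORDER BY id;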
Sameer Bille (21 rep)
Mar 30, 2022, 11:11 AM • Last activity: Mar 6, 2025, 03:03 AM
0 votes
1 answer
54 views
Should I let the user type the primary key?
I have a Products table, and two options:

1. Let the user type the PK, because some users can input barcodes and some cannot (in that case, the app will leave the user free to input anything, for example his own code conventions, like IC-O for Oreo Ice Cream). Obviously, I need to add some validations to avoid PKs with thousands of characters, and more.
2. Use an auto-increment PK and create another field for a "SecondaryCode"; for the user this will be the primary key, with some validations for SecondaryCode here too. It feels weird using a SecondaryCode like a PK.

Is there any very bad reason not to choose the first option? Theoretically it is good and easier to manage, but I don't feel comfortable exposing something as important as a primary key. Maybe I'm missing something obvious.
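For what it's worth, option 2 is usually expressed like the sketch below (SQL Server syntax for concreteness; all names illustrative): the surrogate PK stays internal, while the user-facing code is enforced as UNIQUE, so it behaves like a key for the user without carrying the structural role of the PK in foreign keys.

CREATE TABLE Products (
    ProductID     INT IDENTITY(1,1) PRIMARY KEY,  -- internal surrogate key, never shown to users
    SecondaryCode VARCHAR(32) NOT NULL UNIQUE,    -- barcode or user convention, e.g. 'IC-O'
    ProductName   VARCHAR(200) NOT NULL,
    CONSTRAINT CK_Products_SecondaryCode CHECK (LEN(SecondaryCode) > 0)  -- no empty codes
);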
Lund (3 rep)
Jan 24, 2025, 02:21 AM • Last activity: Jan 24, 2025, 04:11 AM
19 votes
2 answers
30237 views
How do I store phone numbers in PostgreSQL?
Let's assume I want to store phone numbers in a database. I may accept phone numbers from outside of the United States. How would I go about storing these phone numbers?
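One widely used approach (a sketch, not the only valid design) is to normalize every number to E.164 before storing it, keep it in a text column with a CHECK constraint, and handle display formatting in the application; table and column names here are illustrative.

CREATE TABLE contacts (
    id    bigserial PRIMARY KEY,
    -- E.164: a '+', a non-zero digit, then up to 14 more digits (max 15 total)
    phone text NOT NULL CHECK (phone ~ '^\+[1-9][0-9]{1,14}$')
);

INSERT INTO contacts (phone) VALUES ('+14155552671');  -- a US number in E.164 form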
Evan Carroll (65502 rep)
Feb 18, 2017, 03:58 PM • Last activity: Jul 14, 2024, 05:10 PM
0 votes
0 answers
139 views
Data validation for RDS PostgreSQL migration
I'm planning to replace a bunch of RDS PostgreSQL databases using this method defined by AWS (their "Migration method" diagram is not shown here). The DMS task will be CDC-only. I came up with data validation requirements for the migration and am open to feedback.

# Requirements

1. If a table **has a PK**, validate it using AWS DMS.
2. If a table has a column that would be considered LOB, such as JSON, validate the column by hashing the specific column on the source and target, then diffing the sums (method A).
3. If a table doesn't have a primary key but has a column that is effectively unique, yet not declared as UNIQUE in the schema, validate the table using WbDataDiff (method B). The effectively unique column will be specified as the alternate key (--alternateKey).
4. If a table doesn't have a primary key and has no unique columns, validate the table by hashing a set of columns with highly variable data on the source and target, then diffing the sums (method A).

# Validation Methods

A. Hash a single column or set of columns with highly variable data and diff.
- This method can validate LOB migration, e.g. JSON columns.
- This method can validate tables that don't have a primary key or any effectively UNIQUE column.
- The SQL below came from this source.
SELECT
    -- Prepends an x the extracted first 8 characters of the hash
    -- (generates a hash string; the purpose of the x is so that postgres interprets them as hex strings when casting to a number)
    -- then converts the string to a 32 bit int
    -- Finally, all of the ints are summed
    sum(('x' || SUBSTRING(hash, 1, 8)) :: BIT(32) :: BIGINT) as sum1,
    -- Does the same as the column above, but with the next 8 characters
    sum(('x' || SUBSTRING(hash, 9, 8)) :: BIT(32) :: BIGINT) as sum2,
    sum(('x' || SUBSTRING(hash, 17, 8)) :: BIT(32) :: BIGINT) as sum3,
    sum(('x' || SUBSTRING(hash, 25, 8)) :: BIT(32) :: BIGINT) as sum4
FROM
    (
        SELECT
            md5 (
                -- When the column value is null, use a space
                COALESCE(md5("address"::TEXT), ' ') ||
                COALESCE(md5("date"::TEXT), ' ')
            ) AS hash
        FROM
            example_table
    ) AS t;
B. Use SQL Workbench/J's WbDataDiff data comparison tool to validate tables that don't have a PK or UNIQUE key. This method can show the exact differences in the data between the source and target.
- The tool accepts an alternate key (--alternateKey); the alternate key does not need to be declared as UNIQUE in the schema.
WbDataDiff -referenceProfile="source"
           -targetProfile="target"
           -referenceTables=lerg1
           -file=lerg1_diff.sql
           -includeDelete=true
           -alternateKey='lerg1=ocn_no'
           -excludeRealPK=true
           -showProgress=true
Trouble Bucket (159 rep)
Sep 11, 2023, 08:38 PM
0 votes
0 answers
411 views
Validate Excel File data source in SSIS package
I need to load a number of Excel files into a SQL Server table using an SSIS package. I am able to successfully loop through and load my Excel files into my table, so I have no issue doing that. BUT, as part of the task, I need to validate (make sure) that the files have a consistent format across all of the received Excel files. The things I need to check are:

+ The number of columns should be the same.
+ The data type of each column should be the same.
+ Certain columns should not have null/empty values.

What do you suggest as the best way (for simplicity and performance) to do this? Is using a Script Task / C# a good way? (The question is only about validating the Excel file format and column data types.)
Alex ghulami (1 rep)
Aug 8, 2023, 11:43 PM
0 votes
0 answers
103 views
Oracle Database schema validation
I am working on a database migration and want to validate the destination schema against the source schema for tables, views, indexes, sequences, constraints, etc. I can do that by querying the dba_ views individually, but this is time consuming and we have to record the results manually. I was wondering if there is any PL/SQL code which can be used to perform all these validations at once and give an output which can be stored in a CSV/Excel file.
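One lightweight starting point (a sketch, assuming a database link src_link to the source and the same owner on both sides) is to diff the object inventories with MINUS and spool the result to CSV from SQL*Plus or SQLcl; run it once per direction and per detail level (dba_objects, dba_tab_columns, dba_constraints, and so on).

-- Objects present at the source but missing at the destination.
-- Swap the MINUS operands to check the opposite direction.
SELECT object_type, object_name
  FROM dba_objects@src_link
 WHERE owner = 'APP_SCHEMA'
MINUS
SELECT object_type, object_name
  FROM dba_objects
 WHERE owner = 'APP_SCHEMA'
 ORDER BY object_type, object_name;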
JuniorDeveloper (1 rep)
Jun 27, 2023, 08:18 AM • Last activity: Jun 27, 2023, 08:21 AM
0 votes
1 answer
181 views
Best practices for tracking ingested data sources
I am in the process of creating an ingestion pipeline wherein there exists a step of periodically reading new .csv files and storing them into a postgres database. This step is functioning, however it is currently impractical/time-consuming to verify if the data in any certain file has been fully and correctly ingested into the database. I am essentially operating in blind trust that the database is a point of truth, but I would like to be able to be a little more certain. The first step I was planning to take is to store runtime metadata during ingestion jobs (e.g. filename, time of ingestion, job result) in its own table in the database. While this won't speak to the _integrity_ of the data, it would at least allow some insight into what has been processed. Any guidance on best practices and what else I can do re: data validation for a setup like this would be greatly appreciated!
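A minimal sketch of such a metadata table (PostgreSQL; all names illustrative). Recording a file checksum and the parsed vs. loaded row counts gives a cheap integrity signal beyond "the job ran", and the UNIQUE constraint guards against ingesting the same file twice:

CREATE TABLE ingestion_log (
    id            bigserial   PRIMARY KEY,
    filename      text        NOT NULL,
    file_checksum text,                        -- e.g. sha256 of the raw file
    rows_in_file  integer,                     -- data rows parsed from the .csv
    rows_loaded   integer,                     -- rows actually written to the target table
    job_result    text        NOT NULL,        -- 'success' / 'failed' / 'partial'
    ingested_at   timestamptz NOT NULL DEFAULT now(),
    UNIQUE (filename, file_checksum)
);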
SIRCAVEC (101 rep)
Oct 31, 2022, 07:31 PM • Last activity: Nov 1, 2022, 02:11 AM
1 vote
3 answers
8305 views
Validating the GST Identification Number (GSTIN)
The following image describes the format of the GST Identification Number (image: GSTIN format breakdown):

- The first 2 digits denote the unique *State Code* in accordance with the Indian Census 2011. For instance, the *State Code* of New Delhi is '07' and that of Karnataka is '29'.
- The next 10 characters denote the *PAN* (*Permanent Account Number*) of the taxpayer.
- The 13th digit denotes the *Registration Number* (or *Entity Number*) of the taxpayer with the same *PAN*.
- The 14th digit is 'Z' by default for all – not indicating anything currently.
- The 15th digit is the *Check sum digit* – it can be a number or an alphabetical character.

Perhaps I can carry out the validation by means of a PATINDEX or a regex?
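A positional pattern is enough to validate the shape (though not the actual checksum value). A sketch in T-SQL encoding the structure above: 2 digits, the PAN pattern (5 letters, 4 digits, 1 letter), an alphanumeric entity number, the literal 'Z', and an alphanumeric checksum character. Note that under a case-insensitive collation, [A-Z] also matches lowercase letters.

SELECT gstin,
       CASE WHEN gstin LIKE
            '[0-9][0-9][A-Z][A-Z][A-Z][A-Z][A-Z][0-9][0-9][0-9][0-9][A-Z][0-9A-Z]Z[0-9A-Z]'
            THEN 'valid shape' ELSE 'invalid' END AS result
FROM (VALUES ('07ABCDE1234F1Z5'), ('29XYZ')) AS v(gstin);
-- The 15-element pattern implicitly enforces the exact length of 15.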
Sunil Patil (113 rep)
Apr 11, 2018, 11:48 AM • Last activity: Apr 15, 2022, 10:53 AM
2 votes
1 answer
10044 views
SQL Server - MERGE duplicate SOURCE rows validation is only evaluated on UPDATE and DELETE clauses
I *want* SQL Server to throw an error at me if my SOURCE table contains duplicates as per the ON clause in *every* case, including INSERT, but it only does so for the UPDATE and DELETE cases. **Why doesn't it?** For example, it takes two executions of this statement to get any sort of validation error. First, let's define the table:

--assume this is a permanent table that starts empty:
CREATE TABLE #MyTable(KeyID1 INT, KeyID2 INT, SomeValue REAL) --no primary key on purpose for example

And the MERGE statement I wish would throw the error at me on the first execution:

MERGE #MyTable AS TARGET
USING (
    SELECT
        KeyID1 = 1
        ,KeyID2 = 1
        ,SomeValue = 1
    UNION ALL
    SELECT
        KeyID1 = 1
        ,KeyID2 = 1
        ,SomeValue = 2
) AS SOURCE
    ON TARGET.KeyID1 = SOURCE.KeyID1
    AND TARGET.KeyID2 = SOURCE.KeyID2
WHEN MATCHED THEN
    UPDATE SET TARGET.SomeValue = SOURCE.SomeValue
WHEN NOT MATCHED BY TARGET THEN
    INSERT (
        KeyID1
        ,KeyID2
        ,SomeValue
    )
    VALUES (
        SOURCE.KeyID1
        ,SOURCE.KeyID2
        ,SOURCE.SomeValue
    );

After the first execution, #MyTable contains both source rows (image of the sample results not shown). The error message only comes on the second and subsequent executions:

> Msg 8672, Level 16, State 1, Line 4
> The MERGE statement attempted to UPDATE or DELETE the same row more than once. This happens when a target row matches more than one source row. A MERGE statement cannot UPDATE/DELETE the same row of the target table multiple times. **Refine the ON clause to ensure a target row matches at most one source row**, or use the GROUP BY clause to group the source rows.

The wording of this error message seems to imply my desired validation should be occurring already. Of course, one obvious thing to do is make the primary key on the TARGET table reject the INSERT, but that is unrelated to my question. P.S. I came across this list of MERGE bugs that doesn't seem to apply here.
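Since error 8672 is only raised while matching existing target rows (on the first run the empty target makes every source row NOT MATCHED), one workaround (a sketch) is to validate the source yourself before merging, which fails on the first execution regardless of the target's contents:

IF EXISTS (
    SELECT KeyID1, KeyID2
    FROM (
        SELECT KeyID1 = 1, KeyID2 = 1, SomeValue = 1
        UNION ALL
        SELECT KeyID1 = 1, KeyID2 = 1, SomeValue = 2
    ) AS SOURCE
    GROUP BY KeyID1, KeyID2
    HAVING COUNT(*) > 1
)
    THROW 50001, 'MERGE source contains duplicate key rows.', 1;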
Elaskanator (761 rep)
Aug 22, 2019, 03:03 PM • Last activity: Mar 6, 2021, 03:06 AM
0 votes
0 answers
900 views
"Document Failed Validation" what is wrong with my schema validation
I am trying to create a collection with JsonSchema (really it's "BsonSchema") validation. This is the document that I want to insert (with bogus values). Note that the Id is coming from the Mongo C# driver as the ObjectID via StringObjectIdGenerator.

{
    "_id": "60300d5d756a0b91278f2a3d",
    "UserId": 2,
    "UserName": "UserName_1",
    "Password": "Password_1",
    "PasswordSalt": "InstallSchema",
    "CreatedDateUtc": "2021-02-19T19:11:22.6438703Z"
}

This is the latest of the many schemas that I have tried to use:

{
    "bsonType": "object",
    "properties": {
        "_id": { "bsonType": "string" },
        "UserId": { "bsonType": "int" },
        "UserName": { "bsonType": "string" },
        "Password": { "bsonType": "string" },
        "PasswordSalt": { "bsonType": "string" },
        "CreatedDateUtc": { "bsonType": "date" }
    },
    "required": ["_id", "UserId", "UserName", "Password", "PasswordSalt", "CreatedDateUtc"]
}

When inserting the document into the collection, it reports "Document failed validation" with no more specific detail. I know the document is correct; it's the schema that is not, but I can't figure out what is wrong with it. I've tried different variations of the Id property, including _id as an ObjectId type and removing it completely, but I don't know if that's even the problem. Any ideas on how to figure this out? Also, I did try creating the collection without schema validation, and the document inserts fine. Then, in Compass, I ran analysis for the collection and exported what it calls a schema (a very different format from the validator schema), and the properties and types match the schema above.
StingyJack (101 rep)
Feb 19, 2021, 05:59 PM • Last activity: Feb 21, 2021, 10:26 PM
2 votes
2 answers
1582 views
Insert/update data into relational database tables only after administrator approval - how to approach it?
I'm building a relational database that holds 21 tables, which is connected to my website application through PHP. The website has search functionality which allows users to search this database. Also, users can enter new data through a special form. Because these entries must pass rigorous control by administrators, I want to make some kind of approval system, available only to the administrators, who can see user input before it appears in the search results and approve it, return it to the user to make changes, or delete it. My question is: what would be an approach for this situation from a relational database design standpoint, and what is the best practice? P.S. I know my question is probably broad, but I could not find an answer by searching the internet or other posts on this website.
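One common shape for this (a sketch with illustrative names, MySQL syntax): keep submissions in the base tables with a moderation status plus a field for administrator feedback, and point the public search at a view that only exposes approved rows. The same pattern can be repeated per table, or submissions can be held in staging copies of the tables instead.

CREATE TABLE entries (
    entry_id   INT AUTO_INCREMENT PRIMARY KEY,
    payload    TEXT NOT NULL,
    status     ENUM('pending','approved','returned') NOT NULL DEFAULT 'pending',
    admin_note TEXT NULL,              -- feedback when an entry is returned for changes
    created_by INT  NOT NULL,
    created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- The search feature queries this view, so pending/returned rows stay invisible.
CREATE VIEW searchable_entries AS
SELECT entry_id, payload, created_by, created_at
FROM entries
WHERE status = 'approved';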
Boris J. (25 rep)
Nov 1, 2018, 07:26 PM • Last activity: Sep 27, 2020, 07:53 AM
0 votes
1 answer
16967 views
Validation logic for Mobile Number Column in SQL Server
I am creating a validation script and need help with mobile number validation logic, with the details below. The mobile number must be:

- 10 digits in length.
- Cannot begin with a 1 or a 0 (for example: 1144684679 or 04468-4679).
- Cannot contain all the same digit (for example: 5555555555).

I am using PATINDEX for this but could not come up with the proper result.
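A sketch of the three rules in T-SQL (the variable is illustrative): a LIKE pattern of ten character classes enforces both the length and the digits-only rule, the first class excludes 0 and 1, and REPLICATE catches the all-same-digit case.

DECLARE @mobile VARCHAR(20) = '5554684679';

SELECT CASE
         WHEN @mobile NOT LIKE '[2-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
           THEN 'invalid: must be exactly 10 digits and not start with 0 or 1'
         WHEN @mobile = REPLICATE(LEFT(@mobile, 1), 10)
           THEN 'invalid: all ten digits are identical'
         ELSE 'valid'
       END AS result;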
deep kachhawa (47 rep)
Apr 30, 2020, 09:32 AM • Last activity: Apr 30, 2020, 03:22 PM
1 vote
1 answer
428 views
Always Encrypted Data Validation
Just a question on how I could validate duplicate data in my database if a column is set to be encrypted (deterministic) in the DB. I've tried running the following code, but receive the error below.
declare @test1 varchar(max) = 'XXXXXX'
select * from testing where t1 = @test1
> Encryption scheme mismatch for columns/variables '@test1'. The encryption scheme for the columns/variables is (encryption_type = 'PLAINTEXT') and the expression near line '4' expects it to be (encryption_type = 'DETERMINISTIC', encryption_algorithm_name = 'AEAD_AXS_XXX_CBA_HMAC_SHA_XXX', column_encryption_key_name = 'CEK_Auto1', column_encryption_key_database_name = 'XXX') (or weaker).

How can I find the source of this error, or is there another way to determine duplicate rows?
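On the duplicate question itself: deterministic encryption produces the same ciphertext for the same plaintext, so equality operations such as GROUP BY run server-side without decrypting anything. A sketch (run from a connection with Column Encryption Setting=Enabled; the error above typically means the client sent the variable as plaintext, which in SSMS requires enabling Parameterization for Always Encrypted):

SELECT t1, COUNT(*) AS occurrences
FROM testing
GROUP BY t1
HAVING COUNT(*) > 1;   -- each t1 value appearing more than once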
Nathan (29 rep)
Sep 4, 2019, 11:07 AM • Last activity: Sep 4, 2019, 01:00 PM
1 vote
0 answers
30 views
Selecting Duplicates on All Fields
I have an MS Access (no laughing at the back) database I've used to import a bunch of IIS logs into. Having looked at the Excel files I pulled these in from, I'm worried, going by the dates, that some of the IIS files might have full duplicates (i.e. where every single field is identical). I know how to select duplicates on an individual column, but how can I issue an SQL query that will show me rows where **every** field is identical, so I don't get skewed results? If possible, some guidance on how to then have these rows deleted from the table, without having to do a million delete queries with the individual PK IDs, would be great too.

Example data: (image not shown)

In the above I'd be looking to identify rows 1 and 2, as they are exact duplicates, but 3 and 4 are okay as they share some but not all values. So the report would ideally come back with 1 and 2, so I could then note the IDs and delete all but one of the duplicates.
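A sketch in Access SQL with illustrative field names: grouping on every data column reports each fully duplicated combination once, and the follow-up DELETE keeps only the lowest ID per group (unique rows, being the sole member of their group, are untouched).

SELECT MIN(ID) AS KeepID, COUNT(*) AS Copies,
       [LogDate], [ClientIP], [Method], [URI], [Status]
FROM IISLogs
GROUP BY [LogDate], [ClientIP], [Method], [URI], [Status]
HAVING COUNT(*) > 1;

DELETE FROM IISLogs
WHERE ID NOT IN (
    SELECT MIN(ID)
    FROM IISLogs
    GROUP BY [LogDate], [ClientIP], [Method], [URI], [Status]
);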
user788561 (11 rep)
Jul 26, 2019, 02:52 PM • Last activity: Jul 30, 2019, 09:09 AM
1 vote
0 answers
38 views
How to use a validation rule within a database with drop-down menus
I'm building a scheduling database using Access 2013 for my company. I currently have it set up so that only employees that have a rating of 3 or more for the particular job show up when you use the drop-down menu. However, since we have several associates that meet that criteria for multiple positions, there is the risk that the supervisors may double-schedule someone. I am wondering if there is a way I can have a dialogue or warning pop up if the same name is added to the schedule in two places. If you need more information, please let me know! I feel like this is way easier than I am making it.
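If the schedule rows live in a table shaped like the illustrative one below, a plain duplicate count is enough to drive a warning (for instance, run it from the form's BeforeUpdate event and pop a MsgBox when it returns rows); a sketch:

SELECT EmployeeID, ShiftDate, COUNT(*) AS TimesScheduled
FROM Schedule
GROUP BY EmployeeID, ShiftDate
HAVING COUNT(*) > 1;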
Daniel (11 rep)
Nov 29, 2018, 02:40 PM
20 votes
2 answers
24065 views
How do I determine why a MongoDB document is failing validation?
How do I determine why a MongoDB document insert is failing validation? All I get back is a writeError that says "Document failed validation", which isn't very helpful. (This happens often, and I'd like to understand how to properly debug these, rather than ask for help with a specific example.)
Jonathan Wheeler (303 rep)
Aug 22, 2016, 11:38 PM • Last activity: Jul 13, 2018, 07:15 PM
1 vote
1 answer
1273 views
MySQL LOAD DATA INFILE - invalid data validation
Is it possible to validate the data and load it into a table using LOAD DATA INFILE? This is the query:

load data local infile 'C:/Users/970685/Desktop/SFcheck.csv'
into table wow_macro.starting_footage_stg
fields terminated by ','
IGNORE 1 LINES
FIELDS ESCAPED BY \N
(store_id, category_id, footage, plan_id, @merch_id);

This is the data in the CSV:

store_id  category-id  footage  plan_id  merch_id
1         1            3        1        fdfd

All the fields are INT in the table, but when I execute the query the value 'fdfd' is not validated and is saved as 0. (I have also declared the column NOT NULL in the table and have not set any default value.)
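Two common mitigations, sketched below. Note that with LOCAL, MySQL demotes conversion errors to warnings (as if IGNORE were given), so strict sql_mode alone mainly helps for server-side loads; the more reliable route is to read the suspect field into a user variable, validate it in the SET clause, and (assuming the staging column is left nullable for this purpose) flag bad input as NULL:

-- Strict mode rejects 'fdfd' outright for non-LOCAL loads:
SET SESSION sql_mode = 'STRICT_ALL_TABLES';

LOAD DATA LOCAL INFILE 'C:/Users/970685/Desktop/SFcheck.csv'
INTO TABLE wow_macro.starting_footage_stg
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(store_id, category_id, footage, plan_id, @merch_id)
SET merch_id = IF(@merch_id REGEXP '^[0-9]+$', @merch_id, NULL);

-- Rows whose merch_id failed validation:
SELECT * FROM wow_macro.starting_footage_stg WHERE merch_id IS NULL;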
SHEENA (11 rep)
Oct 21, 2015, 05:55 AM • Last activity: Jun 4, 2017, 10:24 AM
0 votes
1 answer
3021 views
VBA Excel data validation error 'Application-defined or object-defined error'
I've read loads of similar questions and answers, but keep getting the error. What I do is I export a query via:

DoCmd.TransferSpreadsheet acExport, 8, "blabla", outputFile, True

Then I'm attempting to add validation to a range in my new Excel file. Here I get an error on setting AlertStyle:=xlValidAlertStop, Operator:=xlBetween (no errors without them, but no validation is set as well, obviously):

> Application-defined or object-defined error

Running the same code from Excel causes no issues!

Sub ApplyExcelFormating(ByVal myFile As String, ByVal mySpreadsheet As String)
    'Excel is late-bound here (CreateObject), so its enums are not defined
    'in Access; declaring them explicitly fixes the two failing arguments.
    Const xlValidateList As Long = 3
    Const xlValidAlertStop As Long = 1
    Const xlBetween As Long = 1

    Dim myValidationList(1) As String
    myValidationList(0) = "YES"
    myValidationList(1) = "OK"

    Dim XL As Object, WB As Object, WKS As Object
    Set XL = CreateObject("Excel.Application")
    XL.Visible = True
    XL.DisplayAlerts = True
    Set WB = XL.Workbooks.Open(myFile)
    WB.Activate
    Set WKS = WB.worksheets(mySpreadsheet)

    Dim lastrow As Integer
    lastrow = WKS.UsedRange.Rows.Count
    WKS.Range("K2:K" & lastrow).Activate
    WKS.Range("K2:K" & lastrow).Select

    'Only these two cause the error:
    'AlertStyle:=xlValidAlertStop, Operator:=xlBetween,
    With WKS.Range("K2:K" & lastrow).Validation
        .Delete
        .Add Type:=xlValidateList, AlertStyle:=xlValidAlertStop, Operator:=xlBetween, Formula1:=Join(myValidationList, ",")
        .IgnoreBlank = True
        .InCellDropdown = True
        .InputTitle = ""
        .ErrorTitle = ""
        .InputMessage = ""
        .ErrorMessage = ""
        .ShowInput = True
        .ShowError = True
    End With

    WB.Save
    XL.Workbooks.Close
    XL.Visible = True
End Sub
Choper (11 rep)
Dec 12, 2016, 02:25 PM • Last activity: May 28, 2017, 11:06 PM
3 votes
2 answers
5873 views
Loop through list of fields, check against lookup table
I have a list of fields:

### FIELD_DOMAIN_ENG_VW

+-------------+------------+-------------+
| TABLE_NAME  | FIELD_NAME | DOMAIN_NAME |
+-------------+------------+-------------+
| ENG.TABLE_1 | FIELD_1    | DOMAIN_ABC  |
| ENG.TABLE_1 | FIELD_2    | DOMAIN_XYZ  |
| ENG.TABLE_2 | FIELD_1    | DOMAIN_XYZ  |
+-------------+------------+-------------+

The view looks at all the tables in a geodatabase and lists any fields that have a domain associated with them (a domain is the GIS equivalent of a lookup/validation table). The underlying tables look like this:

### TABLE_1

+--------------+--------------+
| FIELD_1      | FIELD_2      |
| {DOMAIN_ABC} | {DOMAIN_XYZ} |
+--------------+--------------+
| A            | X            |
| B            | Y            |
| C            | zzzz         |
| BLACK SHEEP  |              |
+--------------+--------------+

### TABLE_2

+--------------+--------------+
| FIELD_1      | FIELD_2      |
| {DOMAIN_XYZ} |              |
+--------------+--------------+
| Z            | ...          |
| Y            |              |
| X            |              |
| asdf         |              |
+--------------+--------------+

The domains look like this:

### DOMAIN_VALUES_VW

+------------+------+-------------+
| DOMAIN     | CODE | DESCRIPTION |
+------------+------+-------------+
| DOMAIN_ABC | A    | EH          |
| DOMAIN_ABC | B    | BEE         |
| DOMAIN_ABC | C    | SEE         |
+------------+------+-------------+
| DOMAIN_XYZ | X    | EX          |
| DOMAIN_XYZ | Y    | WHY         |
| DOMAIN_XYZ | Z    | ZEE         |
+------------+------+-------------+

The source is an XML column in a single system table; I've extracted all the domains into this view.

## Question

For validation purposes, I have made a query that will check if there are values in a field that do not match the corresponding domain:

INSERT INTO ENG.CV_ERRORS (TABLE_NAME, FIELD_NAME, ERROR)
SELECT
    'TABLE_1' AS TABLE_NAME
    ,'FIELD_1' AS FIELD_NAME
    ,FIELD_1 AS ERROR
FROM ENG.TABLE_1
LEFT JOIN (
    SELECT CODE
    FROM INFRASTR.D_CV_ENG_VW
    WHERE DOMAIN = 'DOMAIN_ABC'
)
    ON FIELD_1 = CODE
WHERE FIELD_1 IS NOT NULL
    AND CODE IS NULL

+------------+------------+-------------+
| TABLE_NAME | FIELD_NAME | ERROR       |
+------------+------------+-------------+
| TABLE_1    | FIELD_1    | BLACK SHEEP |
+------------+------------+-------------+

However, this query is hardcoded to be run on a single field, in a single table, at a time. I need to check all of the fields with domains, in all of the tables in the database, programmatically. How can I do this? I'm pretty sure this can be done with PL/SQL and dynamic SQL, but I'm so new to PL/SQL that it is proving to be rather difficult.
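A sketch of the dynamic-SQL loop (assuming the views shown above, and that the metadata is trusted, since identifiers are concatenated into the statement): for each field/domain pair it inserts the non-null values that have no matching domain code into ENG.CV_ERRORS.

DECLARE
    v_sql VARCHAR2(4000);
BEGIN
    FOR r IN (
        SELECT table_name, field_name, domain_name
        FROM field_domain_eng_vw
    )
    LOOP
        v_sql := 'INSERT INTO eng.cv_errors (table_name, field_name, error) '
              || 'SELECT ''' || r.table_name || ''', ''' || r.field_name || ''', t.' || r.field_name
              || '  FROM ' || r.table_name || ' t'
              || ' WHERE t.' || r.field_name || ' IS NOT NULL'
              || '   AND NOT EXISTS (SELECT 1 FROM infrastr.d_cv_eng_vw d'
              || '                    WHERE d.domain = ''' || r.domain_name || ''''
              || '                      AND d.code = t.' || r.field_name || ')';
        EXECUTE IMMEDIATE v_sql;
    END LOOP;
    COMMIT;
END;
/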
User1974 (1527 rep)
Jan 3, 2017, 05:54 PM • Last activity: Mar 29, 2017, 02:07 AM
Showing page 1 (20 questions total)