
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

2 votes
2 answers
118 views
sp_describe_first_result_set with json columns
I'm working on an analyzer that accepts a SQL Server query and returns information about all columns in the first result set. Until a FOR clause is used, this is pretty straightforward with sp_describe_first_result_set. However, the queries I expect heavily use FOR JSON, and sp_describe_first_result_set reports columns produced by this clause (correctly) as nvarchar. I've tried rolling my own parser with SqlScriptDOM, but parsing got complicated fast. I've also toyed with restructuring the inner queries via said parser so sp_describe_first_result_set could be run against them (basically extracting them to be top-level). This approach also hit the complexity wall. Is there any reasonable way to achieve this? When working in JetBrains' IDEs, I've noticed that their built-in SQL query editor has this functionality, as hovering over a star in an inner query shows the exact columns to be selected in a tooltip.
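A minimal sketch of the behaviour described, assuming any simple query wrapped in FOR JSON PATH (the inner query and column names are illustrative, not from the original):

```sql
-- a minimal repro sketch; the inner query and column names are illustrative
EXEC sys.sp_describe_first_result_set
     @tsql = N'SELECT 1 AS id, N''a'' AS name FOR JSON PATH',
     @params = NULL,
     @browse_information_mode = 0;
-- the metadata describes a single nvarchar column holding the whole JSON document,
-- not the individual columns serialized inside it
```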
Matěj Štágl (205 rep)
Jun 7, 2025, 10:08 AM • Last activity: Jun 9, 2025, 06:34 PM
0 votes
1 answer
246 views
Reorg SYSIBM tables to reduce extents (XT) - change priqty & secQTY
I have a new DB2 V12 database on z/OS 2.4 (z13), with hundreds of programs to rebind, touching catalog tables such as sysibm.syspackage, sys.columns, sys.tables, etc. These will grow, hence the need to reorganize the tablespace for performance. The maintenance window is one hour, so only one tablespace is being done.

1. Is a REORG run on system tables the same as for user tables?
2. Is a special IBEGENR (sp?) VSAM redefine needed for system tables?

Sample JCL:
//REORGS1 EXEC PGM=DSNUTILB,PARM='MT03,REORG1DB' 
   //STEPLIB  DD  DSN=DSNC10.SDSNLOAD,DISP=SHR 
   //SYSREC   DD  DSN=&&SYSREC,DISP=(NEW,DELETE,DELETE),       
   //             SPACE=(TRK,(9000,5000),,,ROUND),UNIT=SYSDA 
 LISTDEF TBLSLIST                                     
     INCLUDE TABLESPACES DATABASE TR032             
 REORG                                                
    TABLESPACE LIST TBLSLIST   COPYDDN(CDD1) LOG NO  
    SHRLEVEL REFERENCE                               
    SORTKEYS SORTDATA SORTDEVT SYSDA                 
    STATISTICS TABLE(ALL) INDEX(ALL)
TechnoCaveman (1 rep)
Feb 17, 2022, 01:43 PM • Last activity: May 25, 2025, 02:01 AM
1 vote
0 answers
63 views
Looking for a query to get a view's referenced columns in another database
Getting the referenced columns from a view that references data in its own database is pretty straightforward with the INFORMATION_SCHEMA.VIEW_COLUMN_USAGE view, but I'm struggling with getting the referenced columns and tables that are in another database. I've come to this, but it lacks the referenced columns. Do I need another join with a sys table to get this information?

SELECT o.name
     , r.referenced_server_name
     , r.referenced_database_name
     , r.referenced_entity_name
FROM sys.objects o
JOIN sys.sql_expression_dependencies r ON o.object_id = r.referencing_id

In the meantime I've found this one, but it would be handy to also have the target columns in the list; as it stands you don't know which column in the view refers to which source table/column.

select distinct 'PersonDetails', cols.*
from sys.sql_expression_dependencies objs
outer apply sys.dm_sql_referenced_entities (OBJECT_SCHEMA_NAME(objs.referencing_id) + N'.' + object_name(objs.referencing_id), N'OBJECT') as cols
where objs.referencing_id = object_id('PersonDetails')
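For reference, the DMV used in the second attempt already exposes the referenced column through referenced_minor_name; a hedged sketch, assuming a view named dbo.PersonDetails (for cross-database references the column-level rows may not resolve):

```sql
-- a sketch, not a complete answer: referenced_minor_name carries the column name
-- where the engine could resolve it (dbo.PersonDetails is an assumed view name)
SELECT re.referenced_server_name,
       re.referenced_database_name,
       re.referenced_schema_name,
       re.referenced_entity_name,
       re.referenced_minor_name AS referenced_column
FROM sys.dm_sql_referenced_entities (N'dbo.PersonDetails', N'OBJECT') AS re
WHERE re.referenced_minor_id > 0;   -- keep only the rows that resolved to a column
```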
chittybang (151 rep)
Apr 8, 2025, 08:33 AM • Last activity: Apr 8, 2025, 02:43 PM
1 vote
1 answer
62 views
How to resolve pg_locks.objid to a meaningful object name
I'm trying to work out what a particular transaction is locking that prevents other queries from executing. I queried pg_locks like this:
select c.relname, l.*
from pg_locks l
join pg_database d on l.database = d.oid
left join pg_class c on l.relation = c.oid
where d.datname = 'MYDB' and pid = MYPID
That returned a bunch of results like this:

| relname | locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath | waitstart |
| ------- | -------- | -------- | -------- | ---- | ----- | ---------- | ------------- | ------- | ----- | -------- | ------------------ | --- | ---- | ------- | -------- | --------- |
| NULL | object | 14035700 | NULL | NULL | NULL | NULL | NULL | 2606 | 14051522 | 0 | 4/431226 | 793556 | AccessExclusiveLock | true | false | NULL |

relation was NULL for most (not all) results. For these, classid and objid were non-null (as in the example above). How do I resolve those to useful object identifiers? According to the [pg_locks docs](https://www.postgresql.org/docs/current/view-pg-locks.html) locks are "identified by class OID and object OID, in the same way as in pg_description". When I query [pg_description](https://www.postgresql.org/docs/current/catalog-pg-description.html), however, I cannot find those OIDs. The values of the objoid column there are all under 20000 - there is nothing like 805681436.

**Edit:** updated query based on Laurenz Albe's answer (thanks!)
select l.granted, l.locktype, l.mode, l.classid, l.objid, l.classid::regclass
	, c.relname, cns.conname -- relation
	, cns.conrelid::regclass as constraint_table, cns.contype -- constraint
	, trg.tgrelid::regclass as trigger_table, trg.tgname -- trigger
from pg_locks l
join pg_database d on l.database = d.oid
left join pg_class c on l.relation = c.oid
left join pg_constraint cns on l.objid = cns.oid and l.classid::regclass = 'pg_constraint'::regclass
left join pg_trigger trg on l.objid = trg.oid and l.classid::regclass = 'pg_trigger'::regclass
where d.datname = 'MYDB' and pid = MYPID
EM0 (250 rep)
Mar 10, 2025, 04:30 PM • Last activity: Mar 17, 2025, 05:25 PM
2 votes
2 answers
230 views
Azure MS SQL: grant sys.* and msdb.dbo.* select permissions for Grafana monitoring
I'm trying to monitor Azure MS SQL databases using [this Grafana dashboard](https://grafana.com/grafana/dashboards/21378-microsoft-sql-server-dashboard/). It executes several queries against tables whose names start with sys and msdb.dbo. I've created a dedicated grafana user for this and tried adding the user to the db_datareader role, as well as GRANT SELECT ON SCHEMA :: sys TO [grafana]; and GRANT VIEW DATABASE STATE TO [grafana];, but I'm still getting permission errors. Trying to directly grant select on specific tables (e.g. GRANT SELECT ON sys.dm_exec_query_stats TO grafana;) on the master database, besides being tedious, seems not to be allowed even for the master user, as I get SQL Error [S0001]: Permissions for system stored procedures, server scoped catalog views, and extended stored procedures cannot be changed in this version of SQL Server. What's the right way of doing this?
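For reference, the statements described above, collected in one place; this is a sketch only (the contained-user creation and the password are assumptions), not a confirmed fix:

```sql
-- a sketch of the steps described in the question; the user name, password and
-- contained-user creation are assumptions, not a verified solution
CREATE USER [grafana] WITH PASSWORD = 'StrongPasswordHere!';   -- contained database user
ALTER ROLE db_datareader ADD MEMBER [grafana];
GRANT SELECT ON SCHEMA :: sys TO [grafana];
GRANT VIEW DATABASE STATE TO [grafana];
```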
barmanthewise (141 rep)
Nov 12, 2024, 07:52 AM • Last activity: Nov 13, 2024, 12:39 AM
2 votes
1 answer
221 views
Permissions to be granted to developers to investigate jobs, queries, and packages
I have just installed a new SQL Server 2014 server, migrated some databases there and connected some applications to them. These servers also have SSAS and SSIS. I have a group of developers that were helping with the connections and doing all the latest development on packages and data-warehousing work. Now that everything is working, I want to remove the sysadmin permissions that I had previously granted them. These developers have some requirements that I am willing to grant, so that there is less weight on my shoulders:

> 1) They need to see query plans.
> 2) They need to identify missing indexes and other expensive operations like key lookups, etc.
> 3) They need to query sys views and tables - monitor what procedures are running, collect wait stats, etc.
> 4) They also need to troubleshoot jobs - so they need to be able to see them and see the job history, in order to find out what went wrong.
> 5) They also need to see everything in the SSIS integration catalog, so that when something goes wrong inside one of their packages they are able to figure it out by themselves.

I have granted them the following **server permissions**:

> 1) VIEW SERVER STATE
> 2) VIEW ANY DEFINITION

On the **msdb database** I got some ideas from "SQL Server Agent Fixed Database Roles":

[screenshot: msdb roles granted]

and finally on the **SSISDB database**:

[screenshot: SSISDB roles granted]

Anything else that I should be aware of?
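A sketch of the grants described above, for reference; the principal name [Developers] and the specific msdb/SSISDB roles are assumptions, since the screenshots showing the exact roles granted are not reproduced here:

```sql
-- server-level permissions the question says were granted ([Developers] is a placeholder)
USE [master];
GRANT VIEW SERVER STATE   TO [Developers];
GRANT VIEW ANY DEFINITION TO [Developers];

-- msdb: job visibility and job history via an Agent fixed database role
-- (SQLAgentReaderRole is an assumption; the screenshot showed the roles actually used)
USE [msdb];
ALTER ROLE SQLAgentReaderRole ADD MEMBER [Developers];

-- SSISDB: visibility into the Integration Services catalog
-- (ssis_admin is an assumption; a less privileged role may be preferable)
USE [SSISDB];
ALTER ROLE ssis_admin ADD MEMBER [Developers];
```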
Marcello Miorelli (17274 rep)
Aug 18, 2015, 11:43 AM • Last activity: Mar 25, 2024, 07:47 PM
0 votes
1 answer
95 views
Metadata query slow on one specific database
I have quite a weird problem (who doesn't). Our ETL software scans source databases to collect object metadata. It extracts information from INFORMATION_SCHEMA about objects (tables and views):
SELECT
t.[TABLE_SCHEMA],
t.[TABLE_NAME],
c.[COLUMN_NAME],
c.[DATA_TYPE],
c.[CHARACTER_MAXIMUM_LENGTH],
c.[NUMERIC_PRECISION],
c.[NUMERIC_SCALE],
c.[DATETIME_PRECISION],
c.[ORDINAL_POSITION],
c.[IS_NULLABLE],
prim.[CONSTRAINT_NAME] AS [PRIMARY_KEY_NAME],
t.[TABLE_TYPE]
FROM [INFORMATION_SCHEMA].[TABLES] t
INNER JOIN [INFORMATION_SCHEMA].[COLUMNS] c
ON t.[TABLE_CATALOG] = c.[TABLE_CATALOG]
AND t.[TABLE_SCHEMA] = c.[TABLE_SCHEMA]
AND t.[TABLE_NAME] = c.[TABLE_NAME]
LEFT OUTER JOIN (
SELECT tc.[CONSTRAINT_NAME], tc.[CONSTRAINT_SCHEMA], tc.[CONSTRAINT_CATALOG], cc.[TABLE_NAME], cc.[COLUMN_NAME]
FROM [INFORMATION_SCHEMA].[TABLE_CONSTRAINTS] tc
INNER JOIN [INFORMATION_SCHEMA].[CONSTRAINT_COLUMN_USAGE] cc
ON tc.[CONSTRAINT_NAME] = cc.[CONSTRAINT_NAME]
AND tc.[CONSTRAINT_SCHEMA] = cc.[CONSTRAINT_SCHEMA]
AND tc.[CONSTRAINT_CATALOG] = cc.[CONSTRAINT_CATALOG]
AND tc.[TABLE_NAME] = cc.[TABLE_NAME]
AND tc.[TABLE_CATALOG] = cc.[TABLE_CATALOG]
AND tc.[TABLE_SCHEMA] = cc.[TABLE_SCHEMA]
WHERE tc.[CONSTRAINT_TYPE] = 'PRIMARY KEY') prim
ON t.[TABLE_CATALOG] = prim.[CONSTRAINT_CATALOG]
AND t.[TABLE_SCHEMA] = prim.[CONSTRAINT_SCHEMA]
AND t.[TABLE_NAME] = prim.[TABLE_NAME]
AND c.[COLUMN_NAME] = prim.[COLUMN_NAME]
ORDER BY t.[TABLE_SCHEMA], t.[TABLE_NAME], c.[ORDINAL_POSITION]
For most databases (depending on the number of objects and columns) this takes about one minute to complete. For one specific database it takes between 10 and 35 minutes! I can't seem to figure out why. I cannot change the above query (since it's embedded in the software), so I'm looking for a way to improve performance on this one database. The query outputs about the same number of records, and I've checked the database configuration - it is exactly the same. When I tested with 'SET STATISTICS IO ON' it was clear that there was a lot more going on in the poorly-performing database than in the other databases. I've put the output in www.statisticsparser.com and this is what it looks like for the poorly-performing database:

[parsed statistics output for the slow database]

For comparison, this is what it looks like for a normal database:

[parsed statistics output for a normal database]

I'm out of options myself. I've tried purging the proc and system caches (with the corresponding DBCC commands) and updating statistics on the whole database, but (as expected) it did no good. Any help is appreciated! Kind regards

EDIT: I've added the query plans to compare with a database of about the same size and structure on the same server (which doesn't have the performance problem): https://file.io/ODajcfrzvGTP It contains 2 files:

- _fast is from the database *without* any problems
- _slow is from the database *with* problems
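For reference, the diagnostics described above collected as a sketch (run with care: the DBCC statements clear caches instance-wide, and [ProblemDatabase] is a placeholder name):

```sql
-- a sketch of the diagnostics already tried in the question; [ProblemDatabase] is a placeholder
SET STATISTICS IO ON;          -- then run the metadata query and compare logical reads per database

DBCC FREEPROCCACHE;            -- purge the plan cache (instance-wide!)
DBCC FREESYSTEMCACHE ('ALL');  -- purge system caches (instance-wide!)

USE [ProblemDatabase];
EXEC sys.sp_updatestats;       -- update statistics across the database
```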
LonelyDBA92 (5 rep)
Mar 12, 2024, 01:20 PM • Last activity: Mar 22, 2024, 01:32 PM
0 votes
2 answers
168 views
Name in sys.availability_groups_cluster returns double character length
select name, len(name) as length from sys.availability_groups_cluster

returns some doubled values:

| name | length |
| ------------- | ------ |
| AG_QBUS_MFG01 | 13 |
| AG_QBUS003 | 20 |
| TEST_ONLY | 18 |
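A diagnostic sketch that might narrow this down, under the assumption that the extra length comes from non-printing characters stored in the name; DATALENGTH and the raw bytes make those visible:

```sql
-- a hedged diagnostic sketch: compare character length, byte length and raw bytes
SELECT name,
       LEN(name)                     AS char_len,   -- trailing spaces excluded
       DATALENGTH(name)              AS byte_len,   -- nvarchar: 2 bytes per character
       CONVERT(varbinary(256), name) AS raw_bytes   -- reveals any non-printing characters
FROM sys.availability_groups_cluster;
```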
CrushingIT (33 rep)
Mar 10, 2023, 06:57 PM • Last activity: Nov 18, 2023, 05:47 AM
0 votes
0 answers
47 views
Mysql system tables showing incorrect cardinality for columns
I'm new to MySQL. I have encountered a weird problem on our prod database. We have a table with primary key a and an index on another column b. Here's the query:
select b from my_table where a in (1,2,3,4,5,6,7,8,9,10,11,12....n) ;
Ideally, for such cases my query should use the PRIMARY KEY index on column a. I have verified that column a has all unique values (as it must, being the primary key), while column b has many duplicate values. But when I checked the execution plan, the query is using the index on column b (b-idx) rather than the PRIMARY KEY index. I ran ANALYZE on this table, but even after that there are no changes in the execution plan. I can't run OPTIMIZE since it's on prod.

Then I checked SHOW INDEX FROM my_table; and there I found something quite weird: the Cardinality value for column a was lower than for column b. I guess by default the optimizer picks the higher-cardinality column, so it's picking b-idx.

Please guide me on what to do in such cases. Is this a bug in MySQL or expected behaviour? I couldn't find how to update stats for a specific index on a prod table. I want our query to use the right index by default.
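For reference, the statements mentioned above collected in one place, plus an index hint as a possible workaround; the hint is an assumption of mine, not something from the question:

```sql
-- statements described in the question (MySQL; table/column names as in the question)
ANALYZE TABLE my_table;                               -- refresh index statistics
SHOW INDEX FROM my_table;                             -- inspect the Cardinality column per index
EXPLAIN SELECT b FROM my_table WHERE a IN (1,2,3);    -- confirm which index the optimizer picks

-- possible workaround (an assumption, not from the question): force the primary key
SELECT b FROM my_table FORCE INDEX (PRIMARY) WHERE a IN (1,2,3);
```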
Shiwangini (380 rep)
Sep 7, 2023, 05:33 PM
19 votes
1 answer
1223 views
DISTINCT not reducing two equal values to one
Can anyone explain the below situation, where two seemingly equal values are not reduced by DISTINCT?

[screenshot showing two values for 'SBS_UCS_DISPATCH']

The query above is SELECT DISTINCT name FROM master.sys.dm_os_spinlock_stats where name = 'SBS_UCS_DISPATCH';. The equivalent SELECT name FROM master.sys.dm_os_spinlock_stats where name = 'SBS_UCS_DISPATCH' GROUP BY name; also does the same, and adding HAVING COUNT(1) > 1 does not yield the rows.

@@VERSION is *Microsoft SQL Server 2019 (RTM-CU13) (KB5005679) - 15.0.4178.1 (X64) Sep 23 2021 16:47:49 Copyright (C) 2019 Microsoft Corporation Enterprise Edition: Core-based Licensing (64-bit) on Windows Server 2016 Standard 10.0 (Build 14393: )*
jimbobmcgee (529 rep)
Aug 3, 2023, 02:55 PM • Last activity: Aug 4, 2023, 03:14 PM
2 votes
1 answer
1370 views
Operator "~<~" uses varchar_pattern_ops index while normal ORDER BY clause doesn't?
Let's say I have a table with 1,000,000 records. The structure of the table is:

create table individual (
    id serial primary key,
    surname varchar(128),
    "name" varchar(128),
    patronymic varchar(128),
    birth_dt date
)

I create a composite index.

#### INDEX 1

create index on individual using btree (upper(surname) varchar_pattern_ops, upper("name") varchar_pattern_ops, patronymic varchar_pattern_ops, birth_dt);

The docs state that varchar_pattern_ops should be applied when using LIKE or pattern matching in a query. Conclusion: this index will not be used in the query below, even though it gets only 10 rows out of 1,000,000.

#### QUERY 1

select * from individual order by upper(surname), upper("name"), upper(patronymic), birth_dt limit 10;

Even more, the docs recommend creating an index without varchar_pattern_ops as well.

#### INDEX 2

create index on individual using btree (upper(surname), upper("name"), upper(patronymic), birth_dt);

Then a query using LIMIT will use this index. I found a trick on the Postgres users forum to force Postgres to use the first index: the operator ~<~.

#### QUERY 2

select * from individual order by upper(surname) using ~<~, upper("name") using ~<~, upper(patronymic) using ~<~, birth_dt limit 100;

In this case **INDEX 1** will be used even if **INDEX 2** doesn't exist. I tried to investigate why this happens, but failed. There are some system tables like pg_operator which (I think) link the operator ~<~ to some functions, most probably the ones used by LIKE or regular expressions. I ran **QUERY 1** and **QUERY 2** a few times and compared the results manually. It looks like the operator ~<~ gives the correct result, but I didn't risk anything and just created a normal index anyway. I am still interested in how the Postgres planner decides which index to use when it meets the operator ~<~ as in **QUERY 2**.
simar (203 rep)
Mar 10, 2015, 10:57 AM • Last activity: Aug 1, 2023, 05:08 PM
4 votes
3 answers
4393 views
How do I list the constraints of a DOMAIN?
Consider the following scenario:

CREATE DOMAIN dom_zipcode AS text;
ALTER DOMAIN dom_zipcode ADD CONSTRAINT zipchk CHECK (char_length(VALUE) = 5);

Now, if I want to drop that constraint with ALTER DOMAIN, the manual says:

ALTER DOMAIN name DROP CONSTRAINT [ IF EXISTS ] constraint_name [ RESTRICT | CASCADE ]

But how can we find constraint_name? \dD only shows the constraint's definition (the CHECK expression).

\dD dom_zipcode ;
                       List of domains
 Schema |    Name     | Type | Modifier |             Check
--------+-------------+------+----------+--------------------------------
 public | dom_zipcode | text |          | CHECK (char_length(VALUE) = 5)
(1 row)

I can dump the schema using pg_dump, but I believe there must exist a more elegant way to achieve this using the psql terminal.
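A sketch of one way to list a domain's constraints from the catalogs, relying on domain constraints being linked to the domain's pg_type row via pg_constraint.contypid:

```sql
-- a sketch: domain constraints live in pg_constraint, keyed to the domain via contypid
SELECT c.conname,
       pg_get_constraintdef(c.oid) AS definition
FROM   pg_constraint c
JOIN   pg_type t ON t.oid = c.contypid
WHERE  t.typname = 'dom_zipcode';
```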
Adam Matan (12079 rep)
Nov 24, 2014, 05:22 PM • Last activity: Nov 21, 2022, 11:11 AM
8 votes
5 answers
13857 views
Capturing datetime of change in SQL Server CDC
So we've started exploring using change data capture on one of our production databases. We'd like to know the datetime of each change. Reading through walkthroughs and tutorials etc., it seems that the standard approach is to use the LSN to relate to the cdc.lsn_time_mapping system table. This approach works but is not very straightforward nor performant when talking about hundreds of thousands of changes a day. In a test environment I made the following adjustment to the change-tracking tables: I issued an ALTER TABLE statement to add a column to the end called [__ChangeDateTime] and made its default value GETDATE(). The approach seems to work, the change tracking still functions normally, and the datetimes are being captured. *But mucking around with system tables makes me a little nervous.* If this isn't a system field that Microsoft added from the beginning, they must have had their reasons. Since they instead opted for the LSN-to-cdc.lsn_time_mapping approach, am I setting myself up for problems by creating my own hack this way?

**UPDATE:** Discovered during testing that GETDATE() at times isn't precise enough for our needs - multiple changes sharing the same time. Recommend using SYSDATETIME() and datetime2 to move the value out to 100-nanosecond precision. Option for 2008+ only, obviously.
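A sketch of the adjustment described, using the SYSDATETIME()/datetime2 variant from the update; the change-table name cdc.dbo_MyTable_CT is a hypothetical capture instance, not from the original:

```sql
-- a sketch of the described hack; cdc.dbo_MyTable_CT is a hypothetical change table
ALTER TABLE cdc.dbo_MyTable_CT
    ADD __ChangeDateTime datetime2(7) NOT NULL
        CONSTRAINT DF_dbo_MyTable_CT__ChangeDateTime DEFAULT (SYSDATETIME());
```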
RThomas (3446 rep)
Feb 1, 2013, 06:08 PM • Last activity: Oct 11, 2022, 08:35 PM
3 votes
1 answer
427 views
sys.dm_exec_describe_first_result_set returns wrong data type
I have two SQL Server 2016 instances connected with a linked server. When I query sys.dm_exec_describe_first_result_set from the remote server, it returns the numeric data type for a table column instead of decimal.

First query (executed from the remote server):

select * from sys.dm_exec_describe_first_result_set (N'select top 1 * from LinkedServerFoo.DatabaseFoo.dbo.TargetTable', NULL, 0)

Second query (executed on the SQL Server itself):

select * from sys.dm_exec_describe_first_result_set (N'select top 1 * from DatabaseFoo.dbo.TargetTable', NULL, 0)

Result:

[screenshot of the two result sets]

This is an issue for me since I compare the data types based on the output of these queries. Any idea what the issue might be, why it happens, and how it can be solved?

* There is also the same issue with smalldatetime, where the sys view wrongly returns datetime.
Muflix (1099 rep)
Sep 30, 2022, 12:16 PM • Last activity: Sep 30, 2022, 01:45 PM
0 votes
1 answer
476 views
In which system table does Postgres store FK cardinality?
I'm trying to reverse engineer a set of PG databases into a data model. I've found everything I need to define all the sub-models, tables, columns and FK constraints, but can't find anywhere that details whether the relation is 0:1, 1:many, 1:0..1, etc. I've looked in:
pg_constraint
information_schema.check_constraint_routine_usage
information_schema.check_constraints
information_schema.constraint_column_usage
information_schema.constraint_table_usage
information_schema.domain_constraints
information_schema.referential_constraints
information_schema.table_constraints
information_schema.key_column_usage
but there's nothing (obvious) which details the cardinality. Does anyone know where this can be found?
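As far as I know, cardinality isn't stored as such anywhere; a sketch under that assumption is to infer it from the FK definition plus the NOT NULL and UNIQUE metadata on the referencing columns:

```sql
-- a sketch (assumption: cardinality must be inferred, it is not stored directly)
SELECT con.conname,
       con.conrelid::regclass   AS referencing_table,
       con.confrelid::regclass  AS referenced_table,
       bool_and(att.attnotnull) AS fk_cols_not_null,     -- false => the child side is optional (0..N / 0..1)
       EXISTS (SELECT 1
               FROM pg_constraint u
               WHERE u.conrelid = con.conrelid
                 AND u.contype IN ('p', 'u')
                 AND u.conkey @> con.conkey
                 AND u.conkey <@ con.conkey) AS fk_cols_unique   -- true => at most one child per parent
FROM pg_constraint con
JOIN pg_attribute att
  ON att.attrelid = con.conrelid
 AND att.attnum   = ANY (con.conkey)
WHERE con.contype = 'f'
GROUP BY con.oid, con.conname, con.conrelid, con.confrelid, con.conkey;
```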
khafka (79 rep)
Nov 26, 2019, 02:28 PM • Last activity: Sep 30, 2022, 06:01 AM
2 votes
1 answer
2000 views
DBCC CLONEDATABASE returns error: Cannot insert duplicate key row in object 'sys.syssingleobjrefs
I'm trying to run the following line of code:
DBCC CLONEDATABASE ([DBName],[DBName_Staging]) WITH VERIFY_CLONEDB
And I'm getting the following error: `Msg 2601, Level 14, State 1, Line 15 Cannot insert duplicate key row in object 'sys.syssingleobjrefs' with unique index 'clst'. The duplicate key value is (5, 51, 0).` I'm running on **Microsoft SQL Server 2016 (SP3) (KB5003279) - 13.0.6300.2 (X64)**. From what I've read, a possible solution is deleting the duplicate row from *model*. I know how to do it, I'm just not sure if it's safe, or if this is the best solution. Any suggestions? ---------------------------- Based on the query in this SQLSkills article, the "duplicate" objects in model are objects like queue_messages_99999999, syscommittab, filestream_tombstone_99999999, etc. I am trying to create copies of 30+ databases, so moving/removing the objects on the user-database side is complicated at best. Is it safe to remove these *system* objects?
Kenneth Fisher (24317 rep)
Jul 27, 2022, 10:31 PM • Last activity: Jul 28, 2022, 02:33 PM
0 votes
1 answer
1277 views
Select Column data, from sys.column info
Either this is impossible, or I can't write a good internet search. I have a query to get the table and column information for our database:

SELECT t.name [table], c.name [column]
FROM sys.tables t
LEFT JOIN sys.columns c on t.object_id = c.object_id

What I'd like to do is add in something like:

(Select AVG(LEN(c.name)) from t.name) [AVG LEN]

I'd obviously build in some checks to only run LEN on varchar, and regular AVG for numerics, but that part would be easy. It is getting SQL to actually treat the names correctly that I'm struggling with. For varchars, I'm aiming to get the min, avg, max, and mode lengths of the string values. I only listed the AVG to keep it simple. I'm also going to do some work with numeric values as well. This way, as I go to develop controls for these fields, I can better estimate the size to make the controls. I could do this column by column, but I'd like to be able to just run one query and get my tables and columns in one nice chart.
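A single static query can't do this directly, because the table and column names are themselves data; one common approach (an assumption on my part, not the only way) is to generate the per-column statements with dynamic SQL. A sketch for the varchar case, producing one result set per column:

```sql
-- a sketch (dynamic SQL is an assumed approach): one result set per (n)varchar column
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'SELECT ' + QUOTENAME(t.name, '''') + N' AS [table], '
             + QUOTENAME(c.name, '''') + N' AS [column], '
             + N'MIN(LEN(' + QUOTENAME(c.name) + N')) AS min_len, '
             + N'AVG(LEN(' + QUOTENAME(c.name) + N')) AS avg_len, '
             + N'MAX(LEN(' + QUOTENAME(c.name) + N')) AS max_len '
             + N'FROM ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name) + N';'
FROM sys.tables t
JOIN sys.columns c ON c.object_id = t.object_id
JOIN sys.types ty ON ty.user_type_id = c.user_type_id
WHERE ty.name IN (N'varchar', N'nvarchar');

EXEC sys.sp_executesql @sql;
```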
CervonMi (3 rep)
Feb 19, 2020, 03:11 AM • Last activity: Jul 3, 2022, 03:16 PM
3 votes
1 answer
337 views
How to decrypt the name of columns from view?
I have to deal with a third-party database. I am trying to get all columns used in a view, but the data types come back "encrypted", e.g. instead of getting int for NUMERO_SALARIE I got TNumSal. Why?
DECLARE @TableViewName NVARCHAR(128)
SET @TableViewName=N'DP_SALARIE'
SELECT b.name AS ColumnName, c.name AS DataType, b.max_length AS Length
FROM sys.all_objects a
INNER JOIN sys.all_columns b
ON a.object_id=b.object_id
INNER JOIN sys.types c
ON b.user_type_id=c.user_type_id
WHERE a.Name=@TableViewName
AND a.type IN ('U','V')
[screenshot of the query output]

Is there any way to decrypt them?
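It may be that nothing is actually encrypted here: the join on user_type_id returns the alias (user-defined) data type's name. A sketch under that assumption, which also reports the underlying system type via TYPE_NAME():

```sql
-- a sketch (assumption: TNumSal is an alias data type, not encryption)
DECLARE @TableViewName NVARCHAR(128) = N'DP_SALARIE';

SELECT b.name                      AS ColumnName,
       c.name                      AS DeclaredType,   -- alias type, e.g. TNumSal
       TYPE_NAME(b.system_type_id) AS BaseType,       -- underlying system type, e.g. int
       b.max_length                AS Length
FROM sys.all_objects a
INNER JOIN sys.all_columns b ON a.object_id = b.object_id
INNER JOIN sys.types c ON b.user_type_id = c.user_type_id
WHERE a.name = @TableViewName
  AND a.type IN ('U', 'V');
```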
TAHER El Mehdi (292 rep)
Jun 21, 2022, 07:53 AM • Last activity: Jun 21, 2022, 09:01 AM
15 votes
2 answers
25657 views
How to get the schema name of a table of type regclass in PostgreSQL?
In writing a function to test if a column col_name exists in a table _tbl, I'd like to extract the table's schema name, which is passed into the function as a regclass parameter (for security??).

CREATE OR REPLACE FUNCTION column_exists(_tbl regclass, col_name text)
  RETURNS bool AS
$func$
SELECT EXISTS (
   SELECT 1
   FROM   information_schema.columns
   WHERE  table_schema = get_schema($1)
   AND    table_name = get_table($1)
   AND    column_name = $2
   );
$func$ LANGUAGE sql;

So if the table name is 'staging.my_table'::regclass, I'd like to get staging from an imaginary function get_schema.

*Can I just implement this function with e.g. split_part(_tbl::text, '.', 1)?*

*In addition, is it guaranteed that the table name _tbl, when converted to text, will always have a schema name? (i.e. not omitting things such as public.)*

I'm not very familiar with the regclass type. I searched but couldn't find how to extract the schema name, and just wanted to ask first before re-inventing wheels.
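For what it's worth, a sketch that avoids parsing the text form altogether by looking the schema up in the catalogs (pg_class.relnamespace joined to pg_namespace):

```sql
-- a sketch: resolve the schema of a regclass via the catalogs instead of split_part()
SELECT n.nspname AS schema_name
FROM   pg_class c
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  c.oid = 'staging.my_table'::regclass;   -- example table name from the question
```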
tinlyx (3820 rep)
Mar 14, 2018, 02:04 AM • Last activity: Mar 2, 2022, 02:28 PM
1 vote
1 answer
77 views
Constantly querying system views [Oracle, MSSQL, PostgreSQL]
I'm a kind-of-experienced database developer who just got into the DBA world. I've been told that in both Oracle and MSSQL it is definitely a terrible idea to run queries such as the example below **constantly**, every, let's say, 5 seconds, since it might cause memory fragmentation.
SELECT  *
FROM    V_$SQL_MONITOR; --oracle
I've done some research, but I couldn't find any official (or unofficial, for what it's worth) documentation discouraging such a thing. Could someone experienced in the DBA world explain whether this is indeed a bad idea, what memory fragmentation is, and whether this concern also applies to PostgreSQL? Thank you in advance!
glezo (123 rep)
Feb 18, 2022, 01:12 PM • Last activity: Feb 18, 2022, 02:19 PM