
Are there any edge cases to consider before dropping a duplicate non-clustered index?

0 votes
3 answers
109 views
I am working on cleaning up and optimizing indexes. I ran Find-DbaDbDuplicateIndex in PowerShell against one of my SQL Server instances and identified 5 sets of "duplicate" indexes.

One is a non-clustered index with a single key column, no included columns, whose key column matches the key column of the clustered index. That non-clustered index gets a lot of reads, which puzzles me, since its only column is the PK. The only explanations I can think of are some odd query that reads just that column and nothing else on the table, or query hints forcing the non-clustered index, though I'd expect the latter to generate a ton of key lookups, which I do not see on that table.

The rest of my duplicates are pairs where a unique non-clustered index sits alongside an identical non-unique non-clustered index: no included columns on either, and the key columns are identical. Here I also see much higher reads on the non-unique index than on the unique one, though I can at least rationalize that as the constraint adding some overhead to the unique indexes.

Before I start dropping indexes, are there any red flags or edge cases I should look for in the workload that would suggest keeping the duplicate non-unique non-clustered indexes?
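For context, the kind of check I'm running before dropping anything is along these lines (a minimal sketch, not my exact query; dbo.MyTable is a placeholder name, and the counts come from the standard sys.dm_db_index_usage_stats DMV, which resets on instance restart):

    -- Sketch: compare reads vs. writes per index on one table before dropping a duplicate.
    -- dbo.MyTable is a placeholder; sys.dm_db_index_usage_stats resets on instance restart.
    SELECT
        OBJECT_NAME(i.object_id) AS table_name,
        i.name                   AS index_name,
        i.is_unique,
        ius.user_seeks + ius.user_scans + ius.user_lookups AS total_reads,
        ius.user_updates         AS total_writes,
        ius.last_user_seek,
        ius.last_user_scan
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS ius
           ON  ius.object_id   = i.object_id
           AND ius.index_id    = i.index_id
           AND ius.database_id = DB_ID()
    WHERE i.object_id = OBJECT_ID(N'dbo.MyTable')
    ORDER BY total_reads DESC;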
Asked by Wayne Cochran (35 rep)
Jan 3, 2025, 11:45 AM
Last activity: Jan 6, 2025, 12:12 PM