
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
374 views
Get current execution metrics from sys.dm_exec_sessions
I've got a pretty complex batch processing procedure that takes hours to run on real data. I've found that some CPU and I/O counters are exposed in the `master.sys.dm_exec_sessions` view, and so thought that maybe I can use them to measure which parts of the procedure take the most resources. However, it seems that repeatedly querying this view within a single procedure always returns the same values. I know I can trigger an update of these statistics by issuing the `GO` statement, but of course it is not possible to do that inside a procedure. Are there any other ways to read these counters that would make sure they are up to date?

MWE:

```
CREATE OR ALTER PROCEDURE dbo.MineBitcoins @Prefix INT
AS
BEGIN
    DECLARE @It INT = 0
    WHILE COALESCE(TRY_CAST(LEFT(HASHBYTES('SHA2_256', HASHBYTES('SHA2_256', CONVERT(VARCHAR, @It))), @Prefix) AS INT), 1) <> 0
    BEGIN
        PRINT @It
        SET @It = @It + 1
    END
    SELECT @It
END
GO

CREATE OR ALTER PROCEDURE dbo.MeasureTimeOfAStupidQuery @Prefix INT
AS
BEGIN
    SELECT cpu_time, reads, logical_reads, writes
    FROM master.sys.dm_exec_sessions
    WHERE session_id = @@SPID

    EXEC dbo.MineBitcoins @Prefix

    SELECT cpu_time, reads, logical_reads, writes
    FROM master.sys.dm_exec_sessions
    WHERE session_id = @@SPID
END
GO

EXEC dbo.MeasureTimeOfAStupidQuery @Prefix = 3
```

(It takes about 30 seconds on my test database instance.)
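A possible workaround, sketched here rather than taken from the original thread: `sys.dm_exec_requests` exposes `cpu_time`, `reads`, `logical_reads`, and `writes` for the *currently executing* request, and those columns do appear to update while a batch is still running, so sampling it before and after a section gives per-section deltas.

```
-- Sketch: sample the live request counters around the section being measured;
-- sys.dm_exec_requests reflects the in-flight request, unlike dm_exec_sessions.
DECLARE @cpu0 INT, @reads0 BIGINT;

SELECT @cpu0 = cpu_time, @reads0 = logical_reads
FROM sys.dm_exec_requests
WHERE session_id = @@SPID;

EXEC dbo.MineBitcoins @Prefix = 3;  -- the section being measured

SELECT cpu_time - @cpu0        AS section_cpu_ms,
       logical_reads - @reads0 AS section_logical_reads
FROM sys.dm_exec_requests
WHERE session_id = @@SPID;
```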
liori (289 rep)
Aug 25, 2021, 11:54 AM • Last activity: Jun 4, 2025, 02:03 AM
-1 votes
1 answer
32 views
Collect OS metrics in Idera
I use Idera to monitor database servers, but recently I noticed that some servers (not all of them) are not monitoring the disk part properly. The error I get is:

```lang-none
TITLE: Monitored SQL Server Properties
------------------------------
The connection test failed.
------------------------------
ADDITIONAL INFORMATION:
Error validating WMI connection:
------------------------------
Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) (mscorlib)
------------------------------
```

The tests I have executed are:

- Restart the winmgmt service
- Check the integrity of the WMI repository
- Modify the WMI rules in the firewall

It is important to mention that the Idera connection is made with a SQL Server user who has sysadmin permissions. Could you advise me about the permissions or configuration that could be missing, please?
Carolina (47 rep)
Mar 27, 2025, 07:24 PM • Last activity: Mar 27, 2025, 08:50 PM
1 vote
1 answer
620 views
Database tablespace space sizes and percent used
I am new to Oracle, and I am not quite understanding the size calculations shown in OEM under tablespaces. I know it should be simple, and it may well be. I need to monitor the tablespaces but am not sure what I am looking at. Should I monitor the "Available Space Used (%)" or the "Allocated Space Used (%)"? If I am running a script to alert when a threshold is met, which percentage should I use?
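A minimal sketch of a threshold script, assuming access to `DBA_TABLESPACE_USAGE_METRICS`: its `USED_PERCENT` is measured against the maximum possible size (counting autoextend headroom), which generally corresponds to OEM's autoextend-aware figure rather than the allocated-only one.

```
-- Sketch: alert when usage against the maximum (autoextend-aware) size crosses 90%.
-- USED_SPACE and TABLESPACE_SIZE are expressed in database blocks.
SELECT tablespace_name,
       ROUND(used_percent, 1) AS pct_of_max
FROM   dba_tablespace_usage_metrics
WHERE  used_percent > 90
ORDER  BY used_percent DESC;
```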
jmichaelwDBA (51 rep)
Dec 28, 2023, 04:41 PM • Last activity: Mar 21, 2025, 07:34 PM
2 votes
1 answer
301 views
Finding execution time of a psql query (without any connection latency/overhead)
How do I find the execution time of a query? I tried these methods:

Using pgbench:

```
pgbench -n -t 1 -f ./query.sql
```

got:

```lang-none
latency average = 9.787 ms
```

Querying pg_stat_statements, I got:

```lang-none
  total_exec_time   
--------------------
 12.242579000000001
(1 row)
```

---

Using `EXPLAIN ANALYZE`:

```lang-none
                                           QUERY PLAN                                           
------------------------------------------------------------------------------------------------
    (cost=0.00..0.01 rows=0 width=0) (actual time=0.182..0.182 rows=0 loops=1)
   ->  Result  (cost=0.00..0.01 rows=1 width=230) (actual time=0.034..0.034 rows=1 loops=1)
 Planning Time: 0.021 ms
 Execution Time: 0.195 ms
(4 rows)
```

---

Now, which of those gives me the actual execution time of the query? Or rather, how can I find out the actual execution time?
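For context, a hedged note on what each number measures: pgbench latency and psql's `\timing` include client round-trip overhead, `EXPLAIN (ANALYZE)` reports server-side planning and execution time only, and `pg_stat_statements` aggregates server-side execution time across all calls. A sketch of the last option, assuming the extension is installed (column names per PostgreSQL 13+):

```
-- Server-side execution time, averaged over all executions of each statement.
SELECT query,
       calls,
       total_exec_time,                   -- milliseconds, summed over all calls
       total_exec_time / calls AS avg_exec_time_ms
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
```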
Prasanna (21 rep)
Mar 13, 2025, 06:29 AM • Last activity: Mar 13, 2025, 11:15 AM
0 votes
1 answer
492 views
Azure SQL Server metrics: is 100% CPU percentage a blocker for all DB operations?
This is with Azure SQL Managed Instance. We have observed a case in our database where the CPU percentage is 100% for quite some time, but my applications are still able to hit and operate on the DB. Is 100% CPU usage not a blocker for DB operations?

I see 2 CPU metrics:

1) CPU percentage (MAX)
2) SQL instance CPU percentage (MAX)

What is the difference between them? In my case, CPU percentage is 100% but SQL instance CPU percentage is only 50%. Please share insights on these metrics and how to interpret them.
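A sketch for cross-checking from inside the instance, assuming a Managed Instance where `sys.server_resource_stats` is available: it records the instance's average CPU over short intervals and can be compared against the portal metrics.

```
-- Sketch (Azure SQL Managed Instance): recent CPU samples recorded by the service.
SELECT TOP (20)
       start_time,
       end_time,
       avg_cpu_percent
FROM master.sys.server_resource_stats
ORDER BY end_time DESC;
```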
Manoj (1 rep)
May 27, 2024, 01:29 AM • Last activity: Jan 27, 2025, 01:02 AM
-1 votes
1 answer
123 views
How do I measure SQL Database responsiveness?
I am load testing my web app, which is Azure App Service + Azure SQL Database. I am looking at the Azure dashboard metrics as we load test, and most measures (DTU percentage, CPU percentage, etc.) average around 0.2% to 0.5%, spiking to 2% at times. On Query Performance Insight the worst query is 1.080 seconds, one is at 0.860 seconds, and the rest are under 0.5 seconds. All of my web pages of interest have either 1 or 2 queries, although Entity Framework can break 1 query into several. Performance recommendations and automatic tuning have no suggestions. I'm 99.9% confident that I am indexing everything I should. So here are my questions:

1. What else should I measure? The biggie, I think, is how long a query waits to be executed. If a query takes 0.0001 seconds to run but it comes after the 1-second query, then it's 1.0001 seconds before it returns. Is this worth looking at? And if so, where/how do I find this?

2. I think my queries are reduced to what I truly need. The ones that take time are for a model that has 4 collections populated via a join table, and I need all 4 collections (I do not read collections I don't need). Is there anything I can do to speed up the two queries over 0.5 seconds? When I've looked at it in the past, a simple read-it-in query is no faster in a stored procedure (and doing that loses the Entity Framework advantages). Is there anything I should look at doing to speed up queries, outside of the query itself? I have fine-tuned the queries already, so I think I'm now in the "it is what it is" situation.
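On the wait-time question, one hedged option: Azure SQL Database enables Query Store by default, and its runtime stats separate total duration from CPU time, so a large gap between the two suggests waiting rather than work. A sketch using the standard Query Store views (durations are in microseconds):

```
-- Slowest queries by average duration, with execution counts, from Query Store.
SELECT TOP (10)
       qt.query_sql_text,
       rs.avg_duration / 1000.0 AS avg_ms,
       rs.max_duration / 1000.0 AS max_ms,
       rs.count_executions
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan AS p        ON p.plan_id = rs.plan_id
JOIN sys.query_store_query AS q       ON q.query_id = p.query_id
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
ORDER BY rs.avg_duration DESC;
```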
David Thielen (189 rep)
Jun 25, 2024, 06:03 PM • Last activity: Jan 6, 2025, 10:47 AM
-3 votes
1 answer
161 views
SQL Server STATISTICS IO shows a negative number for logical reads
I am trying to understand and analyse a CPU-intensive SELECT query in SQL Server; however, I see something strange: a negative number for logical reads. I wonder if it is a mathematical overflow issue or a meaningful number. I'm using:

```lang-none
Microsoft SQL Server 2019 (RTM-CU29-GDR) (KB5046860) - 15.0.4410.1 (X64)
```

I used Statistics Parser. Here is the raw output:

~~~lang-none
Table 'T1'. Scan count 26722, logical reads -571819170, physical reads 194, page server reads 0, read-ahead reads 13211, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
~~~
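A hedged interpretation, not confirmed by the source: if the counter is accumulated in a signed 32-bit integer, a sufficiently large count wraps negative, and the true value would be the reported one plus 2^32. A quick sanity check:

```
-- Sketch: recover the probable true count if a signed 32-bit counter wrapped once.
SELECT CAST(4294967296 AS BIGINT) - 571819170 AS probable_logical_reads;
-- ~3,723,148,126 page accesses, i.e. roughly 28 TB of 8 KB pages touched.
```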
Hossein POURAKBAR (117 rep)
Dec 23, 2024, 09:38 AM • Last activity: Dec 24, 2024, 09:16 AM
0 votes
1 answers
68 views
What are good metrics for determining the usefullness/impact of an index in a SQL server database?
**Background**

I'm currently reviewing a database where someone applied indexes to almost everything. Since we are now experiencing performance issues related to updating records, I'm trying to determine which indexes should be kept and which should be removed.

Note: I do not have access to the source code that makes requests to the database, so I do not have a complete picture of all the requests being made.

**Sample Data**

Using the query below, found in the first answer to the post List of all index & index columns in SQL Server DB:

```
USE UAT_PLUS
GO

SELECT TableName  = t.name
     , IndexName  = ind.name
     , IndexId    = ind.index_id
     , ColumnId   = ic.index_column_id
     , ColumnName = col.name
     --, ind.*
     --, ic.*
     --, col.*
FROM sys.indexes ind
INNER JOIN sys.index_columns ic
    ON ind.object_id = ic.object_id AND ind.index_id = ic.index_id
INNER JOIN sys.columns col
    ON ic.object_id = col.object_id AND ic.column_id = col.column_id
INNER JOIN sys.tables t
    ON ind.object_id = t.object_id
WHERE ind.is_primary_key = 0
  AND ind.is_unique = 0
  AND ind.is_unique_constraint = 0
  AND t.is_ms_shipped = 0
ORDER BY t.name, ind.name, ind.index_id, ic.is_included_column, ic.key_ordinal;
```

I get results similar to the following (this is an actual table in the database, just with some values generalized):

TableName | IndexName | IndexId | ColumnId | ColumnName
-- | -- | -- | -- | --
TABLE_A | Index_1 | 183 | 1 | Col_1
TABLE_A | Index_1 | 183 | 2 | Col_2
TABLE_A | Index_1 | 183 | 3 | Col_3
TABLE_A | Index_1 | 183 | 4 | Col_4
TABLE_A | Index_1 | 183 | 5 | Col_5
TABLE_A | Index_2 | 184 | 1 | Col_1
TABLE_A | Index_2 | 184 | 2 | Col_2
TABLE_A | Index_2 | 184 | 3 | Col_3
TABLE_A | Index_2 | 184 | 4 | Col_4
TABLE_A | Index_2 | 184 | 5 | Col_6
TABLE_A | Index_2 | 184 | 6 | Col_5
TABLE_A | Index_2 | 184 | 7 | Col_7
TABLE_A | Index_3 | 4 | 1 | Col_2
TABLE_A | Index_3 | 4 | 2 | Col_4
TABLE_A | Index_4 | 3 | 1 | Col_2
TABLE_A | Index_4 | 3 | 2 | Col_3
TABLE_A | Index_5 | 2 | 1 | Col_2
TABLE_A | Index_6 | 9 | 1 | Col_4
TABLE_A | Index_7 | 10 | 1 | Col_8
TABLE_A | Index_8 | 11 | 1 | Col_9
TABLE_A | Index_9 | 6 | 1 | Col_2
TABLE_A | Index_9 | 6 | 2 | Col_10
TABLE_A | Index_9 | 6 | 3 | Col_3
TABLE_A | Index_9 | 6 | 4 | Col_4
TABLE_A | Index_10 | 5 | 1 | Col_2
TABLE_A | Index_10 | 5 | 2 | Col_4
TABLE_A | Index_10 | 5 | 3 | Col_3
TABLE_A | Index_11 | 163 | 1 | Col_3
TABLE_A | Index_11 | 163 | 2 | Col_10
TABLE_A | Index_12 | 8 | 1 | Col_11
TABLE_A | Index_13 | 15 | 1 | Col_12
TABLE_A | Index_14 | 18 | 1 | Col_13
TABLE_A | Index_15 | 19 | 1 | Col_14
TABLE_A | Index_16 | 14 | 1 | Col_15
TABLE_A | Index_17 | 13 | 1 | Col_16
TABLE_A | Index_18 | 12 | 1 | Col_17
TABLE_A | Index_19 | 17 | 1 | Col_18
TABLE_A | Index_20 | 16 | 1 | Col_19
TABLE_A | Index_21 | 182 | 1 | Col_1
TABLE_A | Index_22 | 7 | 1 | Col_3
TABLE_A | Index_23 | 181 | 1 | Col_4
TABLE_A | Index_23 | 181 | 2 | Col_3
TABLE_A | Index_23 | 181 | 3 | Col_1
TABLE_A | Index_23 | 181 | 4 | Col_2
TABLE_A | Index_24 | 178 | 1 | Col_1
TABLE_A | Index_24 | 178 | 2 | Col_3
TABLE_A | Index_24 | 178 | 3 | Col_2
TABLE_A | Index_24 | 178 | 4 | Col_4
TABLE_A | Index_25 | 180 | 1 | Col_3
TABLE_A | Index_25 | 180 | 2 | Col_1
TABLE_A | Index_25 | 180 | 3 | Col_2
TABLE_A | Index_25 | 180 | 4 | Col_6
TABLE_A | Index_25 | 180 | 5 | Col_4
TABLE_A | Index_25 | 180 | 6 | Col_7

**Question**

What are good metrics and tools that I can use to determine which indexes are useful and which are just causing bloat?
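One commonly cited starting point, sketched here (not from the original post): `sys.dm_db_index_usage_stats` counts reads and writes per index since the last service restart, so indexes with heavy `user_updates` but few or no reads are removal candidates. The counters reset on restart (and indexes never touched at all won't appear), so sample over a representative period.

```
-- Reads vs. writes per nonclustered index in the current database.
SELECT OBJECT_NAME(s.object_id)                        AS table_name,
       i.name                                          AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups    AS reads,
       s.user_updates                                  AS writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
    ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND i.is_primary_key = 0
  AND i.is_unique = 0
ORDER BY writes DESC, reads ASC;
```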
Tolure (111 rep)
Nov 23, 2023, 05:36 PM • Last activity: Nov 24, 2023, 01:25 AM
0 votes
1 answer
352 views
Profile Read/Write Traffic to SQL Server
I am tasked with creating a new central database, with an all-new ERD/data schema, to replace a poorly designed database used by a range of applications. To assist with both design and capacity planning, I want to profile all the read/write activity of the existing SQL Server database (preferably in production) over the space of a day or a month. In particular I want:

1. An identifier of the application which initiated the read or write. That strategy is already discussed in this post, so it is the next two points I mostly need help with.

2. A report of what "read-oriented" queries/sprocs were executed, in what frequency distribution, and approximately what amount of data was extracted (in KB or MB) for each query. This report presumably would focus on "query hashes" for ad hoc queries, given that raw app/user queries are typically converted to an optimal form (and assigned a query hash) before being executed.

3. A report of what "write-oriented" actions were taken (DML or sprocs), along with frequency distribution and data quantities, as before.

As stated, I've already found a reasonable-ish answer to #1 above, so it is mostly #2 and #3 that I need. Incidentally, I am interested in answers that include network tooling that can help answer these questions. In other words, if SQL Server is deficient at answering these questions, perhaps a network monitor can sleuth the answers? I would also be curious whether other, related cloud databases have solutions for these questions. For example, is this easier to answer if I'm instead using Azure SQL or Cosmos DB?
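A minimal Extended Events sketch, assuming permission to create server-level sessions (the session name and file path are illustrative): it captures completed batches with the client app name so reads and writes can later be aggregated per application from the `.xel` file.

```
-- Sketch: capture completed batches for offline aggregation.
-- sql_batch_completed carries duration, logical_reads and writes as event fields.
CREATE EVENT SESSION [profile_rw_traffic] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.client_app_name, sqlserver.database_name)
)
ADD TARGET package0.event_file (SET filename = N'profile_rw_traffic.xel');

ALTER EVENT SESSION [profile_rw_traffic] ON SERVER STATE = START;
```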
Brent Arias (191 rep)
Oct 26, 2022, 02:27 PM • Last activity: Oct 26, 2022, 05:36 PM
2 votes
1 answer
793 views
Report Uptime Metrics for SQL Server by Database/Application
I have a requirement to report monthly metrics on SQL Server uptime, by application/database. This would be at the cluster level, i.e., if a secondary replica went offline but a primary was still available to process transactions, uptime would still be considered 100%. I've looked at various tools that might serve this purpose, and a few come close, but none of them seem to capture anything more than whether the SQL Server service is online and accepting connections. They also fail to aggregate these metrics at the cluster/AG level, meaning those uptime reports would take a hit if a secondary replica were to go offline. For example, let's say a database goes offline or a log file fills up, and transactions are unable to process against a single database. Those tools would say that SQL Server is up, but I would still have people saying it was a database issue. Thus, these metrics would need to reflect that SQL was not fully up at that time. The best idea I've come up with at this point is to create a SQL Agent job that inserts a record into a canary table in each database, once a minute, then at the end of the month to query that table and divide the previous month's row count by the expected row count. I figured there was no better way to prove a database was actually available than to try to insert a row. I already have the above solution developed, tested, and working, but I'm curious if anyone knows of a better way to do this, including any good tools or DMVs I may have overlooked that I could use to extrapolate end-user availability metrics for all databases on a SQL instance.
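A hedged sketch of the canary pattern the question describes (table and column names here are hypothetical, not the asker's): an Agent job inserts one heartbeat row per minute, and month-end availability is observed rows divided by expected minutes.

```
-- Hypothetical canary table; an Agent job inserts one row per minute.
CREATE TABLE dbo.Canary
(
    InsertedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Month-end availability: observed heartbeats / expected minutes.
DECLARE @MonthStart DATE = DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()) - 1, 0);
DECLARE @MonthEnd   DATE = DATEADD(MONTH, 1, @MonthStart);

SELECT 100.0 * COUNT(*) / DATEDIFF(MINUTE, @MonthStart, @MonthEnd) AS uptime_pct
FROM dbo.Canary
WHERE InsertedAt >= @MonthStart
  AND InsertedAt <  @MonthEnd;
```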
Brendan McCaffrey (3444 rep)
Mar 4, 2022, 10:14 PM • Last activity: Mar 5, 2022, 12:22 AM
0 votes
1 answer
3119 views
T-SQL to find memory, CPU, and IO utilization
I want to identify resource (CPU, memory, and IO) usage by a database on a SQL Server 2005 or later server, over a week or a month. Can anyone please help with this?
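A hedged starting point for the I/O dimension: `sys.dm_io_virtual_file_stats` accumulates bytes read and written per database file since instance start, so snapshotting it daily and diffing builds a week- or month-long picture. A sketch:

```
-- Cumulative I/O per database since the instance started; snapshot and diff
-- at intervals to measure a specific window.
SELECT DB_NAME(database_id)                AS database_name,
       SUM(num_of_bytes_read)    / 1048576 AS mb_read,
       SUM(num_of_bytes_written) / 1048576 AS mb_written,
       SUM(io_stall)                       AS io_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
GROUP BY database_id
ORDER BY io_stall_ms DESC;
```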
Manjot (1213 rep)
May 27, 2014, 03:05 AM • Last activity: Aug 6, 2021, 08:50 AM
1 vote
1 answer
3064 views
How to determine queries per hour executed by a particular MySQL user?
I am trying to implement a user-based quota system in MySQL to limit `MAX_QUERIES_PER_HOUR` for a particular user. I have the option to set this limit via `MAX_UPDATES_PER_HOUR 10`. But my problem is that I want to know how many queries a user has executed this hour. I followed this thread, but it covers global statistics for the DBA; I want this information for every user of my database. I am sure that MySQL stores this information somewhere, but after a lot of scrutinizing of databases like `information_schema`, `performance_schema`, and `mysql`, I can't figure out how it can be retrieved.
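A hedged sketch using the Performance Schema (assumes it is enabled): the statement summary tables accumulate per-user counts since server start, so sampling at the top of each hour and diffing approximates a per-hour figure.

```
-- Cumulative statement counts per user; sample hourly and diff for per-hour usage.
SELECT `user`,
       SUM(count_star) AS statements_executed
FROM performance_schema.events_statements_summary_by_user_by_event_name
GROUP BY `user`
ORDER BY statements_executed DESC;
```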
salmanarshad1999 (11 rep)
Nov 23, 2018, 05:57 PM • Last activity: Aug 25, 2020, 09:14 AM
11 votes
2 answers
2213 views
Switching to RCSI
The company I work for currently uses SQL Server databases (usually the latest Enterprise version) for a product we develop. I would describe it as an OLTP database that is roughly equally write- and read-intensive, with a lot of time-critical apps. In addition, a lot of reporting and graphical data is displayed from information in this same OLTP database (a separate problem), against many of the same tables that are being read from and written to at a frequent rate. We commonly experience problems where blocking occurs, and it usually ends up slowing down time-critical apps or even causing issues because of deadlocks in these apps. The common solution often seems to be to throw nolock hints at the problematic queries. I honestly hate this solution; I've felt for a long time that this is the wrong way to tackle the problem, and everything I've read leads me to the same conclusion. I've been trying to convince my team for a while that RCSI is something we could definitely benefit from, especially given our type of database. They seem to think it is a big risk and often put it off because of the risk factor, but we continue to run into performance problems where we just throw nolock hints at them.

* How can I help prove that our database can greatly benefit from using RCSI?
* Are there performance tests I can run, based on an actual production database that we convert to RCSI in a test environment?

I'm looking for a good way to show concrete metrics to our team to finally convince them we should move to this methodology.
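One hedged way to produce those metrics on a restored test copy (the database name below is illustrative): flip the option, replay a representative workload, and compare blocking waits and version-store overhead before and after.

```
-- On a restored test copy only; ROLLBACK IMMEDIATE kicks out other connections.
ALTER DATABASE [MyAppDb] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- While replaying the workload, watch the tempdb version-store cost RCSI adds:
SELECT cntr_value AS version_store_kb
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Version Store Size (KB)';

-- And compare lock waits before/after (values are cumulative since restart):
SELECT wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK%'
ORDER BY wait_time_ms DESC;
```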
Cole W (251 rep)
Feb 10, 2017, 08:18 PM • Last activity: May 7, 2020, 08:08 PM
1 vote
2 answers
279 views
Decision concerning RAM limit
What steps would you take to be sure that Standard Edition, with its 128 GB RAM limit, is enough for your production workload? My subjective feeling is that we are somehow on the edge, and I am looking for strong arguments to make the right decision for the future.

We are now running Evaluation Edition (so in fact Enterprise). That gave me the space to test the influence of having the RAM limit (Max Server Memory) set to 128 GB and more (in fact, set to 169 GB). Let's call the states *before* (Max Server Memory = 128 GB) and *after* (Max Server Memory = 169 GB). While testing, I was collecting wait stats and several memory-based counters.

In general, wait statistics didn't change much between before and after the RAM increase, not even in the buffer latch and memory categories of wait types. The memory counters provide more interesting values, but are they strong enough indicators of a lack of memory?

**Memory Grants Pending:** stays at 0, before and after.

**Total & Target Server Memory:** keep close to each other, before and after; no spikes.

**System memory state:** before, it's mostly in the "Physical memory state is steady" state; after, it goes to the "Available physical memory is high" state, not surprisingly.

**Page life expectancy:** varies a lot over time (before and after). Before, it sometimes goes under 300 but averages more; after, it still sometimes goes under 300, but less often.

**Batch Requests/sec & SQL Compilations/sec:** the ratio between these two counters didn't change (before/after), so there seems to be enough space for plans in the buffer; unfortunately, the ratio is still higher than 10%.
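A hedged sketch for pulling the counters discussed above straight from the DMVs, so the before/after comparison can be scripted rather than eyeballed:

```
-- Point-in-time values of the memory counters mentioned in the question.
SELECT RTRIM(object_name)  AS object_name,
       RTRIM(counter_name) AS counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE (object_name LIKE '%Buffer Manager%' AND counter_name = 'Page life expectancy')
   OR (object_name LIKE '%Memory Manager%' AND counter_name IN
       ('Memory Grants Pending', 'Total Server Memory (KB)', 'Target Server Memory (KB)'));
```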
jericzech (977 rep)
Mar 15, 2018, 02:50 PM • Last activity: Mar 12, 2020, 10:46 AM
5 votes
1 answer
135 views
Performance Metrics After Enabling Auto-Stats
### Short Version

After enabling AUTO_CREATE_STATISTICS & AUTO_UPDATE_STATISTICS_ASYNC on a database, what performance metric will best display the changes?

### Long Version

We just realized that the database for a legacy system has never had AUTO_CREATE_STATISTICS & AUTO_UPDATE_STATISTICS_ASYNC turned on (AUTO_UPDATE_STATISTICS was turned on). As you would expect, after turning those on, our applications have performed much better. Response times are significantly better, and the users are not waiting for the screen to refresh or pages to load.

The database is well indexed, and we had already manually created stats where we thought they were needed. Immediately after enabling, I checked sys.stats (joined on sys.stats_columns & sys.objects) and it only needed to create stats on three objects. (I have these logged in case quantifying these changes lies with them.)

I capture a lot of performance data on this server, and I have looked at that data from before the change and after, but I have yet to pinpoint where the actual gains are. I have other similar systems where auto-stats are still turned off for that database. Those systems would also benefit from enabling them, but I am waiting in case there is some metric that I do not yet have, so I can add it before making the changes.

### My real question

Aside from the user's experience, how do I quantify the improvements made by changing the auto-stats settings on a system?

SQL Server 2008 R2 on Windows Server 2008.
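A hedged sketch for inventorying what the change actually produced: auto-created statistics are flagged in `sys.stats` (and are typically named `_WA_Sys_...`), so listing them with their last update date shows which objects the optimizer decided to cover.

```
-- Auto-created statistics and when each was last refreshed.
SELECT OBJECT_NAME(s.object_id)            AS table_name,
       s.name                              AS stats_name,
       s.auto_created,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
JOIN sys.objects AS o ON o.object_id = s.object_id
WHERE o.is_ms_shipped = 0
  AND s.auto_created = 1
ORDER BY last_updated DESC;
```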
DBADon (325 rep)
Aug 28, 2019, 05:57 PM • Last activity: Aug 29, 2019, 01:37 AM
15 votes
1 answer
382 views
Metrics to prove or disprove the need for additional DBAs?
We currently have 2 DBAs, me and one other guy, and we both feel the need for an additional resource (or two). I have had the conversation with my direct manager on this topic a couple of different times, but am having difficulty getting the notion sold. Most recently, our manager shared with us that it is really good to come up with our "soft values" list of things that we want or feel we need to be doing, but what will really help is to come up with "hard values" showing dollar savings.

Could you help me understand, first of all, whether our perceived need is legitimate, and if so, how do we go about coming up with this "hard values" list? I feel like we could come up with those things that we need to do in order to save the company money, but that could just get put at the front of our priorities list, and put that much more stress on us in the process.

To give you a high-level look at our environment, here are the things that I am looking at in terms of what we are providing support for:

> - Production servers – 30+
> - Production databases – 200+
> - SQL versions – 2008/2008R2/2012, looking into 2014 now, and 2016 later in the year
> - Applications (3rd party and home grown) – 20+
> - Application teams supported – 6
> - Virtualization – 75% virtualized, 25% physical
> - Clusters – 3, + more planned
> - Replication – 1 distribution server, 2 subscription servers, 24 subscriptions, 6 publication servers, 12 publications
> - Log shipping – 8 primaries + more planned, 4 secondaries + more planned, 49 log-shipped databases + more planned
> - Availability Groups – none at the moment, but exploring the possibility
> - Average application upgrades/installs per year that drive version change or DBA resources – 2-3
> - Patching (SPs, CUs) – non-existent at the moment unless an issue comes up
> - Cumulative database size – 14TB+
> - Reporting servers – one scale-out deployment consisting of 2 servers; neither of us is well versed in SSRS
> - Analysis Services – two servers; neither of us is well versed in SSAS

What metrics should we use to prove or disprove the need for additional DBAs? My initial intention was demonstrating our current capacity vs. our planned projects and repeating tasks, but we need to demonstrate the ROI on hiring additional resources.
Jason Davis (193 rep)
Jan 27, 2016, 07:15 PM • Last activity: Jun 27, 2019, 09:49 AM
1 vote
1 answer
74 views
Pages Input per Second Spiking Daily
I'm still learning a lot about SQL Server and am pretty green, but doing the best I can as an accidental DBA. We have recently purchased a SQL monitoring software package, and after adjusting the alerting thresholds so I'm not bombarded with false-positive warnings, I noticed that pages input per second spikes every day at exactly the same time: 6:20:30 PM. The only variation in time over the last week was just last night, when it alerted at 6:19:39 PM, very, very close to the same time! I have checked the server job schedules and it appears there are no jobs scheduled around that time. The earliest job we have kicking off is at 9 PM daily, so that's not it. Can anyone tell me what I can look at (or for) that will help me find what is causing this to happen at exactly the same time every day? Thanks much!
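A hedged first check from the SQL side (pages input/sec is an OS-level memory counter, so the cause may also live outside SQL Server, e.g. a Windows scheduled task or antivirus scan): look for Agent job runs that cluster around the spike. In `msdb`, `run_time` is stored as an `HHMMSS` integer.

```
-- Any Agent job outcomes logged between 18:15 and 18:25?
-- step_id = 0 is the overall job-outcome row; run_time is the job's start time.
SELECT j.name, h.run_date, h.run_time, h.run_duration
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE h.step_id = 0
  AND h.run_time BETWEEN 181500 AND 182500
ORDER BY h.run_date DESC;
```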
Isaac A. (11 rep)
Nov 14, 2018, 01:27 PM • Last activity: Nov 14, 2018, 02:05 PM
0 votes
2 answers
582 views
Log flushes per second appears low
I've newly taken over administration of a system with about 25 databases. Using Redgate Monitor to chart some metrics, I see that each day, only one database shows values for log flushes/second that look like a "normal" chart: a zig-zag line throughout the day. Redgate explains the metric: "A log flush occurs when a transaction is committed and data is written from the log cache to the physical transaction log file." All well and good: work is happening and data is getting flushed to disk as it commits.

My question is, why do all the other databases fail to register anything much above the x-axis at all? Redgate also explains: "Log flushes per second should generally correlate with the number of transactions per second." When I view transactions per second, plenty of databases show zig-zag chart lines, so what's going on that only one of them shows activity for log flushes? Does this mean all the other databases are failing to commit? All user databases are using the full recovery model.

(I also note that between 2 AM and 5 AM there are huge spikes in log flushes for almost all databases; I have still to investigate the cause of these.)
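A hedged way to double-check the tool's numbers straight from the DMVs: the per-database `Log Flushes/sec` counter is exposed as a cumulative value, so sampling it twice and diffing gives the actual rate.

```
-- Cumulative log-flush counts per database; sample twice and diff for a rate.
SELECT RTRIM(instance_name) AS database_name,
       cntr_value           AS log_flushes_cumulative
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Log Flushes/sec'
  AND object_name LIKE '%:Databases%'
ORDER BY cntr_value DESC;
```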
youcantryreachingme (1655 rep)
Oct 16, 2018, 12:17 AM • Last activity: Oct 16, 2018, 12:06 PM
0 votes
1 answer
1549 views
DBA_TABLESPACE_USAGE_METRICS is empty
For one of my 11.2g databases, `select * from dba_tablespace_usage_metrics` *returns no rows*. Why is that, and how can I fix it? Thank you in advance.
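A hedged fallback while investigating why the view is empty: compute usage directly from the base views `dba_data_files` and `dba_free_space`, which do not depend on the metrics collection being populated.

```
-- Percent used per tablespace, computed from the base views.
SELECT df.tablespace_name,
       ROUND(100 * (df.bytes - NVL(fs.bytes, 0)) / df.bytes, 1) AS pct_used
FROM (SELECT tablespace_name, SUM(bytes) AS bytes
      FROM dba_data_files GROUP BY tablespace_name) df
LEFT JOIN (SELECT tablespace_name, SUM(bytes) AS bytes
           FROM dba_free_space GROUP BY tablespace_name) fs
  ON fs.tablespace_name = df.tablespace_name
ORDER BY pct_used DESC;
```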
Jeff Bauersfeld (335 rep)
Sep 11, 2017, 03:30 PM • Last activity: Sep 11, 2017, 04:10 PM
3 votes
1 answer
68 views
Easy way to get DB Measurements?
My senior management would like to have some counts of database objects (databases, tables, rows, stored procedures, etc.) to show complexity to the executive committee. Does anyone know of a quick way to get this information? I'm going after system tables to get the info, with some use of sp_MSforeachdb, but I'm hoping maybe someone has a query stored somewhere they can share. Of course it's needed by tomorrow and it's already 3:43 PM. We're using Microsoft SQL Server 2008 R2. And if anyone has ever been in this boat before, is there anything else your management wanted to see? TIA
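A hedged sketch along the lines the question describes (`sp_MSforeachdb` is undocumented and has been known to skip databases, so treat the result as approximate):

```
-- Object counts per database, by type; '?' is substituted with each database name.
EXEC sp_MSforeachdb N'
    SELECT ''?'' AS database_name, type_desc, COUNT(*) AS object_count
    FROM [?].sys.objects
    WHERE is_ms_shipped = 0
    GROUP BY type_desc;';
```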
Kellyleia (31 rep)
May 10, 2017, 07:43 PM • Last activity: Aug 12, 2017, 02:45 AM