
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
153 views
Log to disk all queries that waited for a lock over a period of time
I manage a MariaDB 10.4.17 server and I'm having issues with one table: seemingly at random I get lots of queries "waiting for metadata lock", which makes my entire database unusable. I usually have to restart the server to get the database running again. I'm looking for a way to consistently log to disk all the queries that could be causing the problem, without affecting performance too much. I'd like something like this:

```
watch -n 0.5 'mysqladmin -u root -ppassword "processlist"' > log.txt
```

But I don't know how to order or filter by state. Anyway, I'm open to any ideas. I'm looking for something I can review afterwards to see what happened, because when the issue occurs I just want to restart the DB to get back online ASAP and don't have time to dig into the root cause.
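One possible way to capture only the blocked statements (a sketch of an approach, not something from the original question): poll `information_schema.PROCESSLIST` and keep just the rows whose state mentions the metadata lock, for example from the `watch` loop above or a cron job:

```
-- list only sessions currently waiting on a metadata lock, longest waits first
SELECT ID, USER, DB, TIME, STATE, INFO
FROM   information_schema.PROCESSLIST
WHERE  STATE LIKE '%metadata lock%'
ORDER  BY TIME DESC;
```

Appending that output to a file every few seconds keeps a history to review after a restart, without the overhead of the general query log.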
Freedo (208 rep)
Jan 17, 2021, 10:19 AM • Last activity: Jul 18, 2025, 11:10 PM
0 votes
2 answers
237 views
Separate tables for logging?
I would like to log new products, updates to products and deletions of products (and the same for other schemata: customers, locations, etc.). Would it be best to have a separate table (e.g. ProductLog, CustomerLog) for this, so that I can have a product foreign key as a field in ProductLog, a customer foreign key in CustomerLog, etc.? Or should I use one table to avoid essentially doubling the number of tables?
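For illustration, a sketch of the single-table alternative (table and column names are hypothetical); the main trade-off is that a shared table cannot carry a real foreign key to each entity table:

```
CREATE TABLE ChangeLog (
    LogID      BIGINT      NOT NULL PRIMARY KEY,  -- or an auto-increment / identity column
    EntityType VARCHAR(50) NOT NULL,              -- 'Product', 'Customer', 'Location', ...
    EntityID   BIGINT      NOT NULL,              -- key in the matching entity table; cannot be a real FK
    Action     VARCHAR(10) NOT NULL,              -- 'INSERT', 'UPDATE', 'DELETE'
    ChangedAt  TIMESTAMP   NOT NULL
);
```

Per-entity tables (ProductLog, CustomerLog, ...) keep proper foreign keys and narrower rows at the cost of more tables; the shared table keeps the schema small but pushes referential integrity into the application.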
Basil (131 rep)
Jan 27, 2023, 03:29 PM • Last activity: Jun 4, 2025, 06:03 PM
1 vote
2 answers
2854 views
How can I force the MySQL client to output to stdout or a log file when importing a SQL file?
Let's say I am importing a small SQL file using the MySQL client like so:

```
mysql < 10k-file.sql
```

The mysql client does not output anything. How can I force it to show how many rows are affected by each query, errors, and so on?
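One option worth trying (a sketch, assuming the stock mysql command-line client): repeat the --verbose flag so each statement is echoed together with its result summary, and enable warnings:

```
mysql -vvv --show-warnings < 10k-file.sql > import.log 2>&1
```

-v can be given up to three times; at -vvv the client prints each statement along with its row count and timing.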
bteo (111 rep)
Jun 24, 2020, 08:33 AM • Last activity: Jun 3, 2025, 09:06 AM
1 vote
1 answer
237 views
MongoDB as a log storage. Choosing shard key
I'm designing a log storage system based on MongoDB. I want to shard a log collection to increase ingestion and capacity (distribute writes to several machines) while still allowing fast search. I should be able to increase ingestion by adding more nodes to the cluster. My collection has the following fields:

- **Subsystem** - string, name of the application, e.g. "SystemA", "SystemB". ~100 unique values.
- **Tenant** - string, the name of the deployment, used to separate logs from different application deployments / environments, e.g. "South TEST", "North DEV", "South PROD", "North PROD". ~20 unique values.
- **Date** - timestamp.
- **User** - string.
- **SessionId** - GUID, logically groups several related log records.
- **Data** - BLOB, contains zipped data. Average size = 2 KB, maximum = 8 MB.
- **Context** - array of key/value pairs. Both key and value are strings. It's used to store additional metadata associated with the event.

The search could be performed by any combination of the fields Subsystem, Date, User and Context. Tenant will almost always be specified.

The question is: **what shard key and sharding strategy would be better in this case?**

My suggestions: the simplest option is to shard by Tenant, but it will cause highly uneven data distribution, because PROD environments generate far more logs than DEV. "Tenant + Subsystem" seems better, but there are still subsystems that generate far more logs than others; also, Subsystem is not mandatory, so a user can omit it during search and the query will be broadcast. "SessionId" will give even data distribution, but search requests will be broadcast to all nodes.
Philipp Bocharov (21 rep)
Jul 3, 2018, 12:13 PM • Last activity: May 27, 2025, 12:11 PM
2 votes
2 answers
128 views
PostgreSQL: Finding last executed statement from an "idle in transaction timeout"
Is there any way to log or find the last executed statement from a session that was terminated because of `idle in transaction timeout`? We only have slow-statement logging and that did not capture it, and we don't want to enable all-statement logging either, as that would have a bad impact on performance.
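Once the backend has been terminated the statement is gone from the server's views, so as a sketch of a low-overhead alternative (an assumption, not something from the question): poll pg_stat_activity periodically, where `query` still shows the last statement of a session sitting idle in a transaction:

```
-- sessions currently idle inside a transaction, with their last statement
SELECT pid, usename, application_name, state_change, query
FROM   pg_stat_activity
WHERE  state = 'idle in transaction';
```

Logging this output every minute or so usually identifies the offender before the timeout kills the session.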
goodfella (595 rep)
Apr 30, 2025, 03:45 AM • Last activity: May 2, 2025, 04:12 AM
1 vote
4 answers
1760 views
PostgreSQL logging: Statements splits into multiple lines
In PostgreSQL v10 and above, I activated the logging of statements, using the extension pg_stat_statements. My configuration:

```
logging_collector = on
log_line_prefix = '%t [%p]: [%l-1] db=%d,user=%u,app=%a,client=%h '
log_destination = 'stderr,syslog'
log_statement = all
```

If I execute a simple query:

```
postgres=# select current_timestamp;
```

the log shows the prefix and the statement, something like this:

```
Line 1: (prefix) statement: select current_timestamp;
```

However, if I have a query that is split across multiple lines, for example:

```
select
current_timestamp;
```

the log shows two lines:

```
Line 1: (prefix) select
Line 2: current_timestamp;
```

The "current_timestamp" line is isolated: it doesn't have the prefix, and there is no way I can match it with the "select" part. How can I configure PostgreSQL so that multi-line statements are shown on a single line in the log? Like this:

```
Line 1: (prefix) statement: select current_timestamp;
```

I've tested changing to csvlog and got the same result.

Why do I need this? I send the logs to a central database for auditing purposes, and they are exposed via Kibana dashboards. With several log lines per statement, especially when the lines after the first one don't carry the prefix, it is hard to find the full statement. Thank you.
dcop7 (29 rep)
Mar 22, 2022, 02:47 PM • Last activity: Apr 19, 2025, 08:06 PM
0 votes
1 answer
119 views
How can I make CDC stop capturing a seemingly infinite amount of agent job history?
My most important production server uses Change Data Capture (CDC) a lot, maybe for about 25 tables. The relevant agent job shows well over 1,000 steps as in progress. Because they are in progress, they do not respect the option in SQL Server Agent's GUI called "Maximum job history rows per job". They do respect the other setting in that same GUI, "Maximum job history log size (in rows)". However, increasing this setting by several thousand has not been enough to cure this problem. How, then, can I make CDC stop capturing a seemingly infinite amount of agent job history? My only idea so far has been to write a custom script to wipe the records of CDC jobs that are currently in progress. Given that the jobs are currently in progress, this seems like a stupid and dangerous idea. I must assume that this is a solved problem. CDC is from [something like 2008](https://techcommunity.microsoft.com/blog/sqlserver/change-data-capture---what-is-it-and-how-do-i-use-it/383694) and SQL Server Agent [is ancient](https://softwareengineering.stackexchange.com/questions/456467/does-sql-server-agent-predate-windows-task-scheduler) . Failure to solve this problem will leave me without agent job history for a lot of my jobs.
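For reference, msdb exposes a documented history-purge procedure; a sketch with a hypothetical CDC job name (it trims old rows, though it may not touch the rows stuck as "in progress", which is the heart of the problem):

```
-- delete history rows older than a cutoff for one job
EXEC msdb.dbo.sp_purge_jobhistory
     @job_name    = N'cdc.MyDatabase_capture',   -- hypothetical job name
     @oldest_date = '2025-01-01';
```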
J. Mini (1237 rep)
Feb 7, 2025, 09:41 PM • Last activity: Mar 28, 2025, 02:06 PM
1 vote
2 answers
78 views
How to log queries to a selected table in MySql
I want to log queries to just one table. I have found these:

```
SET GLOBAL log_output = "FILE";   -- which is set by default
SET GLOBAL general_log_file = "/path/to/your/logfile.log";
SET GLOBAL general_log = 'ON';
```

However, this logs all queries. There are questions on SO, but they are for total logging: [How to show the last queries executed on MySQL?](https://stackoverflow.com/questions/650238/how-to-show-the-last-queries-executed-on-mysql) and [Log all queries in mysql](https://stackoverflow.com/questions/303994/log-all-queries-in-mysql)
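The server cannot restrict the general log to a single table, but one workaround (a sketch, with `my_table` as a placeholder name) is to send the general log to the mysql.general_log table and filter it when reading:

```
SET GLOBAL log_output  = 'TABLE';
SET GLOBAL general_log = 'ON';

-- later: pull only the statements that mention the table of interest
SELECT event_time, user_host, argument
FROM   mysql.general_log
WHERE  argument LIKE '%my_table%';
```

The usual general-log overhead still applies, since everything is recorded; only the reading side is filtered.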
Rohit Gupta (2126 rep)
Feb 21, 2025, 04:11 PM • Last activity: Feb 24, 2025, 08:27 PM
1 vote
2 answers
1638 views
Auto logging / tracking records changes at MS SQL
How can I set up automatic recording of all changes made to table records (updates), and get some logging info / summary? Thanks.
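One built-in option in SQL Server is Change Data Capture (Change Tracking, temporal tables, or audit triggers are alternatives); a sketch with hypothetical object names:

```
-- enable CDC for the database, then for each table to be tracked
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'MyTable',   -- hypothetical table name
     @role_name     = NULL;         -- NULL = no gating role
```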
Dmitri Michlin (11 rep)
Feb 10, 2021, 08:08 AM • Last activity: Feb 12, 2025, 03:12 AM
2 votes
1 answer
330 views
Log every time users access a certain table in Postgres
Recently we changed our production database `log_statement` from 'all' to 'mod', because the resulting log file was too large for our available storage. Unfortunately, we still need to log every `SELECT` made by users to a specific table N for audit purposes. Is there any solution for that? I have tried using pgaudit and pg_stat_statements without any success. We have set `pgaudit.log` to `read`, but it logs every SELECT query instead of just selects to table N.
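pgaudit's object audit logging is designed for exactly this kind of narrowing; a sketch, assuming the table is literally named n and that `pgaudit.log` is left unset (or 'none') so session auditing stops logging every read:

```
-- postgresql.conf (or ALTER SYSTEM): pgaudit.role = 'auditor'
CREATE ROLE auditor NOLOGIN;
GRANT SELECT ON n TO auditor;
-- only statements that exercise SELECT on table n are now written to the audit log
```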
Ilham Syamsuddin (35 rep)
Dec 10, 2024, 05:24 AM • Last activity: Dec 10, 2024, 10:22 AM
0 votes
1 answer
494 views
How do I make PostgreSQL log the file path and line number of the errors? (If it's even possible at all.)
My `postgresql.conf` contains this:

```
log_destination = 'csvlog'
logging_collector = on
log_directory = 'C:\\Users\\John Doe\\Documents\\PostgreSQL logs'
log_filename = 'PG_%Y-%m-%d_%H;%M;%S'
log_rotation_age = 1min
log_rotation_size = 0
log_truncate_on_rotation = on
log_min_error_statement = 'info'
```

This makes PG log its errors to CSV files in my custom dir. Then I have a constantly running script which looks for new files in that dir and, if it finds any, COPYs them into a custom database table (as described/recommended in the manual) and deletes each file that was successfully added. It doesn't touch the currently active log file. My table has only one custom column, a boolean called "unimportant", which I set depending on whether I think the error is important or not, so that I can hide the noise in various views and statistics.

Since PostgreSQL does not provide any fields such as "file path" or "line number", I'm at a total loss as to what caused the various logged errors. The only thing I have to go by is the application_name, which is uselessly limited: 64 characters maximum and no Unicode, which even prevents me from abusing this field to, for example, set it to the relevant file path. But even if it were possible, I wouldn't want to do that, since application_name is supposed to be the application name and nothing else. (And again, it doesn't matter, since the path would be too long and contain non-ANSI characters anyway.)

I've thought long and hard about this, but I just can't figure out a way to know where exactly the error occurred. Application (PHP) errors will include the relevant location, but these errors are not always logged (I'm unsure why). Here are some examples of errors which don't trigger PHP errors but are logged by PostgreSQL:

* there is no transaction in progress for the query COMMIT in application_name whatever123. Okay? Which script did it? And which line?
* there is already a transaction in progress for the query BEGIN in application_name whatever123. Okay? Which script did it? And which line?
* current transaction is aborted, commands ignored until end of transaction block and deadlock detected ones at least specify what query caused it, but again, I'd have to hunt it down manually by searching for parts of the query. Usually, I don't bother, or am unable to because the query was constructed in code in an unsearchable way.

Is there really no way to solve this? Any idea why the above errors aren't considered "real" errors by PHP (or rather, the pg_*/pglib functions)? And can I control that somehow?
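One workaround sometimes used (an assumption here, not a PostgreSQL feature): have the PHP layer prepend its own file and line as an SQL comment to every statement it issues, so the location travels with the statement into whatever log_min_error_statement writes out:

```
/* checkout.php:212 */ COMMIT;
```

The comment survives into the server log because the whole statement text is logged, at the cost of touching every call site (or the query wrapper) in the application.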
user15080516 (745 rep)
Feb 22, 2021, 09:40 PM • Last activity: Nov 13, 2024, 10:02 AM
3 votes
1 answer
10659 views
Can't create/write to file '/var/log/mariadb/mariadb.log'
I'm troubleshooting a MySQL no-start on a CentOS 7 VM. It is a bit unusual to see this since we have not made any configuration changes. The manual start command was taken from systemd logging. The error is the age-old:

```
# /usr/bin/mysqld_safe --basedir=/usr
181221 17:42:49 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
181221 17:42:49 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
/usr/bin/mysqld_safe_helper: Can't create/write to file '/var/log/mariadb/mariadb.log' (Errcode: 13)
```

According to [Red Hat Bugzilla – Bug 1061045](https://bugzilla.redhat.com/show_bug.cgi?id=1061045), mysql:mysql needs the appropriate permissions, which we seem to have:

```
[root@ftpit w]# ls -Al /var/log
total 52764
...
drwxrwx--- 2 mysql mysql 4096 Aug 16 11:05 mariadb
```

And:

```
[root@ftpit w]# ls -Al /var/log/mariadb/
total 99852
-rw-rw---- 1 mysql mysql 102138776 Dec 21 17:16 mariadb.log
```

And:

```
[root@ftpit w]# grep mysql /etc/passwd
mysql:x:27:27:MariaDB Server:/var/lib/mysql:/sbin/nologin
```

I also tried g-w as shown in the bug report. Where should I look next for this error? Or how do I fix it?
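Since the Unix ownership and modes look right, SELinux is a common cause of Errcode 13 on CentOS 7 (an assumption, not something established in the post); a sketch of how one might check for denials and restore the expected file context:

```
ls -lZ /var/log/mariadb/                     # show the SELinux context on the log dir/file
ausearch -m avc -ts recent | grep mariadb    # look for recent SELinux denials
restorecon -Rv /var/log/mariadb              # reset the context to the policy default
```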
user141074
Dec 21, 2018, 10:51 PM • Last activity: Oct 30, 2024, 11:28 AM
0 votes
1 answer
62 views
MySQL logging verbosity level not taking effect
Our MySQL configs are as below:

```
@@version             - 5.7.23
@@log_warnings        - 2
@@log_error_verbosity - 3
```

However, the *notes* getting logged are not verbose enough; they look like the lines below:

```
2024-09-26T08:56:39.519313Z 6624396 [Note]
2024-09-26T08:56:39.519423Z 6622841 [Note]
2024-09-26T08:56:39.519471Z 6624696 [Note]
2024-09-26T08:56:39.519532Z 6623360 [Note]
2024-09-26T08:56:39.552200Z 6624397 [Note]
```

The mysql user has the required write permissions on the log file, and the system has enough space available.
NKP (1 rep)
Oct 8, 2024, 03:04 AM • Last activity: Oct 8, 2024, 07:51 AM
6 votes
1 answer
524 views
What does SQL Server's Error Log actually log?
If you have ever played around with the `error_reported` Extended Event, then it becomes very obvious that the SQL Server Error Log only logs a tiny fraction of the errors that are thrown by code running on your server. Indeed, many things that it logs are explicit about not even being errors. So, broadly speaking, what determines if an error is worthy of inclusion in the Error Log? Is there some list of errors that qualify? Or is it determined based on some criteria?
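For readers who haven't tried it, a minimal sketch of the kind of `error_reported` session the question alludes to (session and file names are hypothetical); comparing its output with the Error Log makes the gap obvious:

```
CREATE EVENT SESSION [trace_all_errors] ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.sql_text)
    WHERE severity >= 16 )                   -- drop the predicate to see informational messages too
ADD TARGET package0.event_file (SET filename = N'trace_all_errors');

ALTER EVENT SESSION [trace_all_errors] ON SERVER STATE = START;
```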
J. Mini (1237 rep)
May 17, 2024, 11:15 PM • Last activity: May 20, 2024, 03:30 PM
0 votes
0 answers
195 views
MariaDB Docker image: How to get meaningful log levels (everything is logged as error)?
I'm using the MariaDB docker image with podman for a Nextcloud instance. Everything works, but when I look at the logs (with podman logs or journalctl), everything is logged at log level error. It seems to me that the container just writes everything to stderr, which journald on the host then picks up at the error level, even though the messages themselves are mostly harmless, such as:

```
2024-04-22 21:46:51 0 [Note] Starting MariaDB 10.11.7-MariaDB-1:10.11.7+maria~ubu2204-log source revision 87e13722a95af5d9378d990caf48cb6874439347 as process 1
```

How can I get the container to log at appropriate levels? E.g. by only sending actual errors to stderr but all informational messages or warnings to stdout?
Timo (11 rep)
Apr 26, 2024, 08:23 PM • Last activity: Apr 27, 2024, 12:14 PM
0 votes
0 answers
25 views
LogBACKUPFull, error description :-2147217900
When I try to configure a new proximity sensor access card through the Netxs control software against my SQL Server, I get this error:

> error found in 'frmEmployee.cmdSave_Click' routine at line number:133
> Error description :-2147217900 - The transaction log for database 'netxs' is full due to 'LOG_BACKUP'.
> The transaction log for database 'netXs' is full due to 'LOG_BACKUP'.
> error description :-2147217900

How can I resolve this error on my SQL Server?
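That message means the database is using the FULL recovery model while its transaction log is not being backed up; a sketch of the two usual remedies (the backup path and the log file's logical name are assumptions):

```
-- option 1: start taking regular transaction log backups
BACKUP LOG [netXs] TO DISK = N'D:\Backups\netXs_log.trn';

-- option 2: if point-in-time recovery is not needed, switch to SIMPLE recovery
ALTER DATABASE [netXs] SET RECOVERY SIMPLE;
DBCC SHRINKFILE (N'netXs_log', 512);   -- logical log file name is an assumption
```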
Mohamed Bilal
Apr 2, 2024, 10:55 AM • Last activity: Apr 2, 2024, 11:07 AM
8 votes
1 answer
887 views
Disable full-text logging (SQL Server)
Any way to disable FT logs completely? I spent hours googling - no luck. I get tons of "informational" messages literally *every second*:

```
2020-01-01 10:43:16.48 spid33s Informational: Full-text Auto population initialized for table or indexed view xxx
2020-01-01 10:43:23.48 spid34s Informational: Full-text Auto population completed for table or indexed view zzz
2020-01-01 10:43:23.48 spid36s Informational: Full-text Auto population completed for table or indexed view xxx
2020-01-01 10:43:24.64 spid12s Informational: Full-text Auto population initialized for table or indexed view xxx
2020-01-01 10:43:25.64 spid12s Informational: Full-text Auto population completed for table or indexed view xxx
2020-01-01 10:43:26.58 spid36s Informational: Full-text Auto population initialized for table or indexed view xxx
2020-01-01 10:43:26.98 spid17s Informational: Full-text Auto population initialized for table or indexed view xxx
```

P.S. My cloud hosting provider charges me for "I/O ops per second", so this is something I want disabled. Also, these logs grow really fast, several gigabytes every week, so I had to write a log maintenance script (rollover + archive, etc.).
jitbit (765 rep)
Jan 2, 2020, 08:39 PM • Last activity: Mar 16, 2024, 02:49 PM
0 votes
1 answer
100 views
Why is mycli history file appearing in home directory?
I started using [mycli](https://github.com/dbcli/mycli) and want the `$HOME/.mycli-history` file to instead be generated and updated at `$HOME/.cache/mycli/.mycli-history`.

1. Moved `$HOME/.myclirc` -> `~/.config/mycli/.myclirc`
2. Created `alias mycli='sudo mycli --myclirc ~/.config/mycli/.myclirc'`
3. Updated `log_file = ~/.cache/mycli/.mycli.log` in `.myclirc`
4. Moved `$HOME/.mycli.log` to `~/.cache/mycli/.mycli.log`

These steps were successful for the rc and log files. I did not get much from searching online or their documentation, but saw [this answer](https://github.com/dbcli/mycli/issues/647#issue-357617425) and added `export MYCLI_HISTFILE="~/.cache/mycli/.mycli-history"` to my `.zshrc`. After restarting my shell, my history file is still being populated in `$HOME`.

Is there any command line flag, environment variable, or configuration that can change this? If not, would my only option be to symlink it?

Using mycli v1.27.0 in zsh on MacOS Sonoma 14.2.1
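One detail worth checking (an assumption, not confirmed in the question): a tilde inside double quotes is not expanded by the shell, so `MYCLI_HISTFILE` ends up holding a literal `~/...`, which the program may not expand and could therefore ignore in favour of the default path. Using `$HOME` avoids that:

```
export MYCLI_HISTFILE="$HOME/.cache/mycli/.mycli-history"
```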
Vivek Jha (155 rep)
Mar 1, 2024, 08:18 PM • Last activity: Mar 3, 2024, 04:25 PM
1 vote
2 answers
1607 views
How can I save the job result to a file?
I created a job in SQL Server. This job runs the following statement at regular intervals:
```
Select NationalIDNumber from HumanResources.Employee where BusinessEntityID = '1'
```

I want to save the result of this query to a txt file. I clicked **Advanced** on the Step tab for the job I created and wrote the path there, but when the job ran, no records were added to the txt file. What else should I do to get the job result written to the txt file?

Advanced tab: Output File --> `D:\Data2\JobOutput.txt`
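Two things worth checking (assumptions, not confirmed by the question): the SQL Server Agent service account needs write permission on D:\Data2, and the output file can also be set through the documented msdb procedure; a sketch with a hypothetical job name:

```
EXEC msdb.dbo.sp_update_jobstep
     @job_name         = N'MyJob',                 -- hypothetical job name
     @step_id          = 1,
     @output_file_name = N'D:\Data2\JobOutput.txt',
     @flags            = 2;                        -- 2 = append output to the existing file
```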
Merve (57 rep)
Jan 3, 2024, 06:47 AM • Last activity: Jan 16, 2024, 02:48 PM
1 vote
2 answers
429 views
What is the fastest way to log the Input and Output parameters when executing Stored Procedure
Normally I log this information at the end of each SP. The syntax looks like this:
```
DECLARE @SPContent nvarchar(4000) = (
        SELECT  @ServiceID [ServiceID],
                @AccountID [FID],
                @AccountName [Username],
                @TopupAmount [TopupAmount],
                @TopupDesc [Description],
                @TransID [ReferenceID],
                @SourceID [SourceID],
                @TopupIP [ClientIP],
                @AccountBalance [FunBalance],
                @ResponseStatus_Deduct [ResponseStatus],
                @Now [FromTime]
        FOR XML RAW)

INSERT INTO [dbo].[InputLogs]
        ([SPName]
        ,[SPContent]
        ,[CreatedTime]
        ,[Username])
SELECT  'SP_Balance_Deduct_Game',
        @SPContent,
        GETDATE(),
        SUSER_NAME()
```
but I'm wondering if there is a more optimal script to do this automatically, where I only need to pass in the name of the SP as the only parameter.
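A sketch of one way to avoid hard-coding the procedure name (an assumption, not the asker's existing code): a shared logging procedure that every SP calls, passing `OBJECT_NAME(@@PROCID)` for its own name and the same `FOR XML RAW` payload as above:

```
CREATE PROCEDURE dbo.usp_LogSPInput   -- hypothetical helper procedure
    @SPName    sysname,
    @SPContent nvarchar(4000)
AS
BEGIN
    INSERT INTO dbo.InputLogs (SPName, SPContent, CreatedTime, Username)
    VALUES (@SPName, @SPContent, GETDATE(), SUSER_NAME());
END
GO

-- inside any stored procedure, after building @SPContent:
--   DECLARE @Name sysname = OBJECT_NAME(@@PROCID);   -- resolves to the caller's own name
--   EXEC dbo.usp_LogSPInput @Name, @SPContent;
```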
Tạ Đức (11 rep)
Nov 4, 2022, 09:51 AM • Last activity: Jan 16, 2024, 12:06 PM