
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
1060 views
Unable to show binary logs: how to remove logs from MySQL?
Server version: 5.1.50-log MySQL Community Server (GPL). mysql> show binary logs; ## PROBLEM: the data directory is full and I want to free up space, but PURGE is not freeing any. Any idea how to clear the relay logs and bin files?
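A minimal sketch of the usual binary-log cleanup commands, assuming no replica still needs the files (the file name and retention window below are illustrative, not taken from the question):
SHOW BINARY LOGS;                                   -- list binlogs and their sizes
PURGE BINARY LOGS TO 'mysql-bin.000123';            -- drop everything before this file
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;    -- or drop by age
RESET MASTER;   -- removes ALL binary logs; only safe when no replica reads from this server
On a replica, relay logs are normally removed automatically once their events have been applied (relay_log_purge = 1), so a growing relay log usually points at replication lag rather than missing cleanup.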
Gaurav Jain (3 rep)
Jun 24, 2015, 05:34 AM • Last activity: Aug 3, 2025, 09:03 AM
14 votes
3 answers
8142 views
How do I hide sensitive information like plaintext passwords from the logs?
I do not have access to a Postgres installation, so I cannot check. I am a security guy, and I'm seeing plaintext passwords in the logs: create user user1 with password 'PLAINTEXT PASSWORD' How can the DBAs change or create their passwords without the password appearing in the clear in the logs? I've seen this, which states you can use an md5 hash of the password, but then the hash is also in the clear. Is there a better way?
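One commonly cited approach, shown here only as a sketch: have the client prompt for and hash the password, so the statement text the server logs never contains the cleartext (the hash value below is a placeholder, not a real hash):
-- psql hashes the password client-side before sending the ALTER ROLE
\password user1
-- statement form: supply a precomputed hash instead of the cleartext
ALTER ROLE user1 PASSWORD 'md5<precomputed md5 of password||username>';
The hashed form still appears in the log, as the question notes, so the \password route (or suppressing statement logging for that session) is the usual recommendation.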
schroeder (242 rep)
Mar 6, 2015, 08:55 PM • Last activity: Jul 17, 2025, 10:10 AM
1 vote
1 answer
192 views
SQL Server log for revoke statements
I am trying to figure out whether **SQL Server** has a feature to log when a user runs statements such as REVOKE, DELETE FROM, etc. It would be great to see which user ran the command, the date and time, and which command was run. Is there a way to accomplish this in **SQL Server 2008**? I have seen this, but it is not what I am looking for.
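A sketch of one option, SQL Server Audit (names below are illustrative; edition and permission requirements apply, and the database audit specification must be created inside the target database):
CREATE SERVER AUDIT WhoRanWhat TO FILE (FILEPATH = 'C:\AuditLogs\');
ALTER SERVER AUDIT WhoRanWhat WITH (STATE = ON);
-- run in the database to be audited
CREATE DATABASE AUDIT SPECIFICATION PermissionAndDeleteAudit
    FOR SERVER AUDIT WhoRanWhat
    ADD (DATABASE_PERMISSION_CHANGE_GROUP),   -- GRANT / DENY / REVOKE in this database
    ADD (DELETE ON SCHEMA::dbo BY public)     -- DELETE statements against dbo objects
    WITH (STATE = ON);
-- read the captured events (who, when, which statement)
SELECT event_time, server_principal_name, statement
FROM sys.fn_get_audit_file('C:\AuditLogs\*', DEFAULT, DEFAULT);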
Junior Mayhe (337 rep)
Oct 30, 2014, 07:03 PM • Last activity: Jun 26, 2025, 04:10 PM
0 votes
1 answer
532 views
postgresql reindex frequency and monitoring
I have a couple of tables into which I insert a lot of data (and almost never delete). Periodically I see that an INSERT acquires an ExclusiveLock on an index of the table, so I guess that sometimes an INSERT triggers a reindex of the table. I wonder how often that happens, and how I can see in the logs how long it took and how often it is triggered. Maybe some blogs/docs on it? Thanks in advance.
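For context, an INSERT updates the indexes but does not trigger a REINDEX; what can be surfaced in the logs are lock waits and index activity. A sketch of settings and a query that help here (values illustrative):
# postgresql.conf
log_lock_waits = on        # log a message whenever a lock wait exceeds deadlock_timeout
deadlock_timeout = '1s'
-- per-index activity and size, to watch growth over time
SELECT schemaname, relname, indexrelname, idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY pg_relation_size(indexrelid) DESC;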
ArtemP (69 rep)
Jul 6, 2016, 09:07 AM • Last activity: Jun 7, 2025, 11:04 AM
0 votes
1 answer
322 views
Convert RDS csvlog into a pgreplay-go-compatible format for replaying
Before upgrading the AWS RDS PostgreSQL version in production, I want to replay one hour of recorded logs against a test instance. Unfortunately [pgreplay](https://github.com/laurenz/pgreplay) is a single-threaded application, limited by the speed of a single CPU, and its replays fail with some weird errors after a few minutes of running. On the other hand, [pgreplay-go](https://github.com/gocardless/pgreplay-go) requires a different input log format: log_line_prefix='%m|%u|%d|%c| RDS csvlog sample:
2021-09-09 17:00:00.006 UTC,"user","database",27752,"172.30.1.2:34106",613a286d.6c68,13992,
"SELECT",2021-09-09 15:29:49 UTC,229/3866470,0,LOG,00000,
"execute : SELECT ""jobs"".* FROM ""jobs"" WHERE ""jobs"".""deleted_at"" IS NULL
AND ""jobs"".""user_id"" = $1","parameters: $1 = '124765'",,,,,,,,"bin/rails"
pgreplay-go log sample:
2010-12-31 10:59:57.870 UTC|postgres|postgres|4d1db7a8.4227|
LOG:  execute einf"ug: INSERT INTO runtest (id, c, t, b) VALUES ($1, $2, $3, $4)
2010-12-31 10:59:57.870 UTC|postgres|postgres|4d1db7a8.4227|
DETAIL:  parameters: $1 = '6', $2 = 'mit    Tabulator', $3 = '2050-03-31 22:00:00+00', $4 = NULL
Should I just convert each RDS csvlog entry into two pgreplay-go lines and be done with it? The log parsing logic in pgreplay-go is not trivial, as you can see at https://github.com/gocardless/pgreplay-go/blob/master/pkg/pgreplay/parse.go#L220 , so I'm not sure that simply splitting one line into two will be sufficient. Should I perhaps add something else?
mva (111 rep)
Sep 17, 2021, 05:56 AM • Last activity: Apr 28, 2025, 04:04 PM
0 votes
1 answer
662 views
auto_explain does not log
It looks like I'm doing something wrong. I turned auto_explain on by adding the following values to postgresql.conf:
shared_preload_libraries = 'pg_stat_statements,pg_wait_sampling,pg_stat_kcache,auto_explain'
auto_explain.log_min_duration = '3s'
In my Grafana monitoring I can see long-running queries that exceeded the 3s limit and finished successfully; the values there show how long each query lasted. But when I look into my log 'postgresql-Mon.log' (and the logs for the following days) there is nothing about those queries, only a few lines about modules during the database startup process. Am I missing something?
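A minimal configuration sketch for comparison (parameter names as documented; the threshold is illustrative). A change to shared_preload_libraries only takes effect after a full restart, and the setting can be verified from a session afterwards:
# postgresql.conf
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '3s'   # log the plan of any statement running longer than 3s
auto_explain.log_analyze = on          # include actual row counts and timings in the logged plan
-- after the restart, confirm the parameter is live in this backend
SHOW auto_explain.log_min_duration;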
Mikhail Aksenov (430 rep)
Dec 10, 2019, 11:16 AM • Last activity: Apr 3, 2025, 03:11 PM
79 votes
6 answers
253683 views
Disable MySQL binary logging with log_bin variable
The default MySQL config file /etc/mysql/my.cnf installed by the Debian package via APT often sets the log_bin variable, so binlogs are enabled: log_bin = /var/log/mysql/mysql-bin.log When I want to disable binary logging on such an installation, commenting out the line in my.cnf works of course, but I wonder if there is a way to disable binary logging by explicitly setting log_bin to OFF, in the Debian style, I mean in an included file like /etc/mysql/conf.d/myCustomFile.cnf, so the default my.cnf is not changed and can easily be updated by apt if necessary. I tried "log_bin = 0", "log_bin = OFF" and "log_bin =" but none of them works...
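A sketch of the commonly suggested workaround (file name illustrative; whether it is honored depends on the MySQL/MariaDB version, so verify after restart): log_bin cannot be turned off by assigning it a value, but the skip- option prefix in a later included file can override the earlier setting:
# /etc/mysql/conf.d/disable-binlog.cnf
[mysqld]
skip-log-bin
After a restart, SHOW VARIABLES LIKE 'log_bin'; should report OFF if the override took effect.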
Nicolas Payart (2508 rep)
Jul 30, 2014, 04:01 PM • Last activity: Mar 27, 2025, 08:21 PM
6 votes
2 answers
28050 views
how to list all features installed in a sql server instance?
There are other installation-related questions with similar details here and here. I could not find anything related to listing the installed features in SQL Server 2016, though. I can do it manually by looking at the logs, as per the info below, but I would like to automate it. After installing SQL Server 2016 I get a log file located somewhere in these folders: C:\Program Files\Microsoft SQL Server\130\Setup Bootstrap\Log\20200109_205540 In that folder there is a file whose name starts with Summary_ + my_server_name; in that file, amongst other things, I can find a list of the SQL Server features installed. After installation, later on when the systems are already in use, logins and permissions applied, firewall fixed, etc., is there a simple way to get hold of the list of features installed on a SQL Server instance?
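A sketch of two ways to automate this: SQL Server setup can generate a features discovery report, and SERVERPROPERTY exposes a subset of installed components from T-SQL (property names below are documented ones, but coverage is only partial):
-- command line, from the setup media or the installed Setup Bootstrap folder:
--   setup.exe /q /Action=RunDiscovery
--   (writes a discovery report under ...\Setup Bootstrap\Log\)
SELECT SERVERPROPERTY('Edition')                      AS Edition,
       SERVERPROPERTY('IsFullTextInstalled')          AS FullTextInstalled,
       SERVERPROPERTY('IsPolyBaseInstalled')          AS PolyBaseInstalled,
       SERVERPROPERTY('IsAdvancedAnalyticsInstalled') AS RServicesInstalled;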
Marcello Miorelli (17274 rep)
Jan 10, 2020, 06:50 PM • Last activity: Feb 28, 2025, 05:14 PM
0 votes
2 answers
116 views
How to avoid writing to the error log when changing instance configuration?
If you look at https://dba.stackexchange.com/q/150863 there is a way to save SQL Server settings before enabling them, so you can disable them again if need be. Here is the code (partially copied):
IF OBJECT_ID('tempdb.dbo.#Settings') IS NOT NULL
    DROP TABLE #Settings;

CREATE TABLE #Settings
(
    Setting VARCHAR(100),
    Val INT
)
INSERT #Settings (Setting, Val)
SELECT 'show advanced options', cast(value_in_use as int) from sys.configurations where name = 'show advanced options'
UNION
SELECT 'xp_cmdshell', cast(value_in_use as int) from sys.configurations where name = 'xp_cmdshell'
UNION
SELECT 'Ad Hoc Distributed Queries', cast(value_in_use as int) from sys.configurations where name = 'Ad Hoc Distributed Queries'

SELECT * FROM #Settings;
That works fine if you run one procedure at a time; however, it writes to the SQL Server error log, as you can see below. Is there a way to avoid writing configuration changes to the error log?
Marcello Miorelli (17274 rep)
Jan 22, 2025, 10:28 AM • Last activity: Jan 22, 2025, 04:11 PM
0 votes
2 answers
1740 views
What to do when MySQL is not generating any logs at all on Debian?
The problem I'm having is that MySQL (5.5.46-0+deb7u1) is not generating log files. This is on Debian Wheezy. The my.cnf file has been updated according to the required config for logging ALL queries (as per http://www.microhowto.info/howto/log_all_queries_to_a_mysql_server.html). It's not writing to the specified location, nor to the default location. MySQL errors are also not being logged, as far as I can tell. EDIT: my.cnf is here: http://pastie.org/private/q9vwgihslpenc94mxmyyiw EDIT: Here's the output of "show variables like '%warn%';":
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_warnings  | 1     |
| sql_warnings  | OFF   |
| warning_count | 0     |
+---------------+-------+
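For comparison, a minimal general-query-log setup sketch for MySQL 5.5 (paths and file name illustrative; the mysql OS user needs write access to the directory, and on Debian AppArmor can silently block non-default paths):
# e.g. /etc/mysql/conf.d/logging.cnf
[mysqld]
general_log      = 1
general_log_file = /var/log/mysql/mysql-general.log
log_error        = /var/log/mysql/error.log
Both general_log and general_log_file are dynamic, so SET GLOBAL general_log = 'ON'; takes effect without a restart and is a quick way to test whether logging works at all.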
geoidesic (111 rep)
Nov 16, 2015, 04:54 PM • Last activity: Dec 25, 2024, 11:01 PM
0 votes
1 answer
143 views
MariaDB, where's the GLOBAL general_log file?
SHOW VARIABLES LIKE 'general_log%';
+------------------+------------------+
| Variable_name    | Value            |
+------------------+------------------+
| general_log      | ON               |
| general_log_file | ff3b5fbeb27b.log |
+------------------+------------------+
Where can I find this ff3b5fbeb27b.log file? The https://dev.mysql.com/doc/refman/8.4/en/query-log.html doc doesn't mention which path it should reside in, and my searches tell me it should be under /var/log/mysql/, yet that is not the case for me:
]> system ls -l /var/log/mysql/
total 0
FTA, my mysql is from the latest docker image: mariadb:10.5:
$ mysql -V
mysql  Ver 15.1 Distrib 10.5.25-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2

$ uname -rm
6.8.0-1010-azure x86_64
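A sketch of how to track the file down: when general_log_file is a bare file name (no path), the server creates it in its data directory, so checking datadir usually answers this (the container name below is illustrative):
SELECT @@datadir, @@general_log_file;
-- e.g. from the docker host:
--   docker exec -it mariadb ls -l /var/lib/mysql/ff3b5fbeb27b.log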
xpt (143 rep)
Aug 12, 2024, 05:21 PM • Last activity: Aug 12, 2024, 09:20 PM
0 votes
2 answers
1334 views
"user does not exist " when calling DBMS_LOGMNR.START_LOGMNR
I have a CDB user called "MYADMIN" and I'm trying to make it connect to LogMiner.
-- enable calling admin username on CDB
ALTER SESSION set "_ORACLE_SCRIPT"=true
/
-- create unique table space for admin
CREATE TABLESPACE myadmints DATAFILE '/path/to/admints.dbf' SIZE 20M AUTOEXTEND ON
/
-- create admin user on CDB
CREATE USER myadmin IDENTIFIED BY P@ssw0rd DEFAULT TABLESPACE myadmints QUOTA UNLIMITED ON myadmints ACCOUNT UNLOCK
/
-- allow access to all PDBs to the admin user
ALTER USER myadmin SET CONTAINER_DATA=ALL CONTAINER=CURRENT
/
-- grant needed permissions
GRANT DBA to myadmin
GRANT CREATE SESSION TO myadmin
GRANT CREATE TABLE TO myadmin
GRANT EXECUTE_CATALOG_ROLE TO myadmin
GRANT EXECUTE ON DBMS_LOGMNR TO myadmin
GRANT SELECT ON V_$DATABASE TO myadmin
GRANT SELECT ON V_$LOGMNR_CONTENTS TO myadmin
GRANT SELECT ON V_$ARCHIVED_LOG TO myadmin
GRANT SELECT ON V_$LOG TO myadmin
GRANT SELECT ON V_$LOGFILE TO myadmin
GRANT RESOURCE, CONNECT TO myadmin
I've selected one row from 'v$archived_log' and am trying to load the file.
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(LogFileName=>'/path/to/archive/ARC000011_1061542581',Options=>DBMS_LOGMNR.new);
  DBMS_LOGMNR.START_LOGMNR(StartScn=>3083464, EndScn=>3388245, Options=>DBMS_LOGMNR.DICT_FROM_ONLINECATALOG+DBMS_LOGMNR.NO_ROW_ID_IN_STMT);
END;
I can run it as the sys as sysdba user, but when I run it from my "myadmin" user I get:
Error report -
ORA-01435: user does not exist
ORA-06512: at "SYS.DBMS_LOGMNR", line 72
ORA-06512: at line 3
01435. 00000 - "user does not exist"
*Cause:
*Action:
The error is about the START_LOGMNR line; when I remove it there is no error. Which privilege am I missing?
SHR (886 rep)
Jan 17, 2021, 10:39 AM • Last activity: Jun 18, 2024, 02:38 PM
0 votes
1 answer
627 views
How to delete "SQL Server Agent" logs from "Log File Viewer"?
On "SQL Server 2019" on Windows 2019: 1. I have opened Microsoft SQL Server Management Studio. 2. I logged into database server. 3. I clicked on "SQL Server Agent" and clicked on "Error logs" and double click on "Current". 4. Log File Viewer program is displayed. On right site I see plenty of errors...
On "SQL Server 2019" on Windows 2019: 1. I have opened Microsoft SQL Server Management Studio. 2. I logged into database server. 3. I clicked on "SQL Server Agent" and clicked on "Error logs" and double click on "Current". 4. Log File Viewer program is displayed. On right site I see plenty of errors, but that are old errors not important anymore. How to clear/delete those errors to remove clutter?
folow (523 rep)
Feb 19, 2024, 09:32 AM • Last activity: Feb 20, 2024, 08:45 PM
0 votes
0 answers
267 views
Analyze MariaDB error logs to find out what happened on the server
Can you help me understand what happened on a MariaDB server? I wasn't able to connect to this server because of 'Too many connections'. This is one of our replication slaves, so none of the main server's changes were being applied on this replica. I had to kill (-9) the MariaDB server. I saw this in the error logs:
BUFFER POOL AND MEMORY
----------------------
InnoDB: ###### Diagnostic info printed to the standard error stream
2023-06-28 13:15:46 139789791348480 [Warning] InnoDB: A long semaphore wait:
--Thread 139789684688640 has waited at trx0rseg.ic line 50 for 272.00 seconds the semaphore:
X-lock on RW-latch at 0x7f2887cd1098 created in file buf0buf.cc line 1471
a writer (thread id 0) has reserved it in mode  SX
number of readers 0, waiters flag 1, lock_word: 10000000
Last time read locked in file not yet reserved line 0
Last time write locked in file buf0flu.cc line 1236
2023-06-28 13:15:46 139789791348480 [Warning] InnoDB: A long semaphore wait:
--Thread 139789781440256 has waited at trx0undo.ic line 164 for 271.00 seconds the semaphore:
X-lock on RW-latch at 0x7f2887d195a8 created in file buf0buf.cc line 1471
a writer (thread id 0) has reserved it in mode  SX
number of readers 0, waiters flag 1, lock_word: 10000000
Last time read locked in file trx0undo.ic line 180
Last time write locked in file buf0flu.cc line 1236
2023-06-28 13:15:46 139789791348480 [Warning] InnoDB: A long semaphore wait:
--Thread 139789808133888 has waited at srv0srv.cc line 2096 for 250.00 seconds the semaphore:
X-lock on RW-latch at 0x7f28d1cbf440 created in file dict0dict.cc line 1107
a writer (thread id 139789701474048) has reserved it in mode  exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file row0purge.cc line 853
Last time write locked in file dict0stats.cc line 2457
2023-06-28 13:15:46 139789791348480 [Warning] InnoDB: A long semaphore wait:
--Thread 139789701474048 has waited at trx0trx.cc line 1157 for 251.00 seconds the semaphore:
Mutex at 0x7f28d1cc7fd8, Mutex REDO_RSEG created trx0rseg.cc:168, lock var 2


2023-06-28 13:15:46 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789782349568 has waited at trx0rseg.ic line 50 for 176.00 seconds the semaphore:
X-lock on RW-latch at 0x7f2887cd13c8 created in file buf0buf.cc line 1471
a writer (thread id 0) has reserved it in mode  SX
number of readers 0, waiters flag 1, lock_word: 10000000
Last time read locked in file not yet reserved line 0
Last time write locked in file buf0flu.cc line 1236
2023-06-28 13:15:46 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789709866752 has waited at dict0dict.cc line 7209 for 12.00 seconds the semaphore:
Mutex at 0x7f28d1cbf4c0, Mutex DICT_SYS created dict0dict.cc:1096, lock var 2

2023-06-28 13:15:46 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789684688640 has waited at trx0rseg.ic line 50 for 272.00 seconds the semaphore:
X-lock on RW-latch at 0x7f2887cd1098 created in file buf0buf.cc line 1471
a writer (thread id 0) has reserved it in mode  SX
number of readers 0, waiters flag 1, lock_word: 10000000
Last time read locked in file not yet reserved line 0
Last time write locked in file buf0flu.cc line 1236
2023-06-28 13:15:46 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789781440256 has waited at trx0undo.ic line 164 for 271.00 seconds the semaphore:
X-lock on RW-latch at 0x7f2887d195a8 created in file buf0buf.cc line 1471
a writer (thread id 0) has reserved it in mode  SX
number of readers 0, waiters flag 1, lock_word: 10000000
Last time read locked in file trx0undo.ic line 180
Last time write locked in file buf0flu.cc line 1236
2023-06-28 13:15:46 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789781743360 has waited at dict0dict.cc line 1160 for 169.00 seconds the semaphore:
Mutex at 0x7f28d1cbf4c0, Mutex DICT_SYS created dict0dict.cc:1096, lock var 2

2023-06-28 13:15:46 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789808133888 has waited at srv0srv.cc line 2096 for 250.00 seconds the semaphore:
X-lock on RW-latch at 0x7f28d1cbf440 created in file dict0dict.cc line 1107
a writer (thread id 139789701474048) has reserved it in mode  exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file row0purge.cc line 853
Last time write locked in file dict0stats.cc line 2457
2023-06-28 13:15:46 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789781137152 has waited at dict0load.cc line 2747 for 219.00 seconds the semaphore:
Mutex at 0x7f28d1cbf4c0, Mutex DICT_SYS created dict0dict.cc:1096, lock var 2

2023-06-28 13:15:46 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789701474048 has waited at trx0trx.cc line 1157 for 251.00 seconds the semaphore:
Mutex at 0x7f28d1cc7fd8, Mutex REDO_RSEG created trx0rseg.cc:168, lock var 2

InnoDB: ###### Starts InnoDB Monitor for 30 secs to print diagnostic info:
InnoDB: Pending reads 0, writes 0
InnoDB: ###### Diagnostic info printed to the standard error stream
2023-06-28 13:16:17 139789791348480 [Warning] InnoDB: A long semaphore wait:
--Thread 139789684688640 has waited at trx0rseg.ic line 50 for 303.00 seconds the semaphore:
X-lock on RW-latch at 0x7f2887cd1098 created in file buf0buf.cc line 1471
a writer (thread id 0) has reserved it in mode  SX
number of readers 0, waiters flag 1, lock_word: 10000000
Last time read locked in file not yet reserved line 0
Last time write locked in file buf0flu.cc line 1236
2023-06-28 13:16:17 139789791348480 [Warning] InnoDB: A long semaphore wait:
--Thread 139789781440256 has waited at trx0undo.ic line 164 for 302.00 seconds the semaphore:
X-lock on RW-latch at 0x7f2887d195a8 created in file buf0buf.cc line 1471
a writer (thread id 0) has reserved it in mode  SX
number of readers 0, waiters flag 1, lock_word: 10000000
Last time read locked in file trx0undo.ic line 180
Last time write locked in file buf0flu.cc line 1236
2023-06-28 13:16:17 139789791348480 [Warning] InnoDB: A long semaphore wait:
--Thread 139789808133888 has waited at srv0srv.cc line 2096 for 281.00 seconds the semaphore:
X-lock on RW-latch at 0x7f28d1cbf440 created in file dict0dict.cc line 1107
a writer (thread id 139789701474048) has reserved it in mode  exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file row0purge.cc line 853
Last time write locked in file dict0stats.cc line 2457
2023-06-28 13:16:17 139789791348480 [Warning] InnoDB: A long semaphore wait:
--Thread 139789781137152 has waited at dict0load.cc line 2747 for 250.00 seconds the semaphore:
Mutex at 0x7f28d1cbf4c0, Mutex DICT_SYS created dict0dict.cc:1096, lock var 2

2023-06-28 13:16:17 139789791348480 [Warning] InnoDB: A long semaphore wait:
--Thread 139789701474048 has waited at trx0trx.cc line 1157 for 282.00 seconds the semaphore:
Mutex at 0x7f28d1cc7fd8, Mutex REDO_RSEG created trx0rseg.cc:168, lock var 2

2023-06-28 13:16:17 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789782349568 has waited at trx0rseg.ic line 50 for 207.00 seconds the semaphore:
X-lock on RW-latch at 0x7f2887cd13c8 created in file buf0buf.cc line 1471
a writer (thread id 0) has reserved it in mode  SX
number of readers 0, waiters flag 1, lock_word: 10000000
Last time read locked in file not yet reserved line 0
Last time write locked in file buf0flu.cc line 1236
2023-06-28 13:16:17 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789709866752 has waited at dict0dict.cc line 7209 for 43.00 seconds the semaphore:
Mutex at 0x7f28d1cbf4c0, Mutex DICT_SYS created dict0dict.cc:1096, lock var 2

2023-06-28 13:16:17 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789684688640 has waited at trx0rseg.ic line 50 for 303.00 seconds the semaphore:
X-lock on RW-latch at 0x7f2887cd1098 created in file buf0buf.cc line 1471
a writer (thread id 0) has reserved it in mode  SX
number of readers 0, waiters flag 1, lock_word: 10000000
Last time read locked in file not yet reserved line 0
Last time write locked in file buf0flu.cc line 1236
2023-06-28 13:16:17 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789781440256 has waited at trx0undo.ic line 164 for 302.00 seconds the semaphore:
X-lock on RW-latch at 0x7f2887d195a8 created in file buf0buf.cc line 1471
a writer (thread id 0) has reserved it in mode  SX
number of readers 0, waiters flag 1, lock_word: 10000000
Last time read locked in file trx0undo.ic line 180
Last time write locked in file buf0flu.cc line 1236
2023-06-28 13:16:17 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789781743360 has waited at dict0dict.cc line 1160 for 200.00 seconds the semaphore:
Mutex at 0x7f28d1cbf4c0, Mutex DICT_SYS created dict0dict.cc:1096, lock var 2

2023-06-28 13:16:17 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789808133888 has waited at srv0srv.cc line 2096 for 281.00 seconds the semaphore:
X-lock on RW-latch at 0x7f28d1cbf440 created in file dict0dict.cc line 1107
a writer (thread id 139789701474048) has reserved it in mode  exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file row0purge.cc line 853
Last time write locked in file dict0stats.cc line 2457
2023-06-28 13:16:17 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789781137152 has waited at dict0load.cc line 2747 for 250.00 seconds the semaphore:
Mutex at 0x7f28d1cbf4c0, Mutex DICT_SYS created dict0dict.cc:1096, lock var 2

2023-06-28 13:16:17 139789791348480 [Note] InnoDB: A semaphore wait:
--Thread 139789701474048 has waited at trx0trx.cc line 1157 for 282.00 seconds the semaphore:
Mutex at 0x7f28d1cc7fd8, Mutex REDO_RSEG created trx0rseg.cc:168, lock var 2
Can you tell me what happened?
Ela (500 rep)
Jun 29, 2023, 09:54 AM
0 votes
1 answer
322 views
How is the current sequence number in the control file used during recovery?
During recovery, how is the current sequence number used to determine the redo log files used to re-construct the database as well as the order in which to apply the changes recorded in them?
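For orientation, the sequence numbers that recovery works from can be inspected with the usual views (a sketch, assuming SELECT privileges on the V$ views):
-- current online redo log groups and their sequence numbers
SELECT group#, sequence#, status FROM v$log;
-- history of log switches: which SCN range each sequence covers
SELECT sequence#, first_change#, next_change#
FROM v$log_history
ORDER BY sequence#;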
Mehdi Charife (131 rep)
May 9, 2023, 09:00 PM • Last activity: May 13, 2023, 04:53 AM
0 votes
2 answers
1114 views
What's inside a redo record in Oracle's Redo log?
While reading the [Redo log wiki page][1], I was confronted with the following statement: > For example, if a user `UPDATE`s a salary-value in a table containing employee-related data, the DBMS generates a redo record containing change-vectors that describe changes to the data segment block for the...
While reading the Redo log wiki page, I was confronted with the following statement: > For example, if a user UPDATEs a salary-value in a table containing employee-related data, the DBMS generates a redo record containing change-vectors that describe changes to the data segment block for the table. And if the user then COMMITs the update, Oracle generates another redo record and assigns the change a "system change number" (SCN). In what way do change vectors describe changes to the data? Do they contain a copy of the old and the updated data, or just the SQL statements? I also don't understand the difference between the first generated redo record and the one generated after the update is committed.
Mehdi Charife (131 rep)
May 6, 2023, 02:10 PM • Last activity: May 8, 2023, 08:09 AM
4 votes
1 answer
5875 views
MySQL 8.0.30+: redo_log_capacity vs log_file_size?
Cheers. We're trying to track down the cause of a significant increase in IO load after switching from MySQL 5.7 to 8.0.30, and one issue we've noticed is related to this part of the documentation : > The innodb_redo_log_capacity variable supersedes the innodb_log_files_in_group and innodb_log_file_size variables, which are deprecated. When the innodb_redo_log_capacity setting is defined, the innodb_log_files_in_group and innodb_log_file_size settings are ignored; otherwise, these settings are used to compute the innodb_redo_log_capacity setting (innodb_log_files_in_group * innodb_log_file_size = innodb_redo_log_capacity). If none of those variables are set, redo log capacity is set to the innodb_redo_log_capacity default value, which is 104857600 bytes (100MB). The maximum redo log capacity is 128GB. We used to set innodb_log_file_size to 4G or so, and leave innodb_log_files_in_group undefined in the ini file, relying on the built-in default of 2. With MySQL 5.7, this produced two 4GiB-sized log files for a total capacity of 8 GiB. Under MySQL 8, the calculation from the legacy variables, as described by the documentation, just doesn't work: even when I explicitly set the files-in-group variable, innodb_redo_log_capacity is always set to 100 MiB (as shown by a SHOW VARIABLES ... query). This matches the size of the redo log folder (observed in the OS) exactly. (And no, I don't have innodb_redo_log_capacity set elsewhere in the ini file.) Only after I set the new variable, innodb_redo_log_capacity, in the ini file is the new value set in the running server's global variables. Am I missing something, or is this broken? FWIW, I'm on Windows Server 2019, but the ini file is Unix-formatted. The values defined in the ini file for the legacy variables are correctly read by the server (tested with non-default values).
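A sketch of the explicit setting (the value is illustrative; the variable is also dynamic in 8.0.30+, so it can be tested at runtime before touching the ini file):
# my.ini
[mysqld]
innodb_redo_log_capacity = 8G
At runtime: SET GLOBAL innodb_redo_log_capacity = 8589934592; then SHOW VARIABLES LIKE 'innodb_redo_log_capacity'; to confirm, and the #innodb_redo directory under the data directory should grow to match.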
myxal (43 rep)
Feb 17, 2023, 03:51 PM • Last activity: Feb 17, 2023, 05:56 PM
0 votes
1 answer
360 views
Cassandra has stopped producing log files
I've recently taken up the role of managing a Cassandra cluster that has been running in production for a few years. This is my first time working with Cassandra, so I would appreciate any insights. It is a 3-node cluster with 100% replication across each node. The nodes each have a load of around 5 TB. The config is not ideal, but they are working fine for now. However, **whenever there is an issue it is hard to troubleshoot, since I noticed the log files stopped updating a few years back.** I have made sure the logging directory was not changed, and all the logging levels are at their defaults. Memory and disk are still at reasonable usage levels. I noticed each of the logs has reached 10 zip files and stopped producing anything. Other than that I do not see the issue. I would appreciate the help. Thanks
Faisal Mushayt (3 rep)
Feb 12, 2023, 08:55 AM • Last activity: Feb 13, 2023, 02:54 AM
117 votes
5 answers
243098 views
How to safely change MySQL innodb variable 'innodb_log_file_size'?
So I'm fairly new to tuning InnoDB. I'm slowly changing tables (where necessary) from MyISAM to InnoDB. I've got about 100MB in InnoDB, so I increased the innodb_buffer_pool_size variable to 128MB:
mysql> show variables like 'innodb_buffer%';
+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| innodb_buffer_pool_size | 134217728 |
+-------------------------+-----------+
1 row in set (0.00 sec)
When I went to change the innodb_log_file_size value (the example my.cnf on MySQL's InnoDB configuration page suggests setting the log file size to 25% of the buffer size), my my.cnf ended up looking like this:
# innodb
innodb_buffer_pool_size = 128M
innodb_log_file_size = 32M
When I restart the server, I get this error:
> 110216 9:48:41 InnoDB: Initializing buffer pool, size = 128.0M
> 110216 9:48:41 InnoDB: Completed initialization of buffer pool
> InnoDB: Error: log file ./ib_logfile0 is of different size 0 5242880 bytes
> InnoDB: than specified in the .cnf file 0 33554432 bytes!
> 110216 9:48:41 [ERROR] Plugin 'InnoDB' init function returned error.
> 110216 9:48:41 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
So my question: Is it safe to delete the old log files, or is there another method to change the innodb_log_file_size variable?
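A sketch of the procedure commonly used on these older versions; the key point is a clean shutdown so nothing unflushed remains in the redo logs (from MySQL 5.6.8 onward the server resizes the files itself on restart, so only the config change is needed):
-- make sure the next shutdown flushes everything out of the redo logs
SET GLOBAL innodb_fast_shutdown = 1;   -- 0 is also fine; do NOT use 2 for this procedure
Then stop mysqld cleanly, move the old ib_logfile0/ib_logfile1 aside (keep a copy rather than deleting them outright), set innodb_log_file_size = 32M in my.cnf, and start mysqld; it recreates the log files at the new size.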
Derek Downey (23568 rep)
Feb 16, 2011, 03:01 PM • Last activity: Feb 9, 2023, 12:33 AM
3 votes
1 answer
484 views
PostgreSQL time series table with composite partitions
I am investigating how I might structure a PostgreSQL table to store a large amount of time-stamped data that also needs to be partitioned by another field. My expected data structure will be something like:
CREATE TABLE event (
    event_time	     timestamp not null,
    object_sha       char(64) not null,       -- sha256 as hex digits
    username         text not null,           -- actual name, not a foreign key
    payload          jsonb not null           -- many other fields, not indexed
);
Events will be written into the table at a fast rate, possibly as high as 1000 per second, stored for around 6 months, and then exported to cheaper storage (perhaps pg_dump files in an S3 bucket). So it would make sense to use declarative partitioning on the event time, using pg_partman to create and manage new partition tables each week or so. However, there is a strong requirement to run queries on the data by object_sha, and my concern is that if the data is partitioned only by timestamp, then the most recent partition table will be under heavy I/O load and might not keep up. It therefore appears logical to me that the data should also be partitioned by a prefix of object_sha (perhaps the first one or two hex digits), as that way reads and writes will be evenly distributed across many tables. My questions are:
1. The pg_partman documentation says [Sub-partitioning with multiple levels is supported, but it is of very limited use in PostgreSQL and provides next to NO PERFORMANCE BENEFIT](https://github.com/pgpartman/pg_partman/blob/master/doc/pg_partman.md#sub-partitioning). Why is that? Is that advice up to date and correct? I can't see how it would be a bad idea.
2. All the queries that I plan to run will always include order by event_time desc limit 50 or suchlike, and most of the time they will be satisfied by recent events in one or two recent tables. Is the PostgreSQL query planner smart enough to limit the query to the most recent table, and only look in older tables if it does not find enough results?
3. Given that my front-end application will know more about the data and queries than Postgres, would it make sense to write to, or query, the partition tables directly instead of sending insert and query statements to the top-level table?
Please note that I am aware that PostgreSQL might not be the best way to solve this data storage problem, and that NoSQL alternatives exist. I have been tasked with investigating using a relational database; my team mates are looking into other technologies as well.
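A sketch of the composite layout under discussion, using plain declarative partitioning (hash sub-partitions on object_sha are shown instead of hex-prefix list partitions purely for brevity; names and the week boundary are illustrative):
CREATE TABLE event (
    event_time  timestamptz not null,
    object_sha  char(64) not null,
    username    text not null,
    payload     jsonb not null
) PARTITION BY RANGE (event_time);
-- one week of data, itself split by hash of object_sha
CREATE TABLE event_2023_w03 PARTITION OF event
    FOR VALUES FROM ('2023-01-16') TO ('2023-01-23')
    PARTITION BY HASH (object_sha);
CREATE TABLE event_2023_w03_h0 PARTITION OF event_2023_w03
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
-- ...repeat for remainders 1 to 3, and index each leaf on (object_sha, event_time)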
chrestomanci (45 rep)
Jan 13, 2023, 10:11 AM • Last activity: Jan 13, 2023, 05:15 PM