Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
1374
views
MySQL (MariaDB) won't start after power outage (macOS)
/var/log/system.log says (over and over)
Oct 13 19:34:01 Data-Server com.apple.xpc.launchd[1] (com.mariadb.server[2128]): Service exited with abnormal code: 1
Oct 13 19:34:01 Data-Server com.apple.xpc.launchd[1] (com.mariadb.server): Service only ran for 0 seconds. Pushing respawn out by 10 seconds.
/usr/local/var/mysql/Data-Server.local.err says (once recently, repeated a number of times well before the crash)
2020-10-13 2:44:25 20019181 [Warning] Aborted connection 20019181 to db: 'EcoReality' user: 'root' host: '10.1.2.2' (Got timeout reading communication packets)
The first thing I did was to shut down the launchctl entry, to keep it from constantly restarting.
# launchctl unload /Library/LaunchDaemons/com.mariadb.server.plist
Then I tried invoking mysqld manually:
# sudo /usr/local/bin/mysqld -u mysql
2020-10-13 20:46:09 0 [Note] /usr/local/bin/mysqld (mysqld 10.4.6-MariaDB) starting as process 2364 ...
2020-10-13 20:46:09 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-10-13 20:46:09 0 [Note] InnoDB: Uses event mutexes
2020-10-13 20:46:09 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2020-10-13 20:46:09 0 [Note] InnoDB: Number of pools: 1
2020-10-13 20:46:09 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-10-13 20:46:09 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2020-10-13 20:46:09 0 [Note] InnoDB: Completed initialization of buffer pool
2020-10-13 20:46:09 0 [ERROR] InnoDB: Invalid log block checksum. block: 81635496 checkpoint no: 2609153 expected: 296846624 found: 3735928559
2020-10-13 20:46:09 0 [ERROR] InnoDB: Missing MLOG_CHECKPOINT at 41797373564 between the checkpoint 41797373564 and the end 41797373440.
2020-10-13 20:46:09 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2020-10-13 20:46:09 0 [Note] InnoDB: Starting shutdown...
2020-10-13 20:46:09 0 [ERROR] Plugin 'InnoDB' init function returned error.
2020-10-13 20:46:09 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2020-10-13 20:46:09 0 [Note] Plugin 'FEEDBACK' is disabled.
2020-10-13 20:46:09 0 [Note] CONNECT: Version 1.06.0009 January 27, 2019
2020-10-13 20:46:09 0 [ERROR] Unknown/unsupported storage engine: InnoDB
2020-10-13 20:46:09 0 [ERROR] Aborting
So now I'm a bit stumped at the lack of diagnostic messages. Is there any way to coax more info out of mysqld when it goes down? Or should I just start incrementing innodb_force_recovery
until something interesting happens?
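Incrementing `innodb_force_recovery` one level at a time is indeed the usual next step once a file-level copy of the datadir exists. A hedged `my.cnf` sketch (levels and caveats per the MariaDB/MySQL documentation; the value shown is only a starting point):

```ini
[mysqld]
# Try 1 first, then raise by one level per failed start.
# Levels 1-3 are relatively safe; 4-6 can permanently lose data,
# so take a fresh file-level backup of the datadir before each attempt.
innodb_force_recovery = 1
```

If the server comes up at some level, dump everything with mysqldump, then rebuild the instance with the setting removed rather than running with it permanently.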
Jan Steinman
(191 rep)
Oct 14, 2020, 04:01 AM
• Last activity: Aug 5, 2025, 07:01 PM
1
votes
1
answers
175
views
Cassandra system.log shows nodetool tpstats output
We have a Docker Cassandra container running 6.8.21. At times when the container is restarted, the logs show the usual startup activities and everything just works well after that.
But on a few occasions we observed that an output similar to that of nodetool tpstats keeps getting printed over and over again. Posting a sample of the output below.
Need help:
1. Why is Cassandra printing this? Is there some bottleneck that it has hit?
2. What is this log trying to tell us?
3. What should be the next steps in our investigation?
Appreciate any help here.
Log:
INFO [OptionalTasks:1] 2022-08-11 01:33:55,678 StatusLogger.java:174 -
Pool Name Active Pending Backpressure Delayed Shared Stolen Completed Blocked All Time Blocked
AuthzStage 0 0 N/A N/A N/A N/A 0 0 0
LwtStage 0 0 N/A N/A N/A N/A 0 0 0
RequestResponseStage 0 0 N/A N/A N/A N/A 0 0 0
AntiCompactionExecutor 0 0 N/A N/A N/A N/A 0 0 0
AntiEntropyStage 0 0 N/A N/A N/A N/A 0 0 0
BackgroundIoStage 0 0 N/A N/A N/A N/A 0 0 0
CacheCleanupExecutor 0 0 N/A N/A N/A N/A 0 0 0
CompactionExecutor 0 0 N/A N/A N/A N/A 77 0 0
GossipStage 0 0 N/A N/A N/A N/A 0 0 0
HintsDispatcher 0 0 N/A N/A N/A N/A 0 0 0
InternalResponseStage 0 0 N/A N/A N/A N/A 0 0 0
MemtableFlushWriter 0 0 N/A N/A N/A N/A 78 0 0
MemtablePostFlush 0 0 N/A N/A N/A N/A 203 0 0
MemtableReclaimMemory 0 0 N/A N/A N/A N/A 78 0 0
MigrationStage 0 0 N/A N/A N/A N/A 0 0 0
MiscStage 0 0 N/A N/A N/A N/A 0 0 0
PendingRangeCalculator 0 0 N/A N/A N/A N/A 1 0 0
PerDiskMemtableFlushWriter_0 0 0 N/A N/A N/A N/A 39 0 0
Repair-Task 0 0 N/A N/A N/A N/A 0 0 0
Sampler 0 0 N/A N/A N/A N/A 0 0 0
SecondaryIndexManagement 0 0 N/A N/A N/A N/A 0 0 0
TPC/all N/A N/A N/A N/A N/A N/A N/A N/A N/A
TPC/all/AUTHENTICATION 0 N/A N/A N/A 0 0 8 N/A N/A
TPC/all/AUTHORIZATION 0 N/A N/A N/A 0 0 9 N/A N/A
TPC/all/BACKPRESSURED_MESSAGE_DECODE 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/BACKPRESSURE_RESCHEDULE 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/BACKUP_OPERATION 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/BATCH_REMOVE 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/BATCH_REPLAY 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/BATCH_STORE 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/BATCH_STORE_RESPONSE 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/BLOCKING_OPERATION_RESTART 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/BUFFER_POOL_CLEANUP 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/CLEAN_BACKUPS_OPERATION 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/CONTINUOUS_PAGING 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/COUNTER_ACQUIRE_LOCK 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/EXECUTE_STATEMENT 0 N/A N/A N/A 0 0 38 N/A N/A
TPC/all/FRAME_DECODE 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/HINT_DISPATCH 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/HINT_RESPONSE 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/INCREMENTAL_NODESYNC_MAINTENANCE 0 N/A N/A N/A 0 0 1271 N/A N/A
TPC/all/LWT_COMMIT 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/LWT_PREPARE 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/LWT_PREPARE_AWAIT_LOCK 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/LWT_PROPOSE 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/LWT_PROPOSE_AWAIT_LOCK 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/NETWORK_BACKPRESSURE 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/NODESYNC_VALIDATION 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/OUTBOUND_FLUSH 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/OUTBOUND_SCHEDULED_FLUSH 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/PARALLEL_MESSAGE_DECODE 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/POPULATE_VIRTUAL_TABLE 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/READ_DISK_ASYNC 0 N/A N/A N/A 0 0 36 N/A N/A
TPC/all/READ_DISK_RETRY 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/READ_DISK_WOULD_BLOCK 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/READ_INTERNAL 0 N/A N/A N/A 0 0 13 N/A N/A
TPC/all/READ_LOCAL 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/READ_RANGE_INTERNAL 0 N/A N/A N/A 0 0 22 N/A N/A
TPC/all/READ_RANGE_LOCAL 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/READ_RANGE_NODESYNC 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/READ_RANGE_REMOTE 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/READ_RANGE_RESPONSE 0 N/A N/A N/A 0 0 22 N/A N/A
TPC/all/READ_REMOTE 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/READ_RESPONSE 0 N/A N/A N/A 0 0 13 N/A N/A
TPC/all/READ_SECONDARY_INDEX 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/READ_SPECULATE 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/READ_SWITCH_FOR_ITERATOR 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/READ_TIMEOUT 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/RESTORE_OPERATION 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/SAVE_LOCAL_NODE 0 N/A N/A N/A 0 0 7 N/A N/A
TPC/all/SAVE_PEER_NODES 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/SOLR_ACQUIRE_LOCK 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/SOLR_INDEXING 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/SYSTEM_KEYSPACE_ACQUIRE_LOCK 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/TIMED_TIMEOUT 0 N/A N/A N/A 0 0 2331 N/A N/A
TPC/all/TIMED_UNKNOWN 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/TRUNCATE 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/UNKNOWN 0 N/A N/A N/A 0 0 49 N/A N/A
TPC/all/UPDATE_NOTIFICATION 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/VIEW_ACQUIRE_LOCK 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/WRITE_AWAIT_COMMITLOG_SEGMENT 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/WRITE_AWAIT_COMMITLOG_SYNC 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/WRITE_DEFRAGMENT 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/WRITE_INTERNAL 0 N/A N/A N/A 0 0 3 N/A N/A
TPC/all/WRITE_LOCAL 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/WRITE_MEMTABLE_FULL 0 N/A N/A N/A 0 0 0 N/A N/A
TPC/all/WRITE_REMOTE 0 0 N/A N/A 0 0 0 N/A 0
TPC/all/WRITE_RESPONSE 0 N/A N/A N/A 0 0 3 N/A N/A
TPC/all/WRITE_SWITCH_FOR_MEMTABLE 0 N/A N/A N/A 0 0 372 N/A N/A
ValidationExecutor 0 0 N/A N/A N/A N/A 0 0 0
ViewBuildExecutor 0 0 N/A N/A N/A N/A 0 0 0
CompactionManager 0 0
MessagingService n/a 0/0
Buffer pool size for cached reads: 4.000MiB, for direct reads: 4.000MiB
Cached reads buffer pool BufferPool for long lived buffers: Size-tiered from 4.000KiB to 4.000KiB buffers, using 1024 buffers per slab.
Sub pools:
Buffer size 4.000KiB: 584.000KiB used, 4.000MiB allocated, 1 slabs.
Overflow: 0.000KiB
Direct reads buffer pool BufferPool for temporary buffers: allocated 4.000MiB, used 0.000KiB, overflow 0.000KiB, overflow allocations mean rate 0.000000
Global memtable buffer pool size: onHeap = 3.402KiB, offHeap = 2.029KiB
Cache Type Size Capacity KeysToSave
KeyCache 0 104857600 all
RowCache 0 0 all
Table Memtable ops,data
system_schema.keyspaces 0,0
system_schema.hidden_columns 0,0
system_schema.edges 0,0
system_schema.views 0,0
system_schema.columns 0,0
system_schema.types 0,0
system_schema.indexes 0,0
system_schema.vertices 0,0
system_schema.dropped_columns 0,0
system_schema.aggregates 0,0
system_schema.triggers 0,0
system_schema.tables 0,0
system_schema.functions 0,0
system_auth.roles 2,262
system_auth.role_members 0,0
system_auth.role_permissions 1,199
dse_system_local.solr_resources 0,0
dse_system.encrypted_keys 0,0
dse_leases.logs 0,0
dse_leases.leases 0,0
solr_admin.solr_resources 0,0
dse_insights.insights_config 0,0
dse_insights.tokens 0,0
dse_insights_local.insights_config 0,0
system_distributed.nodesync_status 0,0
system_distributed.nodesync_user_validations 0,0
system_distributed.parent_repair_history 0,0
system_distributed.repair_history 0,0
system_distributed.restore_task_state 0,0
system_distributed.backups 0,0
system_distributed.backup_task_state 0,0
system_distributed.view_build_status 0,0
system_backups.backup_stores 0,0
system_backups.backup_configurations 0,0
dse_perf.node_slow_log 0,0
system_traces.sessions 0,0
system_traces.events 0,0
dse_security.digest_tokens 0,0
dse_security.role_options 0,0
dse_security.spark_security 0,0
axone.config_context 0,0
axone.test_hc 0,0
axone.config 0,0
axone.valid_context 0,0
system.compaction_history 1,254
system.IndexInfo 0,0
system.repairs 0,0
system.size_estimates 0,0
system.paxos 0,0
system.built_views 0,0
system.peer_events 0,0
system.range_xfers 0,0
system.peers 0,0
system.batches 0,0
system.transferred_ranges 0,0
system.view_builds_in_progress 0,0
system.local 0,0
system.sstable_activity 4,0
system.nodesync_checkpoints 0,0
system.available_ranges 0,0
system.prepared_statements 5,655
Sakiv
(11 rep)
Aug 21, 2022, 07:06 PM
• Last activity: Jul 15, 2025, 05:04 AM
2
votes
1
answers
2021
views
Out of memory in MySQL error log
What should I check to solve this error that appears in the error log?
Do I need to add RAM to the server or adjust some configuration variable?
I copy part of the log:
> 180102 10:45:53 InnoDB: Started; log sequence number 0 267632652
> mysql3: Out of memory (Needed 2299002880 bytes)
>
> 180102 10:45:54 [Note] mysql3: ready for connections. Version:
> '5.0.27-community-log' socket: '' port: 3307 MySQL Community
> Edition (GPL)
>
> 180102 10:46:14 [ERROR] Cannot find table
> test/cliconcepw from the internal data dictionary of InnoDB though the
> .frm file for the table exists. Maybe you have deleted and recreated
> InnoDB data files but have forgotten to delete the corresponding .frm
> files of InnoDB tables, or you have moved .frm files to another
> database?
>
> See http://dev.mysql.com/doc/refman/5.0/en/innodb-troubleshooting.html
> how you can resolve the problem.
>
> 180215 20:15:05 [ERROR] Out of memory; check if mysqld or some other
> process uses all available memory; if not, you may have to use
> 'ulimit' to allow mysqld to use more memory or you can add more swap
> space
-------------
Windows Server 2008 64 bits / 8 GB RAM
MySQL 5.0
-------------
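A hedged first check before adding RAM: a single failed allocation of roughly 2.1 GB usually points at one oversized buffer in `my.ini` rather than overall memory pressure (and on a 32-bit mysqld build, any single allocation that large fails regardless of installed RAM). The variable names below are standard MySQL 5.0 settings; the values are only illustrative:

```ini
[mysqld]
# The usual large consumers to audit against the server's 8 GB of RAM:
innodb_buffer_pool_size = 2048M  # main InnoDB cache
key_buffer_size         = 256M   # MyISAM index cache; a ~2.2 GB value here would match the error
sort_buffer_size        = 2M     # allocated per connection; keep small
max_connections         = 100    # multiplies the per-connection buffers
```

`SHOW VARIABLES LIKE '%buffer%';` shows the values actually in effect, which may differ from the file if the server was started with a different defaults file.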
Gaston
(21 rep)
Feb 16, 2018, 09:15 PM
• Last activity: Jun 24, 2025, 04:06 AM
0
votes
1
answers
628
views
mssql 2019 alwayson cluster backup log chain is broken
I have an MSSQL 2019 Always On cluster. I back up the database on the primary node using monthly full backups, weekly differential backups, and daily log backups. Unfortunately, the backup log chain is always broken, for both the log backups and the differential backups. I checked the log chain as shown below:
王文峰
(1 rep)
Jul 14, 2022, 02:50 AM
• Last activity: Jun 7, 2025, 08:05 AM
1
votes
1
answers
242
views
Is there a way to hide mysql innodb cluster queries from general_log?
When I start my InnoDB cluster, I get many lines of cluster-related logs in the MySQL general_log file like those below. How can I hide these from the log?
2020-06-26T14:40:43.719029Z 31774 Connect mysql_router1_6dv772yz7qu1@dbsg1 on using SSL/TLS
2020-06-26T14:40:43.719494Z 31774 Query SET @@SESSION.autocommit=1, @@SESSION.character_set_client=utf8, @@SESSION.character_set_re
sults=utf8, @@SESSION.character_set_connection=utf8, @@SESSION.sql_mode='ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION'
2020-06-26T14:40:43.719659Z 31773 Quit
2020-06-26T14:40:43.720231Z 31774 Query SET @@SESSION.group_replication_consistency='EVENTUAL'
2020-06-26T14:40:43.720509Z 31774 Query START TRANSACTION
2020-06-26T14:40:43.720740Z 31774 Query SELECT * FROM mysql_innodb_cluster_metadata.schema_version
2020-06-26T14:40:43.721185Z 31774 Query select cluster_type from mysql_innodb_cluster_metadata.v2_this_instance
2020-06-26T14:40:43.723301Z 31774 Query select I.mysql_server_uuid, I.endpoint, I.xendpoint from mysql_innodb_cluster_metadata.v2_instances I join mysql_innodb_cluster_metadata.v2_gr_clusters C on I.cluster_id = C.cluster_id where C.cluster_name = 'prodcluster' AND C.group_name = '8a70e310-a997-4d59-8e81-92dab9daa5be'
2020-06-26T14:40:43.723985Z 31774 Query COMMIT
2020-06-26T14:40:43.724317Z 31774 Query show status like 'group_replication_primary_member'
2020-06-26T14:40:43.725621Z 31774 Query SELECT member_id, member_host, member_port, member_state, @@group_replication_single_primary_mode FROM performance_schema.replication_group_members WHERE channel_name = 'group_replication_applier'
2020-06-26T14:40:43.793743Z 31771 Query START TRANSACTION
2020-06-26T14:40:43.933131Z 31770 Query SELECT * FROM mysql_innodb_cluster_metadata.schema_version
2020-06-26T14:40:44.032471Z 31771 Query SELECT * FROM mysql_innodb_cluster_metadata.schema_version
2020-06-26T14:40:44.171571Z 31770 Query select cluster_type from mysql_innodb_cluster_metadata.v2_this_instance
2020-06-26T14:40:44.226921Z 31774 Quit
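MySQL's general log has no per-account exclusion, so one hedged workaround is to filter the router's traffic out after the fact. The sketch below builds a tiny sample file so it is self-contained; the `mysql_router` account prefix is taken from the log excerpt above. Note the caveat: this only drops lines that name the account, so `Query` lines on the same connection id would need an id-based filter instead.

```shell
# Fabricated two-line excerpt in general_log format.
cat > general.log <<'EOF'
2020-06-26T14:40:43.719029Z 31774 Connect   mysql_router1_6dv772yz7qu1@dbsg1 on  using SSL/TLS
2020-06-26T14:40:45.000000Z 31780 Query     SELECT 1
EOF

# Drop every line mentioning the router account before shipping the log on.
grep -v 'mysql_router' general.log > filtered.log
cat filtered.log
```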
Ken
(21 rep)
Jun 26, 2020, 02:45 PM
• Last activity: Jun 5, 2025, 12:04 AM
5
votes
3
answers
1078
views
Limiting PostgreSQL logging
I have a table that can contain reasonably large `BYTEA` values (the same could apply to large `TEXT` values). It looks like this:
CREATE TABLE testtable (
id SERIAL PRIMARY KEY,
name TEXT UNIQUE NOT NULL, -- This is just an example: this could be the PK
value BYTEA -- This could be TEXT
);
If the application using this table tries to insert two rows using the same `name`, I get this error in the logs (this is an expected error, of course):
ERROR: duplicate key value violates unique constraint "testtable_name_key"
STATEMENT: INSERT INTO testtable(name, value) VALUES ('a', ...something rather long...);
While logging the error is useful, as well as logging the statement (and possibly the value for "name" in this particular context), logging the long `BYTEA` or `TEXT` value isn't. In fact, binary data in the logs are escaped in text form (e.g. `E'\\377\\327...'`), which effectively makes it take even more space than it would in binary form in the database. This can cause more problems than help when the value's size is tens of kB or more.
Is there a way to prevent these values from appearing in the logs when there is an error? I'm still interested in knowing there was an error (and preferably have some information about the statement). Of course, this is not just about this particular error and use case, but any error log that could potentially dump multiple kilobytes of uninteresting data in the logs.
Otherwise, is there any way to truncate the `STATEMENT: INSERT INTO ...` line? Failing that, would there be any way to keep the `ERROR:` line but remove the `STATEMENT:` line altogether?
I've tried to set `log_error_verbosity = terse` in `postgresql.conf`, but this doesn't seem to change anything for this.
(I'm also using `log_destination = 'syslog'` to log to rsyslog, so a solution that limits each log "bundle" to a few lines there would be a reasonable compromise.)
I'm using PostgreSQL 8.4, but solutions only present in newer versions would also be interesting.
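Since the logs already flow through rsyslog, one hedged option is to truncate each PostgreSQL message there rather than in PostgreSQL itself (8.4 has no statement-truncation setting). A legacy-syntax rsyslog sketch; the 1024-character limit is an arbitrary choice, and `& stop` needs rsyslog v7+ (older versions use `& ~`):

```
# /etc/rsyslog.d/pg-truncate.conf (sketch)
# %msg:1:1024% keeps only the first 1024 characters of each message.
$template PgShort,"%timegenerated% %syslogtag%%msg:1:1024%\n"
if $programname == 'postgres' then /var/log/pg-short.log;PgShort
& stop
```

This keeps the `ERROR:` line intact while cutting oversized `STATEMENT:` lines down to a bounded size.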
Bruno
(1557 rep)
Nov 28, 2013, 01:17 AM
• Last activity: May 3, 2025, 12:14 PM
1
votes
4
answers
1760
views
PostgreSQL logging: Statements splits into multiple lines
In PostgreSQL v10 and above, I activated the logging of the statements using the extension pg_stat_statements.
My configuration:
logging_collector = on
log_line_prefix = '%t [%p]: [%l-1] db=%d,user=%u,app=%a,client=%h '
log_destination = 'stderr,syslog'
log_statement = all
If I execute a simple query:
postgres=# select current_timestamp;
In the log it will show the prefix and the statement, something like this:
Line 1: (prefix) statement: select current_timestamp;
However, if I have a query that is split in multiple lines, for example:
select
current_timestamp;
In the log it will show 2 lines:
Line 1: (prefix) select
Line 2: current_timestamp;
This "current_timestamp" line is isolated: it doesn't have the prefix, and there is no way I can match it with the "select" part.
How can I configure so that multiple line statements are shown in a single line in the log?
Like this:
Statement:
select
current_timestamp;
Log:
Line 1: (prefix) statement: select current_timestamp;
I've tested changing to csvlog and I got the same result.
Why would I need this? I configured the logs to be sent to a central database for auditing purposes. These logs are exposed via Kibana dashboards. With several lines in the log for one statement, especially when the lines (except the first one) don't have the prefix, it is hard to find the full statement.
Thank you.
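PostgreSQL itself will not collapse the newlines, but csvlog quotes them, so a CSV-aware reader reassembles a multi-line statement into a single record before it is shipped to Kibana; a plain line-based shipper is what splits it. A minimal sketch using a fabricated csvlog-style row (real csvlog rows have more columns):

```python
import csv
import io

# Fabricated fragment in csvlog style: the newline sits inside the quoted
# message field, which is exactly what line-based log shippers break on.
sample = '2022-03-22 14:47:00 UTC,"postgres","postgres",1234,"statement: select\ncurrent_timestamp;"\r\n'

# csv.reader honors the quoting, so the statement comes back as one field.
rows = list(csv.reader(io.StringIO(sample)))
statement = rows[0][4]
print(statement)  # one logical record, embedded newline preserved
```

The same idea applies in a Logstash/Filebeat pipeline: parse the csvlog file with a CSV codec (or a multiline rule keyed on the timestamp prefix) instead of treating each physical line as an event.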
dcop7
(29 rep)
Mar 22, 2022, 02:47 PM
• Last activity: Apr 19, 2025, 08:06 PM
0
votes
1
answers
483
views
LSN numbers query regarding Differential backups
We have the following backup regime in place which I am not 100% sure about.
- Full backup each night at 18:00 taking approx. 6 hours
- Transaction log backup upon completion of full generally about 10 mins.
- Differential backups each hour taking approx. 20 mins.
When querying the msdb.backupset table, I understand that the first_lsn will only be reset by a transaction log backup, but am slightly confused that the last_lsn entry is incremented each time the differential backup runs.
This seems to imply that the differential backup also contains transaction log entries (which I don't believe it does).
Is this entry in msdb.backupset just there to tell a restore which transaction log backup to use to preserve the chain of transactions?
Finally, if we only have a transaction log backup once a night and differentials throughout the day, we are still looking at potentially a maximum of 24 hours of data loss despite hourly differentials.
Apologies if this seems like a lot of questions, but I'm trying to get this clear in my head.
Thanks for listening.
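For inspecting how the chain fits together, a hedged T-SQL sketch over msdb (the database name is a placeholder; on a differential or log row, `database_backup_lsn` points back at the full backup that row depends on, which is the relationship the incrementing `last_lsn` values reflect):

```sql
SELECT TOP (20)
       backup_finish_date,
       type,                  -- D = full, I = differential, L = log
       first_lsn,
       last_lsn,
       database_backup_lsn    -- LSN of the base full backup this row depends on
FROM msdb.dbo.backupset
WHERE database_name = 'YourDb'
ORDER BY backup_finish_date DESC;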
zapcon
(73 rep)
Jun 1, 2017, 11:16 AM
• Last activity: Apr 3, 2025, 07:18 AM
15
votes
6
answers
21779
views
Disable Binary Logs in MySQL 8.0
How can I disable binary logging on MySQL 8.0?
I need to import a large mysqldump file, and don't want to fill the binary logs with many GB of data. In the past I would have set `log_bin=0` in `my.cnf` and restarted the database.
But with MySQL 8.0 I have tried various settings, but every time I log onto the database and run:
SELECT @@global.log_bin;
it keeps returning 1.
I have tried:
log_bin = 0
log_bin = OFF
#log_bin = 1
Is it possible to disable the binary logs?
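For reference, in MySQL 8.0 the binary log is on by default and `log_bin=0` is not read as an off switch (the startup option takes a log basename, not a boolean); the documented way to turn it off is a separate option. A hedged `my.cnf` sketch:

```ini
[mysqld]
# Documented switch to disable binary logging in MySQL 8.0:
disable_log_bin
# (skip-log-bin is an equivalent spelling)
```

For a one-off import without a restart, `SET sql_log_bin = 0;` in the importing session skips binary logging for that session only; it requires a sufficiently privileged account.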
IGGt
(2276 rep)
Jan 3, 2019, 04:49 PM
• Last activity: Mar 14, 2025, 03:56 PM
1
votes
5
answers
6110
views
PostgreSQL will stop logging when logrotate rotate postgresql log file
I have installed
PostgreSQL 10.6 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
And configured:
log_filename = 'postgresql.log'
But I would like to manage log rotation with logrotate instead of PostgreSQL's internal log rotation:
#vim /etc/logrotate.d/postgresql
/var/lib/pgsql/data/log/postgresql.log {
daily
rotate 7
compress
create 0664 postgres postgres
missingok
notifempty
sharedscripts
}
logrotate works as expected, but PostgreSQL stops logging into the rotated /var/lib/pgsql/data/log/postgresql.log file. The only resolution is `systemctl restart postgresql`, which could be placed inside a logrotate `postrotate`/`endscript` block, but I can't do it this way on a critical production system.
Do you have any advice on how to properly rotate the PostgreSQL log file without needing to restart PostgreSQL?
Thanks!
**UPDATE:**
I found a workaround to make PostgreSQL start logging into the rotated *postgresql.log* file again: modify the config file (for example, comment out or change *log_filename*), reload PostgreSQL, then revert the change and reload again, since PostgreSQL looks for configuration changes on reload. But I would be glad if there is a better solution than touching the configuration file.
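A commonly used alternative that avoids touching PostgreSQL at all is logrotate's `copytruncate`, which copies the file and then truncates it in place, so the server keeps writing to the same open file descriptor; the trade-off is that a few lines written during the copy window can be lost. A sketch of the adjusted logrotate stanza:

```
/var/lib/pgsql/data/log/postgresql.log {
    daily
    rotate 7
    compress
    copytruncate     # keep the original inode; no create/postrotate needed
    missingok
    notifempty
}
```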
MyKE
(121 rep)
Oct 23, 2019, 01:10 PM
• Last activity: Feb 25, 2025, 11:29 PM
1
votes
1
answers
912
views
PostgreSQL's PGP_SYM_ENCRYPT log security
In one of our projects, I tried to use PostgreSQL's security extension `pgcrypto`. I found that its function `PGP_SYM_ENCRYPT` is quite handy. Using this function enables me to query encrypted data with `LIKE '%keyword%'`, `BETWEEN`, etc., which I cannot achieve if I use a programming language such as Java or PHP to encrypt the data and then insert it into the database.
However, I found an issue. Once I execute queries with `PGP_SYM_ENCRYPT`, a new line is written in the log file. Interestingly, the log includes the password in PLAIN text!
Of course, IMO, this is unacceptable. I want to secure my password, and at the same time I also want to log any change (INSERT, UPDATE, DELETE) to my database.
Is there any way to **hide** this password information from the log file?
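One hedged mitigation is to turn off statement logging for just the application role that runs the pgcrypto queries, so the key never reaches the log while other roles stay fully audited (`app_user` is a placeholder role name; changing `log_statement` this way requires superuser, and the setting takes effect at the role's next login):

```sql
ALTER ROLE app_user SET log_statement = 'none';
```

The trade-off is that this also hides that role's INSERT/UPDATE/DELETE statements from the log, so change auditing for that role would have to move elsewhere, e.g. into audit triggers that record rows rather than statement text.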
Amri Shodiq
(11 rep)
Feb 19, 2019, 06:23 AM
• Last activity: Feb 14, 2025, 03:07 PM
0
votes
2
answers
7210
views
How to upgrade MySQL from 8.0 to 8.4?
I'm on Windows. I installed MySQL 8.4, stopped all services, and copied the Data folder and my.ini file into the 8.4 folder, but now the 8.4 service does not start and does not produce any logs. I can't find an answer that doesn't involve either replacing the data (i.e. what I did) or replacing the binaries (which is what I did, just the other way around). What is the correct way to do this?
John John
Jul 1, 2024, 10:17 AM
• Last activity: Jan 20, 2025, 02:03 PM
5
votes
3
answers
4694
views
MySQL general log
Is there a way to have MySQL dump its log to a remote MySQL server? I would like to audit MySQL connections, but I don't want the server to store all the information locally on its own hardware.
MrNemus
(153 rep)
Feb 10, 2012, 11:54 PM
• Last activity: Jan 8, 2025, 07:08 AM
8
votes
2
answers
14890
views
PostgreSQL checkpoint log explained
I know what PostgreSQL checkpoint is and when it is happening.
I need some additional information about the logs produced by the `log_checkpoints = on` parameter, so please explain some points of it to me:
2017-09-09 16:31:37 EEST [6428-6524] LOG: checkpoint complete: wrote 30057 buffers (22.9%); 0 transaction log file(s) added, 0 removed, 47 recycled; write=148.465 s, sync=34.339 s, total=182.814 s; sync files=159, longest=16.143 s, average=0.215 s
1. I know that 22.9% of shared buffers are written (I have 1024 MB `shared_buffers`, so that means 234 MB are written out).
2. I know that 47 WAL files are recycled, i.e., they are not needed anymore for crash recovery, because the real data from them is already on disk.
**Question A**. But what about `write=148.465 s` and `sync=34.339 s`? What is the difference? What is `write`, and why is its time far more than the `fsync()` operation?
**Question B**. What are `sync files`? Which files: WAL files? Why are there 159 `sync files` but only 47 recycled files? What is the relation between these?
Thank you!
inivanoff1
(183 rep)
Sep 9, 2017, 02:11 PM
• Last activity: Oct 28, 2024, 07:32 PM
13
votes
2
answers
29687
views
Logging to a file in PostgreSQL windows service
I need an alternative for running the following command: `"C:\xxx\pgsql\bin\pg_ctl" -D "C:\xxx\pgsql\data" -l "C:\yyy\log\pgsql.log" start`.
This way the server starts and logs to `C:\yyy\log\pgsql.log`.
When I try to register the server as a service though - no logging options are available and the server logs to Event Viewer.
From pg_ctl documentation:
> pg_ctl register [-N servicename] [-U username] [-P password] [-D datadir] [-S a[uto] | d[emand] ] [-w] [-t seconds] [-s] [-o options]
How can I force the server to log into a file?
Note: I don't want to use one-click installer, I just want to work with unzipped binaries.
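One hedged route that works with plain unzipped binaries: the logging collector is configured in postgresql.conf, so it applies no matter how the server is started, including via the registered service. A sketch (the directory is an assumption matching the path in the question; forward slashes are accepted on Windows):

```
# postgresql.conf
logging_collector = on
log_directory = 'C:/yyy/log'            # absolute path outside the data dir
log_filename = 'pgsql-%Y-%m-%d.log'
```

Startup messages emitted before the collector initializes may still land in the Event Log, but the server log itself goes to the file.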
Vic
(449 rep)
Nov 27, 2012, 09:36 PM
• Last activity: Apr 18, 2024, 06:16 AM
0
votes
1
answers
2399
views
Show recent select queries on Firebird database file
Is there a way to log or query the n recent SELECT queries to an FDB database file? I have already tried using the MON$ and RDB$ tables but I didn't find anything useful.
arielnmz
(113 rep)
Nov 29, 2017, 06:46 AM
• Last activity: Jan 18, 2024, 03:04 PM
20
votes
8
answers
134380
views
Error Log Location?
Does MySQL write any log file? If yes, what is the path of it (on ubuntu/Linux)?
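Yes; a hedged first step is to ask the server itself where it writes, since packaging differs between distributions (Ubuntu's MySQL packages commonly use /var/log/mysql/error.log):

```sql
SHOW VARIABLES LIKE 'log_error';
-- General and slow query logs, if enabled:
SHOW VARIABLES LIKE 'general_log%';
SHOW VARIABLES LIKE 'slow_query_log%';
```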
Maulik Patel
(313 rep)
Aug 2, 2016, 01:33 PM
• Last activity: Jan 16, 2024, 04:12 PM
7
votes
3
answers
8702
views
How to log DML statements executed by a pl/pgsql function?
I have a pl/pgsql function (see below) that lists some fields and clears their contents using dynamically constructed UPDATE commands.
When I set `log_statement = 'mod'`, I can see nothing in the log upon execution of the function with `SELECT fnct_clear_temp_fields();`.
When I set `log_statement = 'all'` and execute the function, I can see `SELECT fnct_clear_temp_fields();` in the log, but not the underlying UPDATE commands.
Is there a way to have the UPDATE commands appear in the log as well?
For info, here is the function:
CREATE OR REPLACE FUNCTION fnct_clear_temp_fields() RETURNS VOID AS $$
DECLARE
--Put into a cursor a view dynamically listing all user-defined fields beginning with 'temp_'
dataset_1 CURSOR FOR
SELECT
column_name,
table_name
FROM information_schema.tables
NATURAL JOIN information_schema.columns
WHERE
table_schema='public'
AND table_type='BASE TABLE'
AND column_name ~ '^temp_'
ORDER BY table_name,column_name;
--Record variable to go through each line of the view above
dataset_1_row RECORD;
BEGIN
OPEN dataset_1; --Open the cursor
FETCH dataset_1 INTO dataset_1_row; --first row of the view
WHILE FOUND LOOP
RAISE NOTICE 'Table: %, Column: %', dataset_1_row.table_name,dataset_1_row.column_name;
--Set to NULL the contents of the current 'temp_' column
EXECUTE 'UPDATE '||dataset_1_row.table_name||' SET '||dataset_1_row.column_name||'=NULL WHERE '||dataset_1_row.column_name||' IS NOT NULL';
FETCH dataset_1 INTO dataset_1_row; --next row please.
END LOOP; --while end
CLOSE dataset_1;
RETURN;
END;
$$ LANGUAGE plpgsql;
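For the record, `log_statement` only captures top-level client statements, not SQL executed inside functions. A hedged configuration sketch using the contrib module auto_explain, whose `log_nested_statements` option does log statements run from within pl/pgsql (including `EXECUTE`d dynamic commands like the UPDATEs above):

```
# postgresql.conf (changing shared_preload_libraries requires a restart)
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = 0        # log every statement; raise in production
auto_explain.log_nested_statements = on  # include SQL executed inside functions
```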
Sébastien Clément
(1825 rep)
Jul 10, 2014, 03:55 PM
• Last activity: Jul 17, 2023, 10:27 PM
0
votes
2
answers
1114
views
What's inside a redo record in Oracle's Redo log?
While reading the Redo log wiki page, I was confronted with the following statement:
> For example, if a user `UPDATE`s a salary-value in a table containing employee-related data, the DBMS generates a redo record containing change-vectors that describe changes to the data segment block for the table. And if the user then `COMMIT`s the update, Oracle generates another redo record and assigns the change a "system change number" (SCN).
In what way do change-vectors describe changes to the data? Do they contain a copy of the old and the updated data, or just the SQL statements? I also don't understand the difference between the first generated redo record and the one generated after the update is commited.
Mehdi Charife
(131 rep)
May 6, 2023, 02:10 PM
• Last activity: May 8, 2023, 08:09 AM
2
votes
1
answers
1938
views
Trying to restore a db2 database but stops at roll-forward with error 'SQL1265N'
I'm trying to restore a backup that is older than the current database.
The restore went ok, but the roll forward didn't giving the following error:
> SQL1265N The archive log file ".LOG" is not associated with the
> current log sequence for database "" on node "0".
I tried to roll forward to an earlier point in time, but it is required to restore to the end of the logs.
The backup is from another server, but it's actually a duplicate (so the name etc. are the same).
What steps can I take to resolve this?
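If the matching archive logs from the source server are available but not in the database's active log path, a hedged DB2 CLP sketch points the rollforward at them explicitly (database name and path are placeholders):

```
db2 rollforward db MYDB to end of logs and stop overflow log path (/backup/archlogs)
```

The SQL1265N message itself usually means the logs on hand belong to a different log chain than the restored backup expects, so the fix is supplying the archive logs taken on the same server/chain as the backup image.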
davejal
(236 rep)
Feb 5, 2016, 01:36 PM
• Last activity: Mar 18, 2023, 03:03 PM
Showing page 1 of 20 total questions