
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
2 answers
2792 views
How do I mount a disk on the /var/log directory even if I have processes writing to it?
I would like to mount a disk on /var/log. The thing is, there are some processes/services writing into it, such as openvpn or the system logs. Is there a way to mount the filesystem without having to restart the machine or stop the services? Many thanks
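(A hedged outline of one common approach, not taken from the question; the device name and paths below are examples. The catch is that processes which already hold files open under /var/log keep writing to the old, now-hidden files until they reopen them, so at least a reload of the log daemons is usually needed.)

```
# stage the data on the new disk, then mount it over /var/log
mkdir -p /mnt/newlog
mount /dev/sdb1 /mnt/newlog              # example device
rsync -aHAX /var/log/ /mnt/newlog/
umount /mnt/newlog
mount /dev/sdb1 /var/log

# ask writers to reopen their files instead of rebooting
systemctl kill -s HUP rsyslog 2>/dev/null || pkill -HUP rsyslogd
# other writers (openvpn, etc.) may need a similar reload/SIGHUP
```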
LinuxEnthusiast (1 rep)
Aug 10, 2020, 10:10 AM • Last activity: Aug 1, 2025, 11:02 PM
0 votes
1 answer
2318 views
Debian 11 - audit logs appearing in /var/log/auth
I'm on a Debian 11 server and my audit logs are going into /var/log/audit/audit.log as well as into /var/log/auth.log. They are filling up my auth.log and they really should not be going there. Below are the relevant portions of my configs:

/etc/rsyslog.conf

kern.debug /var/log/kern.log
daemon.* /var/log/daemon.log
*.info;cron,auth,authpriv.none /var/log/syslog
cron.* /var/log/cron.log
user.* /var/log/user.log
auth,authpriv.* /var/log/auth.log

/etc/audit/auditd.conf

log_file = /var/log/audit.log

I'm at a bit of a loss here as to what to do. How do I get my audit logs to go to /var/log/audit/audit.log only?
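(Two hedged things to check, sketched below: whether an audit-to-syslog plugin is feeding the records into rsyslog in the first place, and, failing that, a drop rule placed before anything that writes auth.log. The match string is an assumption to adapt to the actual messages.)

```
# is the auditd syslog plugin active? (path differs between audit versions)
grep -H '^active' /etc/audit/plugins.d/syslog.conf /etc/audisp/plugins.d/syslog.conf 2>/dev/null

# or drop the records in rsyslog before they reach auth.log, e.g. in an
# early-sorting file under /etc/rsyslog.d/:
#   if $msg contains "audit(" and $msg contains " type=" then stop
```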
kathyl (46 rep)
Mar 14, 2023, 08:53 AM • Last activity: Jul 31, 2025, 06:05 AM
5 votes
1 answer
2612 views
Limit Openldap Transaction Log Disk Usage
Openldap (specifically version 2.4) stores transaction history in log files, by default in the ldap data directory (so /var/lib/ldap/log.###########). Currently these log files take up a lot of space, are never removed automatically, and grow indefinitely. Manual removal of old logs works fine, but I'd like to limit the amount of logs slapd keeps automatically.

# MY SCENARIO #

I know that these transaction logs are used to recover ldap in case of a catastrophic failure. In my scenario ldap is regularly wiped and populated via a script (this isn't used for system login accounts). Because of this I don't need to concern myself with recovery; in case of a failure it's acceptable to run the script again. On the other hand, the regular wipe/population of ldap includes a lot of transactions, so these transaction logs build up pretty quickly.

# LOGROTATE #

logrotate has potential here, but if the most recent transaction log is ever removed then slapd will fail to start (it will complain about needing to perform recovery). Because I can't rely on the log names (slapd keeps many small logs, incrementing the log file number as it goes) I'd like to use the Berkeley DB settings which create these logs. I can count on the access/creation dates (the most recent modify date is the most recent transaction log), but I'd still prefer to use Berkeley DB if possible.

# DB_CONFIG #

The settings for the transaction logs are said to be controlled by the Berkeley DB settings in /var/lib/ldap/DB_CONFIG. The example DB_CONFIG that comes with openldap specifies some transaction log settings:

set_lg_regionmax 262144
set_lg_bsize 2097152

According to the Oracle documentation on Berkeley DB:

set_lg_regionmax: Set the size of the underlying logging area of the Berkeley DB environment, in bytes. The log region is used to store filenames, and so may need to be increased in size if a large number of files will be opened and registered with the specified Berkeley DB environment's log manager.

So this seems to just set the size of the file that tracks the transaction log files.

set_lg_bsize: Set the size of the in-memory log buffer, in bytes.

This seems to control how much RAM is allotted to the transaction buffer. The log.########### files in the ldap data directory are all 10485760 bytes, which seems to correspond closely to set_lg_bsize (10485760 / 5 = 2097152 = set_lg_bsize), though I'm not sure if this is a coincidence. My interpretation is that lg_bsize worth of transaction history is stored in memory at a time; when this limit is exceeded it pushes some of the transaction history to the most recent log file, and creates a new log if the current log reaches a certain size.

# DB_LOG_AUTOREMOVE #

According to the Berkeley DB documentation, transaction logs can be removed by setting the flag DB_LOG_AUTOREMOVE in the DB_CONFIG:

DB_LOG_AUTOREMOVE: If set, Berkeley DB will automatically remove log files that are no longer needed.

However, when I added this to the DB_CONFIG:

set_flags DB_LOG_AUTOREMOVE

and restarted slapd, I didn't notice a difference. I removed the old transaction logs and ran the ldap population script that I have, and was able to rack up 290MB in transaction logs. It still doesn't seem to be limiting the logs at all. The reason for this may be related to the phrase "that are no longer needed".

# Actual Question #

How does one configure the automatic removal of slapd's transaction logs using the Berkeley DB DB_CONFIG file?
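(Not a verified answer, just a hedged note: Berkeley DB only treats a log file as "no longer needed" after a checkpoint, so DB_LOG_AUTOREMOVE is usually paired with regular checkpoints. A sketch, with illustrative values:)

```
# /var/lib/ldap/DB_CONFIG
set_flags DB_LOG_AUTOREMOVE     # let BDB delete log files once they are obsolete
set_lg_regionmax 262144
set_lg_bsize 2097152
```

combined with a checkpoint directive in the bdb/hdb database section of slapd.conf (or the olcDbCheckpoint attribute under cn=config):

```
# slapd.conf, inside the database bdb/hdb section
checkpoint 128 15     # checkpoint every 128 KB written or every 15 minutes
```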
Centimane (4520 rep)
Oct 5, 2016, 05:14 PM • Last activity: Jul 30, 2025, 11:08 AM
1 vote
0 answers
25 views
mdadm --monitor --program option not working
I am trying to make mdadm call into a simple bash script which writes a message in the kernel log in case of a state change. The script is as follows,
# cat /tmp/test.sh
#!/bin/bash

echo "raid array status change" > /dev/kmsg
I have added the following to the mdadm config file
PROGRAM /tmp/test.sh
ARRAY /dev/md0 UUID=
But when I run the test below, it says md0 is being picked up and the correct program option is being used, yet nothing gets printed in dmesg:
mdadm --monitor --test /dev/md0 -1
Note that when I manually run the script, it prints the message in the kernel log. My questions:
1. Does the above process of calling the program also depend on the mail setting? My mail config is not set.
2. Any idea what could be wrong or missing?
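(Hedged, going by the mdadm man page rather than by testing this setup: the PROGRAM hook does not require any mail configuration, and the program is invoked with two or three arguments: the event name, the md device, and possibly a component device. It is worth confirming the script is executable, and a variant that logs those arguments makes it easier to see whether the hook fires at all.)

```
#!/bin/bash
# /tmp/test.sh -- mdadm passes: <event> <md device> [<component device>]
event=$1
array=$2
component=$3
echo "mdadm event: ${event} on ${array} ${component}" > /dev/kmsg
```

With `chmod +x /tmp/test.sh` in place, `mdadm --monitor --test /dev/md0 -1` should produce one TestMessage event per run.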
Haris (113 rep)
Jul 30, 2025, 09:40 AM
1 vote
3 answers
3720 views
Only show entries of the last hour of log file
I have a huge logfile access.log with entries like:

192.11.111.111 - - [05/Mar/2021:00:00:02 +0100] "GET ..."
192.250.14.80 - - [05/Mar/2021:00:00:09 +0100] "GET ..."
12.249.66.42 - - [05/Mar/2021:00:00:13 +0100] "GET ..."

How can I get/filter entries of the last hour only?
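(A sketch of one way to do it with GNU date and GNU awk; mktime() and the three-argument match() are gawk extensions, and the sketch assumes the log's timezone matches the system's. Plain string comparison of the bracketed timestamps fails across month boundaries, hence the conversion to epoch seconds.)

```
#!/bin/bash
# print access.log entries whose timestamp is within the last hour
cutoff=$(date -d '1 hour ago' +%s)       # epoch seconds, GNU date

gawk -v cutoff="$cutoff" '
BEGIN {
    split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " ")
    for (i in m) mon[m[i]] = i
}
# pull "05/Mar/2021:00:00:02" out of the brackets, convert to epoch seconds
match($0, /\[([0-9]{2})\/([A-Za-z]{3})\/([0-9]{4}):([0-9]{2}):([0-9]{2}):([0-9]{2})/, t) {
    ts = mktime(t[3] " " mon[t[2]] " " t[1] " " t[4] " " t[5] " " t[6])
    if (ts >= cutoff) print
}' access.log
```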
TmCrafz (11 rep)
Mar 5, 2021, 04:06 AM • Last activity: Jul 22, 2025, 11:05 PM
1 vote
1 answer
3400 views
How can I enable fail2ban and mod_secure logs to appear in logwatch on centos 6.4?
I have logwatch working fine but I don't see the fail2ban and mod_secure logs appearing in my logwatch logs. How do I enable this? What do I need to do to logwatch's configuration file? Below is the logwatch.conf file. ######################################################## # This was written and is maintained by: # Kirk Bauer # # Please send all comments, suggestions, bug reports, # etc, to kirk@kaybee.org. # ######################################################## # NOTE: # All these options are the defaults if you run logwatch with no # command-line arguments. You can override all of these on the # command-line. # You can put comments anywhere you want to. They are effective for the # rest of the line. # this is in the format of = . Whitespace at the beginning # and end of the lines is removed. Whitespace before and after the = sign # is removed. Everything is case *insensitive*. # Yes = True = On = 1 # No = False = Off = 0 # Default Log Directory # All log-files are assumed to be given relative to this directory. LogDir = /var/log # You can override the default temp directory (/tmp) here TmpDir = /var/cache/logwatch # Default person to mail reports to. Can be a local account or a # complete email address. Variable Print should be set to No to # enable mail feature. MailTo = root # WHen using option --multiemail, it is possible to specify a different # email recipient per host processed. For example, to send the report # for hostname host1 to user@example.com, use: #Mailto_host1 = user@example.com # Multiple recipients can be specified by separating them with a space. # Default person to mail reports from. Can be a local account or a # complete email address. MailFrom = Logwatch # If set to 'Yes', the report will be sent to stdout instead of being # mailed to above person. Print = Yes # if set, the results will be saved in instead of mailed # or displayed. #Save = /tmp/logwatch # Use archives? If set to 'Yes', the archives of logfiles # (i.e. /var/log/messages.1 or /var/log/messages.1.gz) will # be searched in addition to the /var/log/messages file. # This usually will not do much if your range is set to just # 'Yesterday' or 'Today'... it is probably best used with # By default this is now set to Yes. To turn off Archives uncomment this. #Archives = No # Range = All # The default time range for the report... # The current choices are All, Today, Yesterday Range = yesterday # The default detail level for the report. # This can either be Low, Med, High or a number. # Low = 0 # Med = 5 # High = 10 Detail = Low # The 'Service' option expects either the name of a filter # (in /usr/share/logwatch/scripts/services/*) or 'All'. # The default service(s) to report on. This should be left as All for # most people. Service = All # You can also disable certain services (when specifying all) Service = "-zz-network" # Prevents execution of zz-network service, which # prints useful network configuration info. Service = "-zz-sys" # Prevents execution of zz-sys service, which # prints useful system configuration info. Service = "-eximstats" # Prevents execution of eximstats service, which # is a wrapper for the eximstats program. 
# If you only cared about FTP messages, you could use these 2 lines # instead of the above: #Service = ftpd-messages # Processes ftpd messages in /var/log/messages #Service = ftpd-xferlog # Processes ftpd messages in /var/log/xferlog # Maybe you only wanted reports on PAM messages, then you would use: #Service = pam_pwdb # PAM_pwdb messages - usually quite a bit #Service = pam # General PAM messages... usually not many # You can also choose to use the 'LogFile' option. This will cause # logwatch to only analyze that one logfile.. for example: #LogFile = messages # will process /var/log/messages. This will run all the filters that # process that logfile. This option is probably not too useful to # most people. Setting 'Service' to 'All' above analyizes all LogFiles # anyways... # # By default we assume that all Unix systems have sendmail or a sendmail-like system. # The mailer code Prints a header with To: From: and Subject:. # At this point you can change the mailer to any thing else that can handle that output # stream. TODO test variables in the mailer string to see if the To/From/Subject can be set # From here with out breaking anything. This would allow mail/mailx/nail etc..... -mgt mailer = "sendmail -t" # # With this option set to 'Yes', only log entries for this particular host # (as returned by 'hostname' command) will be processed. The hostname # can also be overridden on the commandline (with --hostname option). This # can allow a log host to process only its own logs, or Logwatch can be # run once per host included in the logfiles. # # The default is to report on all log entries, regardless of its source host. # Note that some logfiles do not include host information and will not be # influenced by this setting. # #HostLimit = Yes # By default the cron daemon generates daily logwatch report # if you want to switch it off uncomment DailyReport tag. 
# The implicit value is Yes # # DailyReport = No # vi: shiftwidth=3 tabstop=3 et Output from the command sudo logwatch --debug High | grep -T100 'LogFiles that will be processed:' 000-*expandrepeats = 001-*onlyhost = 002-*applystddate = Logfile = /var/log/maillog Archive = /var/log/maillog.9.gz Archive = /var/log/maillog.8.gz Archive = /var/log/maillog.7.gz Archive = /var/log/maillog.6.gz Archive = /var/log/maillog.5.gz Archive = /var/log/maillog.4.gz Archive = /var/log/maillog.3.gz Archive = /var/log/maillog.29.gz Archive = /var/log/maillog.28.gz Archive = /var/log/maillog.27.gz Archive = /var/log/maillog.26.gz Archive = /var/log/maillog.25.gz Archive = /var/log/maillog.24.gz Archive = /var/log/maillog.23.gz Archive = /var/log/maillog.22.gz Archive = /var/log/maillog.21.gz Archive = /var/log/maillog.20.gz Archive = /var/log/maillog.2.gz Archive = /var/log/maillog.19.gz Archive = /var/log/maillog.18.gz Archive = /var/log/maillog.17.gz Archive = /var/log/maillog.16.gz Archive = /var/log/maillog.15.gz Archive = /var/log/maillog.14.gz Archive = /var/log/maillog.13.gz Archive = /var/log/maillog.12.gz Archive = /var/log/maillog.11.gz Archive = /var/log/maillog.10.gz Archive = /var/log/maillog.1.gz Archive = /var/log/maillog-20121230 Logfile Name: up2date Logfile Name: cisco Logfile Name: cron 001-*removeservice = anacron 000-*onlyhost = Logfile = /var/log/cron Archive = /var/log/cron.9.gz Archive = /var/log/cron.8.gz Archive = /var/log/cron.7.gz Archive = /var/log/cron.6.gz Archive = /var/log/cron.5.gz Archive = /var/log/cron.4.gz Archive = /var/log/cron.3.gz Archive = /var/log/cron.29.gz Archive = /var/log/cron.28.gz Archive = /var/log/cron.27.gz Archive = /var/log/cron.26.gz Archive = /var/log/cron.25.gz Archive = /var/log/cron.24.gz Archive = /var/log/cron.23.gz Archive = /var/log/cron.22.gz Archive = /var/log/cron.21.gz Archive = /var/log/cron.20.gz Archive = /var/log/cron.2.gz Archive = /var/log/cron.19.gz Archive = /var/log/cron.18.gz Archive = /var/log/cron.17.gz Archive = /var/log/cron.16.gz Archive = /var/log/cron.15.gz Archive = /var/log/cron.14.gz Archive = /var/log/cron.13.gz Archive = /var/log/cron.12.gz Archive = /var/log/cron.11.gz Archive = /var/log/cron.10.gz Archive = /var/log/cron.1.gz Archive = /var/log/cron-20121230 Logfile Name: yum Logfile = /var/log/yum.log Logfile Name: tac_acc 000-*applystddate = Logfile Name: exim Logfile Name: syslog 001-*removeservice = talkd,telnetd,inetd,nfsd,/sbin/mingetty 000-*expandrepeats = 003-*applystddate = 002-*onlyhost = Logfile Name: dnssec 000-*expandrepeats = 001-*applybinddate = Logfile Name: netscreen 000-*applystddate = Logfile Name: autorpm Logfile Name: dpkg 000-*applyeurodate = LogFiles that will be processed: = maillog = qmail-pop3d-current = denyhosts = secure = messages = eventlog = qmail-send-current = none = samba = clam-update = extreme-networks = resolver = qmail-pop3ds-current = netopia = fail2ban = pix = xferlog = cisco = cron = netscreen = dnssec = qmail-smtpd-current = windows = vsftpd = php = emerge = http = bfd = sonicwall = iptables = pureftp = rt314 = up2date = yum = tac_acc = exim = autorpm = dpkg Made Temp Dir: /var/cache/logwatch/logwatch.tOKLrjds with tempdir export LOGWATCH_DATE_RANGE='yesterday' export LOGWATCH_OUTPUT_TYPE='unformatted' export LOGWATCH_TEMP_DIR='/var/cache/logwatch/logwatch.tOKLrjds/' export LOGWATCH_DEBUG='10' Preprocessing LogFile: maillog '/var/cache/logwatch/logwatch.tOKLrjds/maillog-archive' '/var/log/maillog' | /usr/bin/perl /usr/share/logwatch/scripts/shared/expandrepeats ''| 
/usr/bin/perl /usr/share/logwatch/scripts/shared/onlyhost ''| /usr/bin/perl /usr/share/logwatch/scripts/shared/applystddate ''>/var/cache/logwatch/logwatch.tOKLrjds/maillog Preprocessing LogFile: secure '/var/cache/logwatch/logwatch.tOKLrjds/secure-archive' '/var/log/secure' | /usr/bin/perl /usr/share/logwatch/scripts/shared/expandrepeats ''| /usr/bin/perl /usr/share/logwatch/scripts/shared/onlyhost ''| /usr/bin/perl /usr/share/logwatch/scripts/shared/applystddate ''>/var/cache/logwatch/logwatch.tOKLrjds/secure Preprocessing LogFile: messages '/var/cache/logwatch/logwatch.tOKLrjds/messages-archive' '/var/log/messages' | /usr/bin/perl /usr/share/logwatch/scripts/shared/expandrepeats ''| /usr/bin/perl /usr/share/logwatch/scripts/shared/removeservice 'talkd,telnetd,inetd,nfsd,/sbin/mingetty,netscreen,netscreen'| /usr/bin/perl /usr/share/logwatch/scripts/shared/onlyhost ''| /usr/bin/perl /usr/share/logwatch/scripts/shared/applystddate ''>/var/cache/logwatch/logwatch.tOKLrjds/messages Preprocessing LogFile: cron '/var/cache/logwatch/logwatch.tOKLrjds/cron-archive' '/var/log/cron' | /usr/bin/perl /usr/share/logwatch/scripts/shared/onlyhost ''| /usr/bin/perl /usr/share/logwatch/scripts/shared/removeservice 'anacron'| /usr/bin/perl /usr/share/logwatch/scripts/logfiles/cron/applydate>/var/cache/logwatch/logwatch.tOKLrjds/cron Preprocessing LogFile: http '/var/cache/logwatch/logwatch.tOKLrjds/http-archive' '/var/log/httpd/access_log' | /usr/bin/perl /usr/share/logwatch/scripts/shared/expandrepeats ''| /usr/bin/perl /usr/share/logwatch/scripts/shared/applyhttpdate ''>/var/cache/logwatch/logwatch.tOKLrjds/http Preprocessing LogFile: yum '/var/log/yum.log' | /usr/bin/perl /usr/share/logwatch/scripts/logfiles/yum/applydate>/var/cache/logwatch/logwatch.tOKLrjds/yum Processing Service: amavis ( cat /var/cache/logwatch/logwatch.tOKLrjds/maillog | /usr/bin/perl /usr/share/logwatch/scripts/shared/onlyservice '(amavis|dccproc)' |/usr/bin/perl /usr/share/logwatch/scripts/shared/removeheaders '' |/usr/bin/perl /usr/share/logwatch/scripts/services/amavis) 2>&1 export clamav_ignoreunmatched='0' export clamav_ignoreunmatched='0' Processing Service: clamav-milter ( cat /var/cache/logwatch/logwatch.tOKLrjds/maillog | /usr/bin/perl /usr/share/logwatch/scripts/shared/onlyservice 'clamav-milter' |/usr/bin/perl /usr/share/logwatch/scripts/shared/removeheaders '' |/usr/bin/perl /usr/share/logwatch/scripts/services/clamav-milter) 2>&1 export courier_enable='1' export courier_ip_lookup='0' export courier_printmailqueue='0' export courier_tables='0' Processing Service: courier ( cat /var/cache/logwatch/logwatch.tOKLrjds/maillog | /usr/bin/perl /usr/share/logwatch/scripts/services/courier) 2>&1 Processing Service: cron ( cat /var/cache/logwatch/logwatch.tOKLrjds/cron | /usr/bin/perl /usr/share/logwatch/scripts/services/cron) 2>&1 Processing Service: dovecot ( cat /var/cache/logwatch/logwatch.tOKLrjds/maillog | /usr/bin/perl /usr/share/logwatch/scripts/shared/onlyservice '(imap-login|pop3-login|dovecot)' |/usr/bin/perl /usr/share/logwatch/scripts/services/dovecot) 2>&1 export ftpd_ignore_unmatched='0' export detail_transfer='1' export http_ignore_error_hacks='0' export http_user_display='0' Processing Service: http ( cat /var/cache/logwatch/logwatch.tOKLrjds/http | /usr/bin/perl /usr/share/logwatch/scripts/services/http) 2>&1 Processing Service: imapd ( cat /var/cache/logwatch/logwatch.tOKLrjds/maillog | /usr/bin/perl /usr/share/logwatch/scripts/shared/onlyservice '(imapd|imapd-ssl|imapsd)' |/usr/bin/perl 
/usr/share/logwatch/scripts/shared/removeheaders '' |/usr/bin/perl /usr/share/logwatch/scripts/services/imapd) 2>&1 Processing Service: in.qpopper ( cat /var/cache/logwatch/logwatch.tOKLrjds/maillog | /usr/bin/perl /usr/share/logwatch/scripts/shared/multiservice 'in.qpopper,qpopper' |/usr/bin/perl /usr/share/logwatch/scripts/shared/removeheaders '' |/usr/bin/perl /usr/share/logwatch/scripts/services/in.qpopper) 2>&1 Processing Service: ipop3d
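(A hedged sketch of the usual way to wire an extra service into logwatch: override the stock definitions under /etc/logwatch/conf/ and make sure the logfile group points at the file the daemon really writes. The file names and directives below follow the standard layout in /usr/share/logwatch/default.conf, so verify them there; since the debug output above already lists fail2ban as a logfile group, the first thing to confirm is simply that /var/log/fail2ban.log exists and has entries in the reporting range.)

```
# /etc/logwatch/conf/logfiles/fail2ban.conf
LogFile = fail2ban.log
Archive = fail2ban.log.*
*ApplyStdDate

# /etc/logwatch/conf/services/fail2ban.conf
Title = "Fail2Ban"
LogFile = fail2ban
```

mod_security would need an equivalent logfile-group/service pair pointing at its audit or error log.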
biz14 (471 rep)
Jul 22, 2013, 02:56 PM • Last activity: Jul 19, 2025, 08:10 PM
0 votes
2 answers
2462 views
ufw not logging all connections as expected
I am trying to set up logging on Ubuntu Server 20.04.4 using ufw, but I'll take non-ufw advice as well. I am running a test https server on port 20000 and want to log all connections to it. Here's what I did.
ufw allow log-all 20000/tcp
Here's my ufw status:
To          Action          From          
--          ---------       -----
20000/tcp   ALLOW IN        Anywhere         (log-all)
Now the only records I see in my log file (/var/log/ufw.log) are the "blocks" being generated by other rules. I am able to connect to the server from outside, and my test server runs fine (*delivers the content I need*). But I just don't see any records pertaining to this rule in the ufw logs. What might I be missing?

Edit 1: Since I cannot comment yet, I am reacting to @mashuptwice's advice here. My ufw logging is on (low). If I did
ufw logging medium
wouldn't that apply to all rules? I only need extra logging for this specific rule.
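(Two hedged checks; the chain name and log prefix below are ufw's usual conventions, worth verifying locally.)

```
# did ufw actually install a LOG rule next to the ACCEPT for this port?
iptables -S ufw-user-input | grep 20000

# per-rule "allow" entries normally carry the [UFW ALLOW] prefix
grep 'UFW ALLOW' /var/log/ufw.log /var/log/kern.log 2>/dev/null
dmesg | grep 'UFW ALLOW'
```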
Dr Phil (139 rep)
Feb 28, 2022, 03:39 PM • Last activity: Jul 19, 2025, 01:04 PM
73 votes
8 answers
39914 views
monitor files (à la tail -f) in an entire directory (even new ones)
I normally watch many logs in a directory by doing tail -f directory/*. The problem is that if a new log is created after that, it will not show on the screen (because * was already expanded). Is there a way to monitor every file in a directory, even those that are created after the process has started?
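(A rough sketch of one approach, assuming inotify-tools is installed: follow everything that exists now and restart the tail whenever a new file appears in the directory.)

```
#!/bin/bash
# follow all files in a directory, picking up newly created ones
dir=${1:-directory}

while true; do
    tail -n0 -f "$dir"/* &                       # follow whatever exists right now
    tailpid=$!
    inotifywait -q -e create "$dir" >/dev/null   # block until a new file appears
    kill "$tailpid"
done
```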
santiagozky (833 rep)
May 31, 2012, 10:48 AM • Last activity: Jul 16, 2025, 11:52 PM
12 votes
2 answers
11336 views
Save past GNU screen output to a file
I have had a GNU screen session running for days. I find myself in the situation that I need to save the terminal contents (which I can scroll up to see) into a file. Is this possible? I estimate it to be below 5000 lines. I found a way to set up screen to log *future* output to a file. But in this case, I need to also save past output (or as much of it as is present).
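(For what it's worth: screen's hardcopy command dumps the scrollback as well as the visible window when given -h, and it can be sent to a running session from outside with -X. The session name and path below are examples.)

```
# inside the session:   C-a :hardcopy -h /tmp/screen-dump.txt
# or from another shell, addressing the session by name:
screen -S mysession -X hardcopy -h /tmp/screen-dump.txt
```

Only what is still in the scrollback buffer (defscrollback) can be recovered this way.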
Szabolcs (223 rep)
Oct 18, 2019, 06:22 PM • Last activity: Jul 12, 2025, 07:14 AM
5 votes
2 answers
7240 views
Log the output of Expect command
I have made the below expect script and I need to log the output of that script.

SOURCE_FILE=`ls -l *.txt --time-style=+%D | grep ${DT} | grep -v '^d' | awk '{print $NF}'`
if [ -n "${SOURCE_FILE}" ]
then
    cp -p ${SOURCE_FILE} ${T_FILES}
    /usr/bin/expect" send "put /opt/AppServer/ES_TEST/todays_report/*.txt\r" expect "sftp>" send "bye\r" expect EOD EOD
else
    echo "No Files to copy" >> ${LOGFILE}
fi

I need to log the output of the expect command in ${LOGFILE}. How can it be done? I have tried adding the below, but it doesn't work. What could be done?

/usr/bin/expect> ${LOGFILE} 2>&1
set timeout 60
spawn sftp $ES_SFTP_USER@$ES_SFTP_HOST_NAME:$R_LOCATION
expect "*?assword:"
send "$password\r"
expect "sftp>"
send "put /opt/AppServer/ES_TEST/todays_report/*.txt\r"
expect "sftp>"
send "bye\r"
expect EOD
EOD
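(Expect has its own log_file command, which records the session transcript from the point it is issued; that is usually simpler than shell redirection around the heredoc. A hedged sketch of the sftp part, reusing the variables above and ending with `expect eof` so the script waits for sftp to exit:)

```
/usr/bin/expect <<EOD
    log_file ${LOGFILE}      ;# everything below is also written to the log
    set timeout 60
    spawn sftp $ES_SFTP_USER@$ES_SFTP_HOST_NAME:$R_LOCATION
    expect "*?assword:"
    send "$password\r"
    expect "sftp>"
    send "put /opt/AppServer/ES_TEST/todays_report/*.txt\r"
    expect "sftp>"
    send "bye\r"
    expect eof
EOD
```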
sabarish jackson (628 rep)
Jul 11, 2016, 05:59 AM • Last activity: Jul 12, 2025, 05:07 AM
1 vote
1 answer
1898 views
/var/lib/puppet/state/agent_catalog_run.lock exists
I'm seeing the following error on CentOS 6.4: # puppet agent --test Run of Puppet configuration client already in progress; skipping (/var/lib/puppet/state/agent_catalog_run.lock exists) What should I do about it?
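(The usual hedged advice: check whether an agent run is genuinely in progress first, and only remove the lock if it is stale, e.g. left behind by a killed run.)

```
# is an agent run actually active?
ps aux | grep -E 'puppet(d| agent)' | grep -v grep

# if nothing is running, the lock file is stale and can be removed
rm -f /var/lib/puppet/state/agent_catalog_run.lock
puppet agent --test
```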
SunLynx (101 rep)
May 18, 2015, 08:06 PM • Last activity: Jul 8, 2025, 09:08 AM
1 vote
1 answer
30 views
transmission-gtk spamming dmesg with messages about /proc/sys/net/ipv6/conf/all/disable_ipv6
I'm using transmission-gtk 4.1.0-beta.2 on Devuan GNU/Linux Excalibur. My dmesg log is spammed with the following kind of message:
[Jul 4 14:47] audit: type=1400 audit(1751629628.491:75895): apparmor="ALLOWED" operation="open"
class="file" profile="transmission-gtk" name="/proc/sys/net/ipv6/conf/all/disable_ipv6"
pid=20126 comm="transmission-gt" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0
(Originally all on one line; I broke it up here for readability.) My network connection does have an IPv6 address (along with IPv4), even though I'm not intentionally making use of it. Anyway, I would like transmission-gtk to stop trying to mess with it. Is that possible? If not, can I at least silence the repeating log message? Or get it to show up only once?

---

FYI, on my system, I have:
# ls -la /proc/sys/net/ipv6/conf/all/disable_ipv6 
-rw-r--r-- 1 root root 0 Jul  4 13:59 /proc/sys/net/ipv6/conf/all/disable_ipv6
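(Heavily hedged, since I don't know how the profile is packaged on Devuan: apparmor="ALLOWED" means the transmission-gtk profile is in complain mode and is logging a read it would otherwise deny. Adding an explicit read rule to the profile's local override, if one exists, and reloading the profile is one way to quiet the messages; every path below is an assumption to verify.)

```
# which file defines the profile?
grep -rl 'transmission-gtk' /etc/apparmor.d/ 2>/dev/null

# assumed local-override path; append a read rule and reload the profile
echo '/proc/sys/net/ipv6/conf/all/disable_ipv6 r,' | \
    sudo tee -a /etc/apparmor.d/local/transmission-gtk
sudo apparmor_parser -r /etc/apparmor.d/transmission-gtk    # profile file name assumed
```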
einpoklum (10753 rep)
Jul 4, 2025, 11:53 AM • Last activity: Jul 5, 2025, 09:30 AM
1 vote
1 answer
3157 views
How do I forward particular logs under a directory using rsyslog?
Trying to forward the following logs from /home/ddlog/ms/logs/execution_logs/_abc-xyz-ms_* to VMware vRealize Log Insight using rsyslog. For some reason that does not seem to be working. I have tried using imfile:

# Now load the external log
$InputFileName /home/ddlog/ms/logs/execution_logs/_abc-xyz-ms_*
$InputFileTag ddlog
$InputFileStateFile ms
$InputFileSeverity debug
$InputFileFacility local7
$InputRunFileMonitor

local7.* @@hostname:514

Commenting out the imfile lines and updating rsyslog.conf with *.* @@remote-host:514 seems to work perfectly fine, but I am more concerned about forwarding specific logs.
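(One hedged suspicion: the legacy $InputFile... directives never supported wildcards in the file name; wildcard handling came with the newer input() syntax and a reasonably recent rsyslog. A sketch in that style, keeping the question's paths and tag:)

```
module(load="imfile")

input(type="imfile"
      File="/home/ddlog/ms/logs/execution_logs/_abc-xyz-ms_*"
      Tag="ddlog"
      Severity="debug"
      Facility="local7")

local7.* @@hostname:514
```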
Surya (21 rep)
Apr 24, 2019, 08:58 PM • Last activity: Jun 30, 2025, 05:04 PM
102 votes
5 answers
275829 views
Where are my sshd logs?
I can't find my sshd logs in the standard places. What I've tried:

- Not in /var/log/auth.log
- Not in /var/log/secure
- Did a system search for 'auth.log' and found nothing
- I've set /etc/ssh/sshd_config to explicitly use SyslogFacility AUTH and LogLevel INFO, restarted sshd, and still can't find them

I'm using OpenSSH 6.5p1-2 on Arch Linux.
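(For what it's worth, on Arch and other systemd-based systems OpenSSH normally logs to the journal rather than to flat files, so journalctl is the place to look; auth.log/secure only exist when a classic syslog daemon writes them.)

```
journalctl -u sshd        # by unit (sshd.service on Arch)
journalctl -t sshd -b     # by syslog identifier, current boot only
```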
HXCaine (1247 rep)
Feb 8, 2014, 01:06 PM • Last activity: Jun 26, 2025, 11:59 AM
28 votes
4 answers
76840 views
Debian - auth.log missing from /var/log
I am learning Debian on a Raspberry Pi. I installed 'logwatch' and 'fail2ban' recently and those two were working great! A few days ago I spotted that I don't have a file "auth.log", but I do have "auth.log.gz1" etc. (so archived data). I used touch auth.log to create the file, then chown root:adm to change its permissions. However, this file is still not working: I can't see any entry in it for the last 2 days, even though I was logging in through SSH. Can you advise:

1. Why is this file gone? Where should I look for a reason?
2. How do I fix the issue, so that all my SSH connections (and attacks) will be recorded?

PS.

pi@pi ~ $ uname -a
Linux pi.local 3.10.25+ #622 PREEMPT Fri Jan 3 18:41:00 GMT 2014 armv6l GNU/Linux
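(A few hedged checks for a Debian/Raspbian box of that era, where rsyslog is what writes auth.log:)

```
service rsyslog status                      # is the syslog daemon running at all?
grep -n 'auth,authpriv' /etc/rsyslog.conf   # is auth.* still routed to /var/log/auth.log?
ls -l /var/log/auth.log*                    # present, root:adm, recently updated?
df -h /var/log                              # a full filesystem also stops logging
```

If rsyslog is stopped or missing, nothing will ever land in a file created by hand, regardless of its ownership.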
Adam (381 rep)
Jan 7, 2014, 04:40 PM • Last activity: Jun 22, 2025, 08:31 PM
44 votes
4 answers
102874 views
Cron log on debian systems
On Red Hat systems, cron logs to the /var/log/cron file. What is the equivalent of that file on Debian systems?
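(On Debian, cron messages go to /var/log/syslog by default; the stock /etc/rsyslog.conf ships a commented-out cron.* line for a dedicated file. A quick sketch:)

```
grep CRON /var/log/syslog          # where cron entries end up by default

# for a separate file, uncomment this line in /etc/rsyslog.conf
#   cron.*    /var/log/cron.log
# then: service rsyslog restart
```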
xralf (15189 rep)
Apr 21, 2011, 12:29 PM • Last activity: Jun 19, 2025, 11:53 PM
5 votes
2 answers
4355 views
How can I debug Nautilus crashes?
Is there a log file I can peruse, or settings somewhere that might give me a clue? I'm running GNOME nautilus 3.14.3 ("Files") in Ubuntu 16.04.
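(Some hedged places to look, since Nautilus itself keeps no dedicated log file:)

```
nautilus -q && nautilus .            # run it from a terminal and watch stderr
journalctl -b | grep -i nautilus     # crash traces often end up in the journal
tail -n 100 ~/.xsession-errors       # present on many Ubuntu 16.04 setups
ls /var/crash/                       # apport crash reports, if any
```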
jvriesem (151 rep)
Aug 8, 2016, 08:17 PM • Last activity: Jun 19, 2025, 06:08 AM
0 votes
1 answer
3576 views
Why doesn't supervisor log the output of a child process?
When I run the script server.sh (which runs a Quake dedicated server) in the terminal, it produces initialization output like:

------- Game Initialization -------
gamename: baseqz
gamedate: May 25 2016
initializing access list...
loaded 0 steam ids into the access list
Not logging to disk.
0 teams with 0 entities
21 items registered

And when some person connects to the server, the script outputs something like:

Person xyz has connected

I want supervisor to manage this server.sh script, but supervisor does not log any of the initialization messages or the connect message (if someone connects). Stdout and stderr are both logged. Why does supervisor not log the output of server.sh?

Here is my supervisord.conf:

[program:prog]
command=/home/user/.steam/steamapps/common/qlds/server.sh
stdout_logfile=stdout.txt
stderr_logfile=stdout.txt

[supervisord]
nodaemon=true

[supervisorctl]
serverurl=http://127.0.0.1:9001

[inet_http_server]
port=127.0.0.1:9001

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

EDIT: I noticed some strange behaviour of server.sh:

server.sh &> out.txt

generates no output in out.txt (no game initialization, nothing). Only after I typed "quit" on stdin (so server.sh quits) did the game initialization and so on get written into out.txt.
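(The EDIT is the big clue: if output only appears after the server quits even with a plain shell redirection, the server is block-buffering its stdout when not attached to a terminal, so supervisor has nothing to log until the buffer flushes at exit. A hedged tweak to the program section; stdbuf forcing line buffering is the assumption to test, and absolute log paths avoid relative-path surprises.)

```
[program:prog]
command=stdbuf -oL -eL /home/user/.steam/steamapps/common/qlds/server.sh
directory=/home/user/.steam/steamapps/common/qlds
stdout_logfile=/var/log/supervisor/prog.log
redirect_stderr=true
```

If the binary does its own buffering (or insists on a tty), stdbuf will not help and a pty wrapper such as unbuffer would be the next thing to try.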
mankind86 (21 rep)
Jan 14, 2022, 11:49 AM • Last activity: Jun 16, 2025, 08:05 PM
22 votes
2 answers
15090 views
Why are utmp, wtmp and btmp called as they are?
I know what these files record, but I'd like to know what the 'u', 'w' and 'b' prefixes mean. Can anyone shed some light?
sgx1 (321 rep)
Apr 30, 2014, 05:53 AM • Last activity: Jun 15, 2025, 12:41 PM
1 vote
1 answer
3331 views
rsyslog discard message
I'm trying to discard any "kernel: nfs: Deprecated parameter 'intr'" messages from /var/log/messages.

Rsyslog version: 8.1911.0-6.el8

In my /etc/rsyslog.conf file I have the following:

module(load="imuxsock"    # provides support for local system logging (e.g. via logger command)
       SysSock.Use="off") # Turn off message reception via local log socket;
                          # local messages are retrieved through imjournal now.
module(load="imjournal"             # provides access to the systemd journal
       StateFile="imjournal.state") # File to store the position in the journal
global(workDirectory="/var/lib/rsyslog")
module(load="builtin:omfile" Template="RSYSLOG_TraditionalFileFormat")
include(file="/etc/rsyslog.d/*.conf" mode="optional")

:msg, contains, "nfs: Deprecated parameter" stop

*.info;mail.none;authpriv.none;cron.none    /var/log/messages
authpriv.*                                  /var/log/secure
mail.*                                      -/var/log/maillog
cron.*                                      /var/log/cron
*.emerg                                     :omusrmsg:*
uucp,news.crit                              /var/log/spooler
local7.*                                    /var/log/boot.log

The line that should discard the messages is:
:msg, contains, "nfs: Deprecated parameter"  stop
I still see the messages getting logged. Any ideas? PS: I do have additional conf files in /etc/rsyslog.d/, if that matters.
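(A hedged alternative worth trying: the same filter in RainerScript form, in a file that sorts early under /etc/rsyslog.d/, followed by a syntax check and restart with `rsyslogd -N1` and `systemctl restart rsyslog`.)

```
# /etc/rsyslog.d/00-drop-nfs-intr.conf   (file name is an example)
if $msg contains "nfs: Deprecated parameter 'intr'" then {
    stop
}
```

Since the include(...) line pulls in /etc/rsyslog.d before the rule that writes /var/log/messages, a filter there takes effect early enough.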
wabbajack001 (91 rep)
Oct 5, 2021, 03:24 PM • Last activity: Jun 12, 2025, 09:07 AM