Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
0
answers
63
views
Kibana service on CentOS 7 server virtual machine failed to start with code=exited, status=1/FAILURE
I'm trying to set up Elasticsearch and Kibana on my local CentOS 7 virtual machine. I successfully installed ES and Kibana; elasticsearch is working and accessible at 127.0.0.1:9200, but the Kibana service fails to start.
Here's my kibana configuration [ **/etc/kibana/kibana.yml** ]:
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
Here's what I get when I run **systemctl status kibana**:
● kibana.service - Kibana
Loaded: loaded (/usr/lib/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Mon 2024-08-05 01:46:45 EDT; 18min ago
Docs: https://www.elastic.co
Process: 4345 ExecStart=/usr/share/kibana/bin/kibana (code=exited, status=1/FAILURE)
Main PID: 4345 (code=exited, status=1/FAILURE)
Aug 05 01:46:41 localhost.localdomain systemd: Unit kibana.service entered failed state.
Aug 05 01:46:41 localhost.localdomain systemd: kibana.service failed.
Aug 05 01:46:45 localhost.localdomain systemd: kibana.service holdoff time over, scheduling restart.
Aug 05 01:46:45 localhost.localdomain systemd: Stopped Kibana.
Aug 05 01:46:45 localhost.localdomain systemd: start request repeated too quickly for kibana.service
Aug 05 01:46:45 localhost.localdomain systemd: Failed to start Kibana.
Aug 05 01:46:45 localhost.localdomain systemd: Unit kibana.service entered failed state.
Aug 05 01:46:45 localhost.localdomain systemd: kibana.service failed.
Here's what I tried:
1. Changed the host address to my local IP, and also to 127.0.0.1 and localhost
2. Created a custom log file at /usr/share/kibana, changed its ownership to the kibana user, and pointed the Kibana configuration at that log file
But no success. What might be wrong?
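systemctl's status output only shows the exit code; the real error usually appears when Kibana is run in the foreground as the service user, or in journalctl -u kibana. One frequent cause of status=1/FAILURE with a file appender is that the kibana user cannot write the configured log path. A minimal sketch of that check, using a temp directory as a stand-in for /var/log/kibana (on the real host, run the check as the kibana user against the actual path):

```shell
# Stand-in for /var/log/kibana; on the real host test the actual directory.
logdir="$(mktemp -d)/kibana"
if [ ! -d "$logdir" ]; then
  echo "log directory missing: $logdir"
  # On the host: mkdir -p /var/log/kibana && chown kibana:kibana /var/log/kibana
  mkdir -p "$logdir"
fi
touch "$logdir/kibana.log" && echo "log file writable"
```

If the directory checks out, running /usr/share/kibana/bin/kibana in the foreground with the same config will print the actual startup error that systemd hides.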
Abdul Rehman
(143 rep)
Aug 5, 2024, 06:08 AM
0
votes
1
answers
7218
views
keytool error: java.io.IOException: Invalid keystore format
I have a 3-node ELK stack (Elasticsearch v7.17). After a reboot, the Kibana web interface reports an error "Kibana server is not ready yet".
The SSL certs were expired, so I re-created them (for the ELK CA, all 3 nodes, Kibana, and Logstash). However, the error persists, and
/var/log/kibana/kibana.log
reports an error
{"type":"log","@timestamp":"2023-03-29T17:19:39+02:00","tags":["error","elasticsearch-service"],"pid":8271,"message":"Unable to retrieve version information from Elasticsearch nodes. security_exception: [security_exception] Reason: unable to authenticate user [kibana] for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]"}
The command /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive -v
results in this output:
Running with configuration path: /etc/elasticsearch
Testing if bootstrap password is valid for http://10.0.0.1:9200/_security/_authenticate?pretty
{
"username" : "elastic",
"roles" : [
"superuser"
],
"full_name" : null,
"email" : null,
"metadata" : {
"_reserved" : true
},
"enabled" : true,
"authentication_realm" : {
"name" : "reserved",
"type" : "reserved"
},
"lookup_realm" : {
"name" : "reserved",
"type" : "reserved"
},
"authentication_type" : "realm"
}
Checking cluster health: http://10.0.0.1:9200/_cluster/health?pretty
{
"error" : {
"root_cause" : [
{
"type" : "master_not_discovered_exception",
"reason" : null
}
],
"type" : "master_not_discovered_exception",
"reason" : null
},
"status" : 503
}
Failed to determine the health of the cluster running at http://10.0.0.1:9200
Unexpected response code from calling GET http://10.0.0.1:9200/_cluster/health?pretty
Cause: master_not_discovered_exception
The Elasticsearch log says:
[2023-03-30T13:50:58,432][WARN ][o.e.d.PeerFinder ] [node1] address [10.0.0.2:9300], node [null], requesting [false] connection failed: [][10.0.0.2:9300] general node connection failure: handshake failed because connection reset
[2023-03-30T13:50:58,432][WARN ][o.e.t.TcpTransport ] [node1] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.0.0.1:60126, remoteAddress=node2.example.org/10.0.0.2:9300, profile=default}], closing connection
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors
No password was changed. The problem appears to be with the new SSL certificates. Therefore, I have created a new keystore via the command
/usr/share/elasticsearch/bin/elasticsearch-keystore create
and I'm trying to add the CA certificate (and then others) to it:
keytool -importcert -trustcacerts -noprompt -keystore /etc/elasticsearch/elasticsearch.keystore -file /etc/elasticsearch/certs/ca.crt
However, I get the following error:
keytool error: java.io.IOException: Invalid keystore format
I have converted the CA cert into PKCS12 and tried to import it in that format (ca.p12), since the keystore is defined as type PKCS12 in my config, but I get the same error.
What's wrong?
Excerpts of the /etc/elasticsearch/elasticsearch.yml
file:
xpack.security.transport.ssl.keystore.path: elasticsearch.keystore
xpack.security.transport.ssl.keystore.type: PKCS12
xpack.security.transport.ssl.truststore.path: elasticsearch.keystore
xpack.security.transport.ssl.truststore.type: PKCS12
xpack.security.transport.ssl.verification_mode: certificate
dr_
(32068 rep)
Mar 30, 2023, 08:21 AM
• Last activity: Mar 31, 2023, 03:20 PM
-1
votes
2
answers
34233
views
How do I completely uninstall ELK (Elasticsearch, Logstash, Kibana)?
I found on the internet that you have to uninstall each part of the ELK stack one by one: stand-alone Kibana, Elasticsearch, and Logstash. Is there a single command that uninstalls all of them at once, instead of one by one?
This is the repository line I use in my sources.list:
deb https://artifacts.elastic.co/packages/6.x/apt stable main
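There is no dedicated ELK uninstaller, but since all three parts come from that apt repository, apt can purge them in one invocation, which is effectively the single command asked for. A sketch that builds the command (package names as published by the artifacts.elastic.co repo; run the echoed command with sudo on the real host):

```shell
# apt takes several package names at once; purge also removes config files.
pkgs="elasticsearch logstash kibana"
echo "apt-get purge -y $pkgs"   # then: sudo apt-get autoremove -y
```

Depending on the package, data directories such as /var/lib/elasticsearch may survive even a purge; check for leftovers and remove them manually if they are no longer needed.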
gagantous
(225 rep)
Dec 16, 2017, 03:27 PM
• Last activity: Mar 14, 2022, 10:55 PM
0
votes
1
answers
799
views
Auditbeat exclude /usr/sbin/cron
I tried to exclude events from running cron jobs, which can be found with the KQL request: auditd.summary.how : "/usr/sbin/cron"
My host is not running SELinux, so the rules I found (below) do not work:
-a never,user -F subj_type=crond_t
-a exit,never -F subj_type=crond_t
I tried this:
-a never,user -F exe=/usr/sbin/cron
That does not work either.
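A hedged guess at why the exe-based rule is ignored: audit "never" rules only suppress events if they are loaded before any "always" rules, and the exe field is matched on the exit list. A sketch of a rules file (ordering and exact action,list syntax can vary across auditd versions, so treat this as a starting point rather than a verified rule):

```
# /etc/audit/rules.d/05-exclude-cron.rules
# "never" rules must be loaded before any "always" rules to take effect.
-a never,exit -F exe=/usr/sbin/cron
```

After editing, reload the rules with `augenrules --load` and confirm the rule is active with `auditctl -l`.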
Thanks for the help.
Inazo
(101 rep)
Feb 22, 2021, 01:38 PM
• Last activity: Feb 22, 2021, 03:14 PM
0
votes
1
answers
5465
views
Kibana service won't start
I am using Manjaro and installed elasticsearch and kibana with
yay -S elasticsearch kibana
Starting the elasticsearch service works well
sudo systemctl start elasticsearch
I've configured kibana with the basic settings in /etc/kibana/kibana.yml:
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]
But running kibana always fails:
systemctl status kibana
● kibana.service - Kibana - dashboard for Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2020-11-13 12:10:13 CET; 5min ago
Process: 1609 ExecStart=/usr/bin/node --max-old-space-size=512 /usr/share/kibana/src/cli --config=/etc/kibana/kibana.yml (code=exited, status=1/FAILURE)
Main PID: 1609 (code=exited, status=1/FAILURE)
Nov 13 12:10:13 Trinity systemd: kibana.service: Scheduled restart job, restart counter is at 5.
Nov 13 12:10:13 Trinity systemd: Stopped Kibana - dashboard for Elasticsearch.
Nov 13 12:10:13 Trinity systemd: kibana.service: Start request repeated too quickly.
Nov 13 12:10:13 Trinity systemd: kibana.service: Failed with result 'exit-code'.
Nov 13 12:10:13 Trinity systemd: Failed to start Kibana - dashboard for Elasticsearch.
Maybe I am overlooking something. What should I do to start it properly?
journalctl -u kibana
Nov 13 12:10:10 Trinity systemd: Started Kibana - dashboard for Elasticsearch.
Nov 13 12:10:10 Trinity node: Kibana does not support the current Node.js version v15.0.1. Please use Node.js v10.22.1.
Nov 13 12:10:10 Trinity systemd: kibana.service: Main process exited, code=exited, status=1/FAILURE
Nov 13 12:10:10 Trinity systemd: kibana.service: Failed with result 'exit-code'.
Nov 13 12:10:11 Trinity systemd: kibana.service: Scheduled restart job, restart counter is at 1.
Nov 13 12:10:11 Trinity systemd: Stopped Kibana - dashboard for Elasticsearch.
Nov 13 12:10:11 Trinity systemd: Stopped Kibana - dashboard for Elasticsearch.
Nov 13 12:10:11 Trinity systemd: Started Kibana - dashboard for Elasticsearch.
Nov 13 12:10:11 Trinity node: Kibana does not support the current Node.js version v15.0.1. Please use Node.js v10.22.1.
Nov 13 12:10:11 Trinity systemd: kibana.service: Main process exited, code=exited, status=1/FAILURE
Nov 13 12:10:11 Trinity systemd: kibana.service: Failed with result 'exit-code'.
Nov 13 12:10:11 Trinity systemd: kibana.service: Scheduled restart job, restart counter is at 2.
Nov 13 12:10:11 Trinity systemd: Stopped Kibana - dashboard for Elasticsearch.
Nov 13 12:10:11 Trinity systemd: Started Kibana - dashboard for Elasticsearch.
Nov 13 12:10:11 Trinity node: Kibana does not support the current Node.js version v15.0.1. Please use Node.js v10.22.1.
Nov 13 12:10:11 Trinity systemd: kibana.service: Main process exited, code=exited, status=1/FAILURE
Nov 13 12:10:11 Trinity systemd: kibana.service: Failed with result 'exit-code'.
Nov 13 12:10:12 Trinity systemd: kibana.service: Scheduled restart job, restart counter is at 3.
Nov 13 12:10:12 Trinity systemd: Stopped Kibana - dashboard for Elasticsearch.
Nov 13 12:10:12 Trinity systemd: Started Kibana - dashboard for Elasticsearch.
Nov 13 12:10:12 Trinity node: Kibana does not support the current Node.js version v15.0.1. Please use Node.js v10.22.1.
Nov 13 12:10:12 Trinity systemd: kibana.service: Main process exited, code=exited, status=1/FAILURE
Nov 13 12:10:12 Trinity systemd: kibana.service: Failed with result 'exit-code'.
Nov 13 12:10:12 Trinity systemd: kibana.service: Scheduled restart job, restart counter is at 4.
Nov 13 12:10:12 Trinity systemd: Stopped Kibana - dashboard for Elasticsearch.
Nov 13 12:10:12 Trinity systemd: Started Kibana - dashboard for Elasticsearch.
Nov 13 12:10:12 Trinity node: Kibana does not support the current Node.js version v15.0.1. Please use Node.js v10.22.1.
Nov 13 12:10:12 Trinity systemd: kibana.service: Main process exited, code=exited, status=1/FAILURE
Nov 13 12:10:12 Trinity systemd: kibana.service: Failed with result 'exit-code'.
Nov 13 12:10:13 Trinity systemd: kibana.service: Scheduled restart job, restart counter is at 5.
Nov 13 12:10:13 Trinity systemd: Stopped Kibana - dashboard for Elasticsearch.
Nov 13 12:10:13 Trinity systemd: kibana.service: Start request repeated too quickly.
Nov 13 12:10:13 Trinity systemd: kibana.service: Failed with result 'exit-code'.
Nov 13 12:10:13 Trinity systemd: Failed to start Kibana - dashboard for Elasticsearch.
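The journal lines above state the cause directly: Kibana refuses to run under Node.js v15.0.1 and asks for v10.22.1. The Arch/Manjaro package runs Kibana against the system /usr/bin/node, so the system Node is too new for it. A small sketch of the version check Kibana is effectively doing (both version strings hardcoded from the journal output; on the real host substitute `$(node --version)`):

```shell
required="v10.22.1"   # from the journal message
current="v15.0.1"     # on the real host: current="$(node --version)"
req_major="${required#v}"; req_major="${req_major%%.*}"
cur_major="${current#v}"; cur_major="${cur_major%%.*}"
if [ "$cur_major" -ne "$req_major" ]; then
  echo "mismatch: Kibana needs Node $required, found $current"
fi
```

One way out, assuming a version manager such as nvm is acceptable, is to install the required Node release and point the service's ExecStart at that binary; another is to use Elastic's official Kibana tarball, which bundles its own Node runtime instead of using the system one.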
betaros
(103 rep)
Nov 13, 2020, 11:46 AM
• Last activity: Nov 20, 2020, 01:18 PM
0
votes
0
answers
321
views
Kibana showing error
I've installed Kibana on Linux and assigned port 5601 to an external host (log.gurukul.ninja), but when I run the ./bin/kibana command in a PuTTY terminal, I get this error:
log [03:03:35.042] [warning][plugins-discovery] Expect plugin "id" in camelCase, but found: beats_management
log [03:03:35.053] [warning][plugins-discovery] Expect plugin "id" in camelCase, but found: triggers_actions_ui
log [03:03:42.629] [info][plugins-service] Plugin "visTypeXy" is disabled.
log [03:03:42.630] [info][plugins-service] Plugin "auditTrail" is disabled.
log [03:03:42.871] [warning][config][deprecation] Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0."
log [03:03:43.044] [fatal][root] { Error: listen EADDRNOTAVAIL: address not available 175.101.13.126:5601
at Server.setupListenHandle [as _listen2] (net.js:1263:19)
at listenInCluster (net.js:1328:12)
at GetAddrInfoReqWrap.doListen (net.js:1461:7)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:61:10)
code: 'EADDRNOTAVAIL',
errno: 'EADDRNOTAVAIL',
syscall: 'listen',
address: '175.101.13.126',
port: 5601 }
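EADDRNOTAVAIL means the kernel has no interface configured with 175.101.13.126, so nothing can listen on it; that is typical when the machine sits behind NAT and 175.101.13.126 is only its public address. A quick check sketch (the `ip -4 -o addr show` output is hardcoded as a sample here; run the real command on the affected host):

```shell
# Sample of `ip -4 -o addr show` output; replace with the real command's output.
addrs="1: lo    inet 127.0.0.1/8
2: eth0  inet 192.168.1.20/24"
want="175.101.13.126"
if ! printf '%s\n' "$addrs" | grep -qF "inet $want/"; then
  echo "$want is not bound to any interface"
fi
```

If the address is not local, setting server.host: "0.0.0.0" in kibana.yml makes Kibana listen on all local interfaces, and the router or DNS entry for log.gurukul.ninja maps the public address to the machine.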
Thank you !
Bhavin Varsur
(101 rep)
Oct 12, 2020, 05:03 AM
0
votes
0
answers
120
views
heartbeat can't run
I have an ELK server and am sending Heartbeat data to it. I added a monitor in the monitors.d directory and added the
setup.dashboards.enabled: true
line in the heartbeat.yml file, but after restarting the heartbeat service, the service runs for 2-3 seconds and then exits with this output message:
> Ping remote services for availability and log results to Elasticsearch or send to Logstash
I deleted the added setup.dashboards.enabled: true line and no longer get that message; Heartbeat works and sends all beats to the ELK server, but doesn't send the monitor's index.
Can anybody help me solve this issue?

khachikyan97
(115 rep)
Sep 19, 2020, 10:54 AM
2
votes
1
answers
3047
views
How to forward rsyslog logs from multiple locations to ELK and make them show in Kibana?
Link to a previously answered question: [rsyslog server template consideration for multiple remote hosts](https://unix.stackexchange.com/questions/477403/rsyslog-server-template-consideration-for-multiple-remote-hosts)
@meuh, I find this post very useful, as I am currently working on this configuration.
I have done the steps which are mentioned above and it's working fine.
I now have an ELK setup where rsyslog forwards the logs to it.
My templates are:
$template templmesg,"/data01/RemoteLogs/DLF/%$YEAR%/%$MONTH%/%HOSTNAME%/%HOSTNAME%-%$DAY%-%$MONTH%-%$YEAR%.log"
$template mylogsec,"/data01/RemoteLogs/Logserver/%$YEAR%/%$MONTH%/%HOSTNAME%/%HOSTNAME%-%$DAY%-%$MONTH%-%$YEAR%.log"
if $fromhost startswith "10.100.10" then ?templmesg
& stop
if $fromhost startswith "10.100.112" then ?mylogsec
& stop
So I have two locations where logs are stored.
Because logs are stored in multiple locations (DLF and Logserver), Kibana (from ELK) does not show all the logs received from rsyslog. It only reads logs from one location, the DLF/ directory, and not from Logserver.
Now I am stuck and don't know how to forward rsyslog logs from multiple locations to ELK and make them show in Kibana; or is there any specific configuration in rsyslog that I need to work out?
Below is the rsyslog configuration file:
# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html
#### MODULES ####
# The imjournal module bellow is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark # provides --MARK-- message capability
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
#### GLOBAL DIRECTIVES ####
# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog
# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
$OmitLocalLogging on
# File to store the position in the journal
$IMJournalStateFile imjournal.state
#### RULES ####
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console
$template templmesg,"/data01/RemoteLogs/DLF/%$YEAR%/%$MONTH%/%HOSTNAME%/%HOSTNAME%-%$DAY%-%$MONTH%-%$YEAR%.log"
$template mylogsec,"/data01/RemoteLogs/DLF/Logserver/%$YEAR%/%$MONTH%/%HOSTNAME%/%HOSTNAME%-%$DAY%-%$MONTH%-%$YEAR%.log"
#if $fromhost startswith "10.100.10" then ?templmesg
#& stop
if $fromhost startswith "10.100.112" then ?mylogsec
& stop
local0.* ?templmesg
local1.* ?templmesg
local2.* ?templmesg
local3.* ?templmesg
local4.* ?templmesg
local5.* ?templmesg
local6.* ?templmesg
template(name="json-template"
type="list") {
constant(value="{")
constant(value="\"@timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
constant(value="\",\"@version\":\"1")
constant(value="\",\"message\":\"") property(name="msg" format="json")
constant(value="\",\"sysloghost\":\"") property(name="hostname")
constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
constant(value="\",\"programname\":\"") property(name="programname")
constant(value="\",\"procid\":\"") property(name="procid")
constant(value="\"}\n")
}
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
#
#$createDirs on
*.info;mail.none;authpriv.none;cron.none;local0.none;local1.none;local2.none;local3.none;local4.none;local5.none;local6.none ?templmesg
# The authpriv file has restricted access.
authpriv.* ?templmesg
# Log all the mail messages in one place.
mail.* ?templmesg
# Log cron stuff
cron.* ?templmesg
# Everybody gets emergency messages
#*.emerg :omusrmsg:*
# Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler
# Save boot messages also to boot.log
local7.* ?templmesg
# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList # run asynchronously
#$ActionResumeRetryCount -1 # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
*.* @10.100.10.30:10514;json-template
# ### end of the forwarding rule ###
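One point worth separating in the configuration above: the $template lines only control where rsyslog writes log files on disk, while the final `*.* @10.100.10.30:10514;json-template` action forwards every message to ELK over UDP regardless of which file it also went to. If the goal is instead to ship the already-written files from both directory trees, one common approach is a file shipper such as Filebeat watching both paths. A hedged sketch (the paths come from the templates above; the Logstash output and port 5044 are assumptions about the ELK side):

```yaml
# filebeat.yml (sketch; adjust the output to whatever input the ELK host exposes)
filebeat.inputs:
  - type: log
    paths:
      - /data01/RemoteLogs/DLF/**/*.log
      - /data01/RemoteLogs/Logserver/**/*.log
output.logstash:
  hosts: ["10.100.10.30:5044"]
```

Either route works, but mixing them (UDP forward for some hosts, file shipping for others) makes it hard to tell in Kibana which path an event took, so picking one transport for all sources is usually cleaner.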
viggy9816
(23 rep)
Jan 16, 2019, 09:26 AM
• Last activity: Feb 23, 2019, 01:43 AM
0
votes
1
answers
537
views
Kibana- Want to split vertical bars based on my log fields
I have an application log file consisting of the following log levels: INFO, WARN, ERROR, DEBUG. The following filter criteria work fine in the logstash config file:
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} \[%{DATA:thread_name}\]?-\[%{DATA:class}\] %{GREEDYDATA:message}" }
  }
  date {
    match => ["timestamp", "yyyy-MM-dd HH:mm:ss,SSS"]
    target => "@timestamp"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
I can see the log-level field in the "Discover" view of Kibana. However, I would like to visualize my app log as follows: split a vertical bar at a given moment to show how many ERROR logs, how many INFO logs, etc. were hit at that moment.
When I go to "Visualize" tab and try to do "Add sub-buckets", "split bars" on X-axis, sub-aggregation="Terms"; I cannot see the field: "log-level" under the selectable "Field" options.
Could you please help me to split the bars based on log-level?
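A likely reason the field is missing from the Terms field list: in Elasticsearch 5.x, Logstash-indexed strings are analyzed text, and Terms aggregations need a non-analyzed (keyword) field. With the default Logstash index template there is usually a `log-level.keyword` sub-field to select instead. If no such sub-field exists, an index template along these lines makes the field aggregatable (a sketch: the template name and index pattern are assumptions, and it only affects newly created indices):

```
PUT _template/app-log-levels
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "log-level": { "type": "keyword" }
      }
    }
  }
}
```

After the template is in place and a new daily index has been created, refreshing the index pattern in Kibana's Management tab should make log-level selectable in the Terms sub-aggregation.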
Thanks.
Murat
(335 rep)
Jan 9, 2017, 11:17 AM
• Last activity: Mar 21, 2018, 11:47 AM
0
votes
2
answers
758
views
How to configure Simple Event Correlator (SEC) to send info about mail delivery failure
My log file contains the following 3 log entries:
2017-11-16 15:50:45 1eFLV7-0003so-Cd R=1eFLV7-0003sZ-4v U=Debian-exim P=local S=1853 T="Mail delivery failed: returning message to sender" from
2017-11-16 15:50:45 1eFLV7-0003so-Cd => admins@xxx.com R=dnslookup T=remote_smtp H=smtp-51.xxx.com [xxx.xx.xx.xx] X=TLS1.2:DHE_RSA_AES_128_CBC_SHA1:128
2017-11-16 15:50:45 1eFLV7-0003so-Cd Completed
I want to have an email sent to me, when an entry "Mail delivery failed*admins@.xxx.com" appears in the log file.
How can I achieve this?
Maybe SEC - Simple Event Correlator can help me?
But the configuration (pattern) below is not working for me.
type=SingleWithThreshold
ptype=RegExp
pattern=Mail delivery failed: returning message to sender*admins@xxx.com
desc=Problem with mail admin@xxx.com
action=pipe '%s' /usr/bin/mail -s 'ERROR SEND MAIL' me@xxx.com
window=1
thresh=1
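A hedged note on the rule above: with ptype=RegExp the pattern is a regular expression, so a bare `*` (shell-glob style) does not mean "any text"; in a regex that is `.*`. Also, in the quoted log the failure text and the admins@xxx.com address appear on different lines, so a single-line pattern can only match one of them. A sketch matching just the failure line:

```
type=SingleWithThreshold
ptype=RegExp
pattern=Mail delivery failed: returning message to sender
desc=mail delivery failure detected
action=pipe '%s' /usr/bin/mail -s 'ERROR SEND MAIL' me@xxx.com
window=1
thresh=1
```

Correlating the failure line with the recipient line would require capturing the exim queue id (the 1eFLV7-0003so-Cd token) in the pattern and keying a second rule on it; SEC's Pair rule type is designed for exactly that kind of two-line correlation.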
debek
(237 rep)
Dec 28, 2017, 08:56 AM
• Last activity: Jan 1, 2018, 02:58 PM
0
votes
1
answers
200
views
Send specific log with specific phrase to my mail
I want to send a specific log entry that contains a specific phrase to my mail.
For example:
ERROR LOG SOMETHING.COM IP XX.XXX.XXX.XXX PORT:2343 Bad XXXXXXX
If the above log contains the phrase SOMETHING.COM, send this log entry to my email.
Is it possible in logwatch or kibana? Or maybe something else?
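Outside of Kibana, a cron-driven shell pipeline is enough for this: grep for the phrase and pipe any matches to mail. A minimal sketch (the sample line is hardcoded from the question; on a real host grep the actual log file, and a working `mail` command and the address me@example.com are assumptions):

```shell
phrase="SOMETHING.COM"
# Sample line; on a real host: matches="$(grep -F "$phrase" /var/log/app.log)"
sample="ERROR LOG SOMETHING.COM IP XX.XXX.XXX.XXX PORT:2343 Bad XXXXXXX"
matches="$(printf '%s\n' "$sample" | grep -F "$phrase")"
if [ -n "$matches" ]; then
  # Real delivery: printf '%s\n' "$matches" | mail -s "Log alert: $phrase" me@example.com
  echo "would mail: $matches"
fi
```

logwatch can do something similar through a custom service filter, and Elastic's Watcher alerting does it inside the stack itself, but both take considerably more setup than a cron grep.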
debek
(237 rep)
Dec 8, 2017, 02:34 PM
• Last activity: Dec 9, 2017, 12:54 PM
0
votes
1
answers
629
views
How to enable kibana at startup
I just downloaded the kibana-4.3 tar file, extracted it, and it works fine. But I want to enable the service at system startup, and I get this error when using the chkconfig command:
service kibana does not support chkconfig
Any workaround?
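chkconfig only understands SysV init scripts, and the tarball ships neither an init script nor a unit file, so there is nothing for it to register. On a systemd host, a minimal unit file does the job (a sketch: the extraction path /opt/kibana-4.3 and the kibana user are assumptions, adjust both to the actual setup):

```ini
# /etc/systemd/system/kibana.service
[Unit]
Description=Kibana
After=network.target

[Service]
ExecStart=/opt/kibana-4.3/bin/kibana
User=kibana
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl daemon-reload` followed by `systemctl enable --now kibana`. On a SysV-only host, a wrapper init script with chkconfig headers (a `# chkconfig: 2345 95 05` comment block) would be needed instead.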
Ijaz Ahmad
(7382 rep)
Jan 27, 2016, 05:14 PM
• Last activity: Nov 25, 2017, 10:11 AM
2
votes
1
answers
265
views
Tile/Geographic Map in Kibana not working
I am trying to create a geographical map of my data in Kibana 5.01, and it does not work. The fact is that I do not even have the geoip.field that is required in the menu.
I am sending data from IntelMQ, which is processed by logstash to get into Elasticsearch. In the fields received I only have source_ip.
What to do?
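The geoip.* fields are not sent by the source; Logstash's geoip filter adds them by looking up an IP field against its bundled GeoLite database. Since the events carry source_ip, a filter along these lines should produce them (a sketch; with the default logstash-* index template the resulting geoip.location field is mapped as geo_point, which is what the tile map needs):

```
filter {
  geoip {
    source => "source_ip"
  }
}
```

If the index name does not match logstash-*, the geo_point mapping has to be added to the index template by hand, otherwise the tile map still will not offer the field.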
Rui F Ribeiro
(57882 rep)
Nov 26, 2016, 03:55 PM
• Last activity: May 12, 2017, 02:05 PM
Showing page 1 of 13 total questions