Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
1
answers
7218
views
keytool error: java.io.IOException: Invalid keystore format
I have a 3-node ELK stack (Elasticsearch v7.17). After a reboot, the Kibana web interface reports an error "Kibana server is not ready yet".
The SSL certs were expired, so I re-created them (for the ELK CA, all 3 nodes, Kibana, and Logstash). However, the error persists, and /var/log/kibana/kibana.log reports an error:
{"type":"log","@timestamp":"2023-03-29T17:19:39+02:00","tags":["error","elasticsearch-service"],"pid":8271,"message":"Unable to retrieve version information from Elasticsearch nodes. security_exception: [security_exception] Reason: unable to authenticate user [kibana] for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]"}
The command /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive -v
results in this output:
Running with configuration path: /etc/elasticsearch
Testing if bootstrap password is valid for http://10.0.0.1:9200/_security/_authenticate?pretty
{
"username" : "elastic",
"roles" : [
"superuser"
],
"full_name" : null,
"email" : null,
"metadata" : {
"_reserved" : true
},
"enabled" : true,
"authentication_realm" : {
"name" : "reserved",
"type" : "reserved"
},
"lookup_realm" : {
"name" : "reserved",
"type" : "reserved"
},
"authentication_type" : "realm"
}
Checking cluster health: http://10.0.0.1:9200/_cluster/health?pretty
{
"error" : {
"root_cause" : [
{
"type" : "master_not_discovered_exception",
"reason" : null
}
],
"type" : "master_not_discovered_exception",
"reason" : null
},
"status" : 503
}
Failed to determine the health of the cluster running at http://10.0.0.1:9200
Unexpected response code from calling GET http://10.0.0.1:9200/_cluster/health?pretty
Cause: master_not_discovered_exception
The Elasticsearch logs say:
[2023-03-30T13:50:58,432][WARN ][o.e.d.PeerFinder ] [node1] address [10.0.0.2:9300], node [null], requesting [false] connection failed: [][10.0.0.2:9300] general node connection failure: handshake failed because connection reset
[2023-03-30T13:50:58,432][WARN ][o.e.t.TcpTransport ] [node1] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.0.0.1:60126, remoteAddress=node2.example.org/10.0.0.2:9300, profile=default}], closing connection
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors
No password was changed. The problem appears to be with the new SSL certificates. Therefore, I have created a new keystore via the command
/usr/share/elasticsearch/bin/elasticsearch-keystore create
and I'm trying to add the CA certificate (and then others) to it:
keytool -importcert -trustcacerts -noprompt -keystore /etc/elasticsearch/elasticsearch.keystore -file /etc/elasticsearch/certs/ca.crt
However, I get the following error:
keytool error: java.io.IOException: Invalid keystore format
I have converted the CA cert into PKCS12 and tried to import it in that format (ca.p12), since the keystore is defined as type PKCS12 in my config, but I get the same error.
What's wrong?
Excerpts of the /etc/elasticsearch/elasticsearch.yml file:
xpack.security.transport.ssl.keystore.path: elasticsearch.keystore
xpack.security.transport.ssl.keystore.type: PKCS12
xpack.security.transport.ssl.truststore.path: elasticsearch.keystore
xpack.security.transport.ssl.truststore.type: PKCS12
xpack.security.transport.ssl.verification_mode: certificate
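For context, the file produced by elasticsearch-keystore create is Elasticsearch's secure-settings store, not a PKCS12 SSL keystore, which is why keytool cannot read it. A minimal sketch of building a separate PKCS12 truststore for the CA instead (the truststore filename, alias, and password here are illustrative, not from this setup):
keytool -importcert -trustcacerts -noprompt \
  -keystore /etc/elasticsearch/certs/transport-truststore.p12 \
  -storetype PKCS12 -storepass changeit \
  -alias elk-ca -file /etc/elasticsearch/certs/ca.crt
keytool creates the .p12 file if it does not yet exist; the truststore.path and truststore.type settings above would then point at that file rather than at elasticsearch.keystore.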
dr_
(32068 rep)
Mar 30, 2023, 08:21 AM
• Last activity: Mar 31, 2023, 03:20 PM
0
votes
0
answers
1083
views
Unable to send logs from rsyslog to logstash and elasticsearch
I am using Ubuntu and I installed the ELK stack version 8.5 on the same machine. I did the necessary configuration for each of the services (Logstash, Elasticsearch, Kibana), and I equally configured rsyslog to send logs to Logstash (defining an index to be created each day) and Logstash to send them on to Elasticsearch. The issue is that I can't see any logs in Elasticsearch when rsyslog is the input in Logstash, whereas when I use the file input with the file's path, it works (I can see the index in Elasticsearch and Kibana too). But I also realised it doesn't show for some files: it works for some files and not for others. What can be the issue?
rsyslog.conf file
#################
#### MODULES ####
#################
module(load="imuxsock") # provides support for local system logging
#module(load="immark") # provides --MARK-- message capability
# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")
# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")
# provides kernel logging support and enable non-kernel klog messages
module(load="imklog" permitnonkernelfacility="on")
Configuration files in the /etc/rsyslog.d directory
01-json-template.conf file
template(name="json-template"
type="list") {
constant(value="{")
constant(value="\"@timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
constant(value="\",\"@version\":\"1")
constant(value="\",\"message\":\"") property(name="msg" format="json")
constant(value="\",\"sysloghost\":\"") property(name="hostname")
constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
constant(value="\",\"programname\":\"") property(name="programname")
constant(value="\",\"procid\":\"") property(name="procid")
constant(value="\"}\n")
}
50-default.conf file
# Default rules for rsyslog.
#
# For more information see rsyslog.conf(5) and /etc/rsyslog.conf
*.* @localhost:514
#
# First some standard log files. Log by facility.
#
auth,authpriv.* /var/log/auth.log
*.*;auth,authpriv.none -/var/log/syslog
#cron.* /var/log/cron.log
#daemon.* -/var/log/daemon.log
kern.* -/var/log/kern.log
#lpr.* -/var/log/lpr.log
mail.* -/var/log/mail.log
#user.* -/var/log/user.log
#
# Logging for the mail system. Split it up so that
# it is easy to write scripts to parse these files.
#
#mail.info -/var/log/mail.info
#mail.warn -/var/log/mail.warn
mail.err /var/log/mail.err
60-output.conf file
# This line sends all lines to defined IP address at port 10514,
# using the "json-template" format template
*.* @localhost:10514;json-template
logstash configuration file for rsyslog
input {
udp {
host => "localhost"
port => 10514
codec => "json"
type => "rsyslog"
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
index => "rsyslog-%{+YYYY.MM.dd}"
}
}
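One way to check the UDP path end to end is to emit a test message and then search the daily index; a sketch using the port and index pattern from the configs above:
# send a test message through the local syslog socket;
# rsyslog forwards it to localhost:10514 per 60-output.conf
logger -p user.notice "logstash-udp-test $(date +%s)"
# then look for it in Elasticsearch
curl -s 'localhost:9200/rsyslog-*/_search?q=message:logstash-udp-test&pretty'
If a test message arrives this way but entries from some files do not, the gap is more likely in rsyslog's file/facility rules than in the Logstash pipeline.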
Ngouaba Rosalie
(31 rep)
Nov 29, 2022, 09:44 AM
• Last activity: Nov 29, 2022, 02:12 PM
-1
votes
2
answers
34233
views
How do i completely uninstall ELK (Elasticsearch, Logstash, Kibana)?
I found on the internet that we have to uninstall each part of ELK one by one: stand-alone Kibana, Elasticsearch, and Logstash. Is there any way that doesn't require uninstalling them one by one, but uses only a single command?
This is the package repository that I used in my sources.list:
deb https://artifacts.elastic.co/packages/6.x/apt stable main
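Since all three components come from this same APT repository, they can at least be purged in a single apt invocation; a sketch for Debian/Ubuntu (data and config directories left behind by the packages may still need manual cleanup):
sudo apt-get purge --auto-remove elasticsearch logstash kibana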
gagantous
(225 rep)
Dec 16, 2017, 03:27 PM
• Last activity: Mar 14, 2022, 10:55 PM
1
votes
1
answers
575
views
How to make bash script invoking logstash return prompt
For various reasons, I am not running logstash (7.10.1) as a service, but rather invoking it on-demand, in a bash script:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/my_ls.conf &
echo ""
echo "#########################################################"
read -p "Press enter to continue "
It works well, but after successfully creating an elasticsearch index, it always pauses with:
[INFO ] [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>1.3}
[INFO ] [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
In order to make it return to the CLI prompt, I need to run the following from another SSH terminal:
pkill -f logstash
This is of course inconvenient, and I am looking for a way to make the bash script prompt with "Press any key to exit".
My problem is that the statements after the logstash invocation (with the & appended), including the prompt read -p "Press enter to continue", are displayed before logstash actually starts doing its work, and the bash script never exits to the CLI prompt.
What is the proper way to make the bash script prompt with "Press any key to exit" when logstash finishes the index creation?
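A sketch of one approach: capture the background PID and block on it with wait, so the prompt only appears after the logstash process has exited. This assumes the pipeline shuts down on its own once its input is exhausted (for example, a finite file read); otherwise logstash never exits and wait blocks forever:
#!/usr/bin/env bash
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/my_ls.conf &
ls_pid=$!
wait "$ls_pid"                          # blocks until logstash terminates
read -n 1 -s -r -p "Press any key to exit"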
datsb
(323 rep)
Jan 4, 2022, 04:00 PM
• Last activity: Jan 5, 2022, 02:49 PM
0
votes
1
answers
4702
views
how to enable debug logs in stdout for logstash?
I'm struggling as a newbie with logstash; below is some info about my environment:
Logstash Version: logstash-7.16.2-1.x86_64
java Version: openjdk version "11.0.13" 2021-10-19 LTS
# Logstash Conf
input {
stdin { }
}
output {
stdout {
debug => true
}
}
I run logstash with the command below and get an error:
# /usr/share/logstash/bin/logstash -f simple.conf
[ERROR] 2022-01-03 08:26:36.742 [Converge PipelineAction::Create] stdout - Unknown setting 'debug' for stdout
How do I enable debug logging in logstash, outputting the full events as well as the message section?
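For reference, recent Logstash versions removed the stdout plugin's debug option; the equivalent full-event dump is the rubydebug codec. A minimal sketch of the same config with that change:
input {
  stdin { }
}
output {
  stdout {
    codec => rubydebug    # pretty-prints every event, including the message field
  }
}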
Sollosa
(1993 rep)
Jan 3, 2022, 01:30 PM
• Last activity: Jan 3, 2022, 02:27 PM
0
votes
1
answers
1006
views
prevent inode reuse
We are using Logstash to ingest our logs and we are facing some issues due to inodes being reused. We tried all possible options on the Logstash side, so we are exploring the OS side.
As far as I can see, if I create a file, delete it, and later create a new one, most of the time it will get the same inode:
[root@XXXX~]# touch a.txt
[root@XXXX~]# stat -c%i a.txt
671092802
[root@XXXX~]# rm a.txt
rm: remove regular empty file ‘a.txt’? y
[root@XXXX~]# touch a.txt
[root@XXXX~]# stat -c%i a.txt
671092802
[root@XXXX~]# rm a.txt
rm: remove regular empty file ‘a.txt’? y
[root@XXXX~]# touch b.txt
[root@XXXX~]# stat -c%i b.txt
671092802
How can I prevent the OS (with XFS) from reusing recently freed inodes for new files?
Ideally, we would like to define a period of time between a file being deleted and its inode being reused. The disk is big, so we don't expect issues with reaching inode limits.
Thanks
sickfear
(3 rep)
Dec 7, 2021, 11:40 AM
• Last activity: Dec 7, 2021, 01:14 PM
1
votes
2
answers
4883
views
logstash regex match in if condition
In logstash filtering, I have multiple tags set up based upon different error conditions, and all the tags have a prefix, something like "abc:".
In the output, I want to send an email whenever any tag matching "abc:*" exists in tags.
I haven't come across such a condition reading the docs.
Mostly it says:
if "abc" in [tags] {
...
...
}
However, I want the condition to match any tag with the "abc:" prefix. Any ideas?
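Logstash conditionals only test exact membership on array fields, so one workaround (a sketch using the ruby filter; the metadata flag name is made up) is to precompute a boolean and branch on it in the output:
filter {
  ruby {
    # set a flag when any tag starts with the "abc:" prefix
    code => "event.set('[@metadata][abc_tag]', (event.get('tags') || []).any? { |t| t.start_with?('abc:') })"
  }
}
output {
  if [@metadata][abc_tag] {
    # email output goes here
  }
}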
sudobash
(61 rep)
Sep 11, 2015, 06:23 PM
• Last activity: Oct 14, 2021, 11:54 PM
1
votes
0
answers
352
views
Nginx in UDP load balancing continue to send to an unreachable upstream server
I use Nginx to load balance traffic coming from UDP syslog sources to logstash.
Configuration:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
worker_rlimit_nofile 1000000;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
include /etc/nginx/conf.d/*.conf;
#UDP syslog load balancing
stream {
server {
listen 514 udp;
proxy_pass logstash_servers;
proxy_timeout 1s;
proxy_responses 0;
proxy_bind $remote_addr transparent;
}
upstream logstash_servers {
server 192.168.2.90:514 fail_timeout=10s;
server 192.168.2.95:514 fail_timeout=10s;
}
}
It works fine, but if one of my upstream logstash servers is down, Nginx does not take it into account and I receive only half of the messages on the remaining upstream logstash server.
How can I tell Nginx to use only the remaining logstash server if one is down?
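One relevant detail: with proxy_responses 0, nginx never expects a reply from the upstream, so it has no passive signal for marking a UDP peer as failed (active UDP health checks are a commercial nginx feature). A stopgap sketch is to drain a known-dead peer by hand with the down flag:
upstream logstash_servers {
    server 192.168.2.90:514 fail_timeout=10s;
    server 192.168.2.95:514 down;   # manually taken out of rotation
}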
Atreiide
(11 rep)
Sep 30, 2021, 09:39 AM
• Last activity: Sep 30, 2021, 09:57 AM
0
votes
0
answers
318
views
How to monitor a directory for changes across reboots?
I'm searching for a tool that will allow me to run a program for every change in a directory, but is guaranteed to run exactly once per change even across system reboots. For the purposes of this question, (inode, mtime, size) is sufficient to detect a "change". Does something like this already exist?
Things that are insufficient:
- [stat](https://linux.die.net/man/1/stat). I could hand-roll a script that stored a stat database and run it from cron (see the sketch after this list), but ideally I'd like something that tied into inotify while it was running to instantly detect changes.
- [inotifywait](https://linux.die.net/man/1/inotifywait). Obviously, my inotify watches would be destroyed on reboot. I'm looking for something that could (for example) be run by cron every 5 minutes and still detect every change only once.
- [logstash](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html). I know that logstash can monitor a directory of files in the manner that I'm looking for, but it's designed to handle log files. It may be possible to hack logstash to do what I want, but I couldn't figure out how.
Does anything like this exist, or am I out of luck?
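A hand-rolled version of the stat-database idea from the list above might look like this (a sketch; the state file, watched directory, and handler paths are placeholders). It runs the handler once per (inode, mtime, size, path) tuple that is new since the last run, so it behaves the same from cron after a reboot:
#!/usr/bin/env bash
db=/var/lib/dirwatch/state    # placeholder state file
dir=/srv/data                 # placeholder directory to watch
mkdir -p "$(dirname "$db")"
touch "$db"

# snapshot the current (inode, mtime, size, path) of every file
find "$dir" -type f -printf '%i %T@ %s %p\n' | sort > "$db.new"

# lines only in the new snapshot are files that changed or appeared
comm -13 "$db" "$db.new" | while read -r inode mtime size path; do
    /usr/local/bin/handle-change "$path"   # placeholder handler
done
mv "$db.new" "$db"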
Ryan Patterson
(101 rep)
Feb 9, 2021, 05:27 AM
0
votes
0
answers
120
views
heartbeat can't run
I have an ELK server and send Heartbeat data to it. I added a monitor in the monitors.d directory and added the line
setup.dashboards.enabled: true
in the heartbeat.yml file, but after restarting the heartbeat service, the service works for 2-3 seconds and then exits with this output message:
> Ping remote services for availability and log results to Elasticsearch or send to Logstash
I deleted my added line setup.dashboards.enabled: true and no longer got that message; heartbeat worked and sent all beats to the ELK server, but didn't send the monitor's index.
Can anybody help me solve this issue?

khachikyan97
(115 rep)
Sep 19, 2020, 10:54 AM
2
votes
1
answers
11713
views
what is GREEDYDATA in elasticsearch
Reading the conf files of logstash, I found this in the filter conf:
grok {
match => { "message" => "Put\s*command\s*:\s+%{GREEDYDATA:command}" }
}
How does this filter work? I tried to search for GREEDYDATA but I couldn't understand it.
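For context, GREEDYDATA is one of the stock grok patterns and is defined simply as .*, so everything after the literal "Put command :" text is captured into the command field. A worked illustration (the sample log line is made up):
# sample log line:
#   Put command : tar -czf /backup/etc.tgz /etc
#
# after the grok filter above, the event gains the field:
#   command => "tar -czf /backup/etc.tgz /etc"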
I'm V
(43 rep)
Oct 27, 2019, 08:48 PM
• Last activity: Oct 27, 2019, 10:05 PM
1
votes
1
answers
433
views
rsyslog error in logstash
I have installed ELK and tried to configure an rsyslog server with logstash, but I am getting a lot of \ while executing
curl -XGET 'http://172.22.63.61:9200/logstash-*/_search?q=*&pretty'
Rsyslog server version: rsyslog-8.24.0-38.el7.x86_64
rsyslog config:
template(name="json_syslog"
type="list") {
constant(value="{")
constant(value="\"@timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
constant(value="\",\"type\":\"syslog_json")
constant(value="\",\"tag\":\"") property(name="syslogtag" format="json")
constant(value="\",\"relayhost\":\"") property(name="fromhost")
constant(value="\",\"relayip\":\"") property(name="fromhost-ip")
constant(value="\",\"logsource\":\"") property(name="source")
constant(value="\",\"hostname\":\"") property(name="hostname" caseconversion="lower")
constant(value="\",\"program\":\"") property(name="programname")
constant(value="\",\"priority\":\"") property(name="pri")
constant(value="\",\"severity\":\"") property(name="syslogseverity")
constant(value="\",\"facility\":\"") property(name="syslogfacility")
constant(value="\",\"severity_label\":\"") property(name="syslogseverity-text")
constant(value="\",\"facility_label\":\"") property(name="syslogfacility-text")
constant(value="\",\"message\":\"") property(name="rawmsg" format="json")
constant(value="\",\"end_msg\":\"")
constant(value="\"}\n")
}
*.* @@172.22.63.61:10514;json_syslog
logstash conf
input {
tcp {
host => "172.22.63.61"
port => 10514
codec => "json"
type => "rsyslog"
}
}
output {
if [type] == "rsyslog" {
elasticsearch {
hosts => [ "172.22.63.61:9200" ]
}
}
}
Curl command:
curl -XGET 'http://172.22.63.61:9200/logstash-*/_search?q=*&pretty'
Results in:
"_index" : "logstash-2019.10.10-000001",
"_type" : "_doc",
"_id" : "bo8mtW0BTtPUN--NaXNH",
"_score" : 1.0,
"_source" : {
"message" : "6,342][ERROR][logstash.codecs.json ] JSON parse error, original data now in message field {:error=>#, :data=>\\\"{\\\\\\\"@timestamp\\\\\\\":\\\\\\\"2019-10-10T06:10:56.305176-04:00\\\\\\\",\\\\\\\"@version\\\\\\\":\\\\\\\"1\\\\\\\",\\\\\\\"messages\\\\\\\":\\\\\\\"at [Source: (String)\\\\\\\\\\\\\\\"{\\\\\\\\\\\\\\\"@timestamp\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\"2019-10-10T06:10:56.300808-04:00\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"@version\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\"1\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"messages\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\"at [Source: (String)\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"{\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"@timestamp\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2019-10-10T06:10:56.297552-04:00\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"@version\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"1\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"messages\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2019-10-10T10:10:56.244345+00:00\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"@version\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"1\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"messages\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"Window manager warning: last_user_time (104853390) is greater than comparison timestamp (104853359). This most likely represents a buggy client sending inaccurate timestamps in messages such as _NET_ACTIVE_WINDOW. 
Trying to \\\\\\\\\\\\\\\"[truncated 3596 chars]; line: 1, column: 8193]>, :data=>\\\\\\\\\\\\\\\"{\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"@timestamp\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2019-10-10T06:10:56.300808-04:00\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"@version\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"1\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"messages\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"at [Source: (String)\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"{\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"@timestamp\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2019-10-10T06:10:56.297552-04:00\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"@version\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"1\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"messages\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2019-10-10T10:10:56.244345+00:00\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"@version\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"1\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"messages\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"Window manager warning: last_user_time (104853390) is greater than comparison timestamp (104853359). This most likely represents a buggy client sending inaccurate timestamps in messages such as _NET_ACTIVE_WINDOW. 
Trying to work around...\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"sysloghost\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"scnmgmt4\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"severity\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"warning\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"facility\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"user\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"porgramname\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"org.gnome.Shell.d\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"[truncated 1854 chars]; line: 1, column: 617]>, 
:data=>\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"{\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"@timestamp\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2019-10-10T06:10:56.297552-04:00\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"@version\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"1\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"messages\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2019-10-10T10:10:56.244345+00:00\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"@version\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"1\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"messages\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"Window manager warning: last_user_time (104853390) is greater than comparison timestamp (104853359). This most likely represents a buggy client sending inaccurate timestamps in messages such as _NET_ACTIVE_WINDOW. Trying to work around...\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"sysloghost\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"scnmgmt4\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"severity\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"warnin\n",
"@version" : "1",
"@timestamp" : "2019-10-10T10:10:56.315Z",
"port" : 42228,
"type" : "rsyslog",
"host" : "elkserver",
"tags" : [
"_jsonparsefailure"
MOBIN TM
(11 rep)
Oct 11, 2019, 10:53 AM
• Last activity: Oct 12, 2019, 10:11 AM
0
votes
1
answers
870
views
problem re-loading index from filebeat in elasticsearch
I am using the ELK stack (more an ELG stack, as I am using Grafana as the front end instead of Kibana for personal reasons). I am using Filebeat to send the log files to Logstash; they are then stored in Elasticsearch and displayed through Grafana. I have used this guide for the setup.
Now I have added another path in the filebeat.yml configuration file, deleted the previous indices in Elasticsearch, and then loaded the template again through the following command:
filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
However, the index is not registered in Elasticsearch, and
curl -XGET http://127.0.0.1:9200/_cat/indices?v
shows no index in Elasticsearch. After checking the Filebeat logs I found the following error:
2018-06-05T10:08:32.228+0500 INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-06-05T10:08:32.229+0500 INFO instance/beat.go:475 Beat UUID: edf1a2c9-0d7d-4c8a-9823-30bf64b72a4f
2018-06-05T10:08:32.229+0500 INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.4
2018-06-05T10:08:32.229+0500 INFO elasticsearch/client.go:145 Elasticsearch url: http://localhost:9200
2018-06-05T10:08:32.230+0500 INFO pipeline/module.go:76 Beat name: vbras
2018-06-05T10:08:32.231+0500 INFO elasticsearch/client.go:145 Elasticsearch url: http://localhost:9200
2018-06-05T10:08:32.245+0500 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.4
2018-06-05T10:08:32.249+0500 INFO template/load.go:73 Template already exists and will not be overwritten.
How can I resolve this issue?
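For what it's worth, that last log line means filebeat found the old index template and left it in place; filebeat only replaces an existing template when overwriting is explicitly enabled. A sketch reusing the command from the question with that setting added:
filebeat setup --template \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["localhost:9200"]' \
  -E setup.template.overwrite=true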
detectiveahg
(1 rep)
Jun 5, 2018, 07:57 AM
• Last activity: May 13, 2019, 10:48 AM
0
votes
1
answers
92
views
Logstash output to nginx
I have 3 nodes running a graylog-server cluster. I use nginx as a load balancer, with port 12301 open on nginx.
I want to send logs in JSON format from logstash to this nginx; nginx then load-balances them and sends them on to the graylog-servers. Which output plugin should I use in logstash?
logstash (input -> tcp) -> (output -> nginx port 12301)
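A plain TCP output with a line-oriented JSON codec would be one way to reach a stream port like this; a minimal sketch (the nginx hostname is a placeholder for the load balancer's address):
output {
  tcp {
    host => "nginx.example.org"   # placeholder: the nginx load balancer
    port => 12301
    codec => "json_lines"         # one JSON document per line
  }
}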
pyramid13
(639 rep)
Feb 25, 2019, 10:08 AM
• Last activity: Apr 4, 2019, 04:55 PM
1
votes
1
answers
675
views
Logstash Unrecognized service Amazon Linux
I have been following this tutorial to install the ELK stack on a remote server which runs Amazon Linux:
https://www.aytech.ca/blog/setup-elk-stack-amazon-linux/
I was able to install Elasticsearch and then start it as a service. Then I installed logstash. However, when I check the logstash service status using this command,
service logstash status
the console returns this error.
logstash: unrecognized service
However, when I grep'd for logstash, it gave this output. That means it is running, right?
Can someone provide a solution how to make this logstash run as a service?
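If the package did not register an init script, logstash ships a helper that generates one for the host's init system; a sketch using the standard package paths:
sudo /usr/share/logstash/bin/system-install
sudo service logstash start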


Sandun
(115 rep)
Jan 18, 2019, 02:58 PM
• Last activity: Jan 25, 2019, 03:57 AM
1
votes
1
answers
255
views
logstash - take 2 - filter to send messages from IntelMQ/python/redis to ELK
Following up on the heels of this question, https://stackoverflow.com/questions/40768603/logstash-trying-to-make-sense-of-strings-passed-by-intelmq-in-elasticsearch , I am trying to refine/create a filter to receive messages from logstash in kibana.
Whilst the original requirements and answer are almost to spec, some new bots added to IntelMQ now put spaces in the fields. Obviously, they completely break the filters and, worse yet, create spurious new fields and dates in Elasticsearch.
I have also found that the solution in the referred thread does not take the beginning and end of the strings well into account.
The string itself is similar to:
{u'feed': u'openbl', u'reported_source_ip': u'115.79.215.79', u'source_cymru_cc': u'VN', u'source_time': u'2016-06-25T11:15:14+00:00', u'feed_url': u'http://www.openbl.org/lists/date_all.txt ', u'taxonomy': u'Other', u'observation_time': u'2016-11-20T22:51:25', u'source_ip': u'115.79.215.79', u'source_registry': u'apnic', u'source_allocated': u'2008-07-17', u'source_bgp_prefix': u'115.79.192.0/19', u'type': u'blacklist', u'source_as_name': u'VIETEL-AS-AP Viettel Corporation, VN', u'source_asn':u'7552'}
What to do?
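One line of attack (a sketch, not a tested filter) is to normalise the Python-repr quoting and then hand the result to a JSON parse; fields with embedded spaces survive because quotes, not whitespace, delimit the values. The gsub patterns here are naive and would also mangle apostrophes inside values:
filter {
  mutate {
    # turn u'...' Python repr into JSON-style double quoting
    gsub => [ "message", "u'", '"',
              "message", "'", '"' ]
  }
  json {
    source => "message"
  }
}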
Rui F Ribeiro
(57882 rep)
Dec 1, 2016, 02:46 PM
• Last activity: Nov 2, 2018, 03:34 PM
0
votes
1
answers
537
views
Kibana- Want to split vertical bars based on my log fields
I have an application log file consisting of the following log levels: INFO, WARN, ERROR, DEBUG. The following filter criteria work fine in my logstash config file:
filter {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} \[%{DATA:thread_name}\]?-\[%{DATA:class}\] %{GREEDYDATA:message}" }
}
date {
match => ["timestamp", "yyyy-MM-dd HH:mm:ss,SSS"]
target => "@timestamp"
}
}
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}
I can see the log-level field in the "Discover" view of Kibana. However, I would like to visualize my app log as follows: split a vertical bar at a given moment to show how many ERROR logs, how many INFO logs, etc. are hit at that moment.
When I go to the "Visualize" tab and try "Add sub-buckets" with "split bars" on the X-axis and sub-aggregation="Terms", I cannot see the field "log-level" under the selectable "Field" options.
Could you please help me to split the bars based on log-level?
Thanks.
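A common reason a field is missing from the Terms dropdown is that it was mapped as analyzed text, which is not aggregatable; inspecting the mapping shows whether a keyword (or not_analyzed) variant of the field exists to aggregate on instead. A sketch against the host used above:
curl -s 'localhost:9200/logstash-*/_mapping?pretty' | grep -A 3 '"log-level"'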
Murat
(335 rep)
Jan 9, 2017, 11:17 AM
• Last activity: Mar 21, 2018, 11:47 AM
2
votes
0
answers
2223
views
Rsyslog doesn't send logs to logstash server on port
I have two rsyslog clients and one rsyslog server. When I try logger -p local1.notice SOMETHING, I don't see any information on the server from one client, but from the other I see logstash output.
I have copied rsyslog.conf; it's identical on these two clients. What's wrong?
When I test the connection via telnet x.x.x.x 10514 from the problem machine, the telnet connection is OK and I see tcpdump logs. And when I change the logstash TCP listen port to 514, everything works properly.
When I do logger -p on the problem client, I don't see anything in tcpdump -i any on it.
But if I do the same thing on the "OK" client, I see outgoing traffic via tcpdump to port 10514 (or whatever port is configured).
When I change the logstash port to 514, the problem is solved.
[root@centos-25 ~]# cat /etc/rsyslog.conf
# rsyslog configuration file
# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html
#### MODULES ####
# The imjournal module bellow is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
#$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark # provides --MARK-- message capability
# Provides UDP syslog reception
#$ModLoad imudp
#$UDPServerRun 514
# Provides TCP syslog reception
#$ModLoad imtcp
#$InputTCPServerRun 514
#### GLOBAL DIRECTIVES ####
# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog
# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
$OmitLocalLogging on
# File to store the position in the journal
$IMJournalStateFile imjournal.state
#### RULES ####
$template DynFile,"/var/log/kibana/system-%HOSTNAME%.log"
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none ?DynFile
# The authpriv file has restricted access.
authpriv.* /var/log/secure
# Log all the mail messages in one place.
mail.* -/var/log/maillog
# Log cron stuff
cron.* /var/log/cron
# Everybody gets emergency messages
*.emerg :omusrmsg:*
# Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler
# Save boot messages also to boot.log
local7.* /var/log/boot.log
# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList # run asynchronously
#$ActionResumeRetryCount -1 # infinite retries if host is down
#### Logstash #####
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
local1.notice @@192.168.253.26:10514;json-template
# ### end of the forwarding rule ###
$LocalHostName centos-25
# Include /var/log/dmesg for logging
$InputFileName /var/log/dmesg
$InputFileTag dmesg:
$InputFileStateFile stat-file1
$InputFileSeverity notice
$InputFileFacility local1
$InputRunFileMonitor
# end of include dmesg
This is a test from the OK client: logger -p local1.notice "test from problem centos-26"
Logstash output:
{
"@timestamp" => "2017-07-04T10:43:55.121Z",
"@version" => "1",
"message" => "Test From Centos-26",
"sysloghost" => "centos-26",
"severity" => "notice",
"facility" => "local1",
"programname" => "root",
"procid" => "-",
"host" => "192.168.253.26",
"port" => 40962,
"type" => "rsyslog"
}
And this is from the problem node: logger -p local1.notice "test from problem centos-25"
Nothing in the output.
rsyslog.conf is identical on these clients.
This is from the OK client:
# rsyslog configuration file
# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html
#### MODULES ####
# The imjournal module bellow is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
#$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark # provides --MARK-- message capability
# Provides UDP syslog reception
#$ModLoad imudp
#$UDPServerRun 514
# Provides TCP syslog reception
#$ModLoad imtcp
#$InputTCPServerRun 514
#### GLOBAL DIRECTIVES ####
# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog
# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
$OmitLocalLogging on
# File to store the position in the journal
$IMJournalStateFile imjournal.state
#### RULES ####
$template DynFile,"/var/log/kibana/system-%HOSTNAME%.log"
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none ?DynFile
# The authpriv file has restricted access.
authpriv.* /var/log/secure
# Log all the mail messages in one place.
mail.* -/var/log/maillog
# Log cron stuff
cron.* /var/log/cron
# Everybody gets emergency messages
*.emerg :omusrmsg:*
# Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler
# Save boot messages also to boot.log
local7.* /var/log/boot.log
# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList # run asynchronously
#$ActionResumeRetryCount -1 # infinite retries if host is down
#### Logstash #####
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
local1.notice @@192.168.253.26:10514;json-template
# ### end of the forwarding rule ###
$LocalHostName centos-26
# Include /var/log/dmesg for logging
$InputFileName /var/log/dmesg
$InputFileTag dmesg:
$InputFileStateFile stat-file1
$InputFileSeverity notice
$InputFileFacility local1
$InputRunFileMonitor
# end of include dmesg
Vladimir Fomin
(257 rep)
Jul 4, 2017, 10:31 AM
• Last activity: Mar 15, 2018, 08:08 AM
0
votes
2
answers
758
views
How to configure Simple Event Correlator (SEC) to send info about mail delivery failure
My log file contains the following 3 log entries:
2017-11-16 15:50:45 1eFLV7-0003so-Cd R=1eFLV7-0003sZ-4v U=Debian-exim P=local S=1853 T="Mail delivery failed: returning message to sender" from
2017-11-16 15:50:45 1eFLV7-0003so-Cd => admins@xxx.com R=dnslookup T=remote_smtp H=smtp-51.xxx.com [xxx.xx.xx.xx] X=TLS1.2:DHE_RSA_AES_128_CBC_SHA1:128
2017-11-16 15:50:45 1eFLV7-0003so-Cd Completed
I want to have an email sent to me when an entry matching "Mail delivery failed*admins@xxx.com" appears in the log file.
How can I achieve this?
Maybe SEC - Simple Event Correlator can help me?
But the configuration (pattern) below is not working for me.
type=SingleWithThreshold
ptype=RegExp
pattern=Mail delivery failed: returning message to sender*admins@xxx.com
desc=Problem with mail admin@xxx.com
action=pipe '%s' /usr/bin/mail -s 'ERROR SEND MAIL' me@xxx.com
window=1
thresh=1
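Two details worth noting: with ptype=RegExp the * is a regex quantifier rather than a shell glob, so failed*admins never matches; and the "Mail delivery failed" text and the admins@xxx.com address sit on different log lines, so a single-line pattern can never see both (correlating them would need the message ID, e.g. 1eFLV7-0003so-Cd). A minimal sketch of a rule that fires on the failure line alone, following the same syntax as above:
type=Single
ptype=RegExp
pattern=Mail delivery failed: returning message to sender
desc=Mail delivery failure detected
action=pipe '%s' /usr/bin/mail -s 'ERROR SEND MAIL' me@xxx.com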
debek
(237 rep)
Dec 28, 2017, 08:56 AM
• Last activity: Jan 1, 2018, 02:58 PM
1
votes
0
answers
1112
views
logstash: Trying to extract substrings from path
I'm trying to extract substrings from my path field in my logstash config.
The 'path' field looks like this:
/storage/logs/deployment/servers/hostname.example.com/server.log
Inside a filter section I have this:
ruby {
code => "event.set('log_filename',
event.get('path').split('/').last)"
}
This works fine: I get a new field called 'log_filename'.
However, I'm also interested in the server name (hostname.example.com), so I tried this:
ruby {
code => "event.set('log_filename', event.get('path').split('/').[-1]) event.set('server_name', event.get('path').split('/').[-2])"
}
This does not work at all. I don't get any errors in my logstash log, but no logstash data is seen.
I'm after the filename and the field before it, which represents the host it came from.
Is there something wrong with my syntax?
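For comparison, Ruby's index operator attaches without the dot (parts[-1], not parts.[-1]), so a two-field version of the working filter could be sketched like this, following the same event.set pattern as above:
ruby {
  code => "
    parts = event.get('path').split('/')
    event.set('log_filename', parts[-1])   # server.log
    event.set('server_name',  parts[-2])   # hostname.example.com
  "
}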
Aditya K
(2260 rep)
Jul 7, 2017, 11:44 AM
• Last activity: Jul 7, 2017, 11:47 AM