
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
1 answer
4369 views
No valid OpenPGP data found - Elasticsearch wget
I am trying to install elasticsearch on Ubuntu 20.04, but I am getting the following error:
home@VirtualBox$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch  | sudo apt-key add -
gpg: no valid OpenPGP data found.
I also tried the following with no luck:
VirtualBox:~$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch  -O mykey
VirtualBox:~$ sudo apt-key add <<< mykey
[sudo] password for VirtualBox: 
gpg: no valid OpenPGP data found.
I already updated Ubuntu packages:
sudo apt-get update
How could I solve this issue? Thanks in advance
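A hedged sketch of a likely fix: the second attempt fails because <<< mykey feeds apt-key the literal string "mykey" rather than the downloaded file. Since apt-key is also deprecated on 20.04, one alternative is the keyring approach (the 7.x repo line below is an assumption about the targeted Elasticsearch version):
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch \
  | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main" \
  | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
If the key downloads as empty (which also produces "no valid OpenPGP data found"), dropping -q from wget will surface any network or proxy error that the quiet flag hides.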
John Barton (101 rep)
Jan 31, 2021, 06:43 AM • Last activity: May 10, 2025, 08:02 PM
0 votes
0 answers
25 views
Elastic inside docker not running
I am working to get Elasticsearch up and running in a Docker container. My AWS instance has 16 GB of RAM & 10 GB of memory. Whenever I try to run the image, it gives me the error below. My understanding is that this is because the vm.max_map_count parameter is too low. Can someone suggest where I can find this parameter? If it's inside the container, how can I even edit it under /etc/sysctl.conf?
URL - https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_linux
{"@timestamp":"2025-02-07T12:47:07.365Z", "log.level":"ERROR", "message":"node validation exception\n bootstrap checks failed. You must address the points described in the following lines before starting Elasticsearch. For more information see [https://www.elastic.co/guide/en/elasticsearch/reference/8.17/bootstrap-checks.html]\nbootstrap check failure of : max virtual memory areas vm.max_map_count is too low, increase to at least ; for more information see [https://www.elastic.co/guide/en/elasticsearch/reference/8.17/bootstrap-checks-max-map-count.html] ", "ecs.version": "1.2.0", "service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch", "elasticsearch.node.name":"592a6fd6de3b","elasticsearch.cluster.name":"docker-cluster"}
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log
{"@timestamp":"2025-02-07T12:47:07.372Z", "log.level": "INFO", "message":"stopping ...", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch-shutdown","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"592a6fd6de3b","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2025-02-07T12:47:07.390Z", "log.level": "INFO", "message":"stopped", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch-shutdown","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"592a6fd6de3b","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2025-02-07T12:47:07.390Z", "log.level": "INFO", "message":"closing ...", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch-shutdown","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"592a6fd6de3b","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2025-02-07T12:47:07.401Z", "log.level": "INFO", "message":"closed", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch-shutdown","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"592a6fd6de3b","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2025-02-07T12:47:07.402Z", "log.level": "INFO", "message":"Native controller process has stopped - no new native processes can be started", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"ml-cpp-log-tail-thread","log.logger":"org.elasticsearch.xpack.ml.process.NativeController","elasticsearch.node.name":"592a6fd6de3b","elasticsearch.cluster.name":"docker-cluster"}
ERROR: Elasticsearch died while starting up, with exit code 78
Thanks, Piyush
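A hedged sketch of the usual fix: vm.max_map_count is a kernel parameter of the Docker host, not a file inside the container, so it has to be raised on the AWS instance itself. Elastic's Docker guide recommends 262144:
# run on the AWS host, not inside the container
sudo sysctl -w vm.max_map_count=262144
# persist the setting across reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf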
Piyush Nikhade (11 rep)
Feb 7, 2025, 02:09 PM • Last activity: Feb 7, 2025, 02:14 PM
0 votes
0 answers
32 views
Cannot change ownership of elasticsearch directory in colab
I was trying to install Elasticsearch in Google Colab, and it seems that to run the service you need to change the ownership to a user that is **not root**. I tried the chown command for this and then did an ls -l, but I don't see the ownership change. In fact, when I use the verbose option with chown I can see the logs showing the permission change from root:root to daemon:daemon.
!chown -R -v daemon:daemon elasticsearch-8.15.0/
Following is a snippet of the output illustrating the above:
changed ownership of 'elasticsearch-8.15.0/lib/elasticsearch-8.15.0.jar' from root:root to daemon:daemon
changed ownership of 'elasticsearch-8.15.0/lib/elasticsearch-preallocate-8.15.0.jar' from root:root to daemon:daemon
changed ownership of 'elasticsearch-8.15.0/lib/elasticsearch-x-content-8.15.0.jar' from root:root to daemon:daemon
changed ownership of 'elasticsearch-8.15.0/lib/elasticsearch-lz4-8.15.0.jar' from root:root to daemon:daemon
changed ownership of 'elasticsearch-8.15.0/lib/elasticsearch-cli-8.15.0.jar' from root:root to daemon:daemon
changed ownership of 'elasticsearch-8.15.0/lib/elasticsearch-simdvec-8.15.0.jar' from root:root to daemon:daemon
changed ownership of 'elasticsearch-8.15.0/lib/elasticsearch-native-8.15.0.jar' from root:root to daemon:daemon
changed ownership of 'elasticsearch-8.15.0/lib/elasticsearch-core-8.15.0.jar' from root:root to daemon:daemon
changed ownership of 'elasticsearch-8.15.0/lib/elasticsearch-logging-8.15.0.jar' from root:root to daemon:daemon
changed ownership of 'elasticsearch-8.15.0/lib/elasticsearch-secure-sm-8.15.0.jar' from root:root to daemon:daemon
changed ownership of 'elasticsearch-8.15.0/lib/elasticsearch-geo-8.15.0.jar' from root:root to daemon:daemon
changed ownership of 'elasticsearch-8.15.0/lib/elasticsearch-plugin-analysis-api-8.15.0.jar' from root:root to daemon:daemon
changed ownership of 'elasticsearch-8.15.0/lib/elasticsearch-plugin-api-8.15.0.jar' from root:root to daemon:daemon
changed ownership of 'elasticsearch-8.15.0/lib/elasticsearch-grok-8.15.0.jar' from root:root to daemon:daemon
changed ownership of 'elasticsearch-8.15.0/lib/elasticsearch-tdigest-8.15.0.jar' from root:root to daemon:daemon
But doing an ls -l still shows root as the owner and group.
!ls -l elasticsearch-8.15.0/
total 2273
drwx------  2 root root    4096 Aug 11 13:44 bin
drwx------  3 root root    4096 Aug 11 13:45 config
drwx------  8 root root    4096 Aug  5 10:11 jdk
drwx------  6 root root    4096 Aug  5 10:11 lib
-rwx------  1 root root    3860 Aug  5 10:04 LICENSE.txt
drwx------  2 root root    4096 Aug  5 10:07 logs
drwx------ 83 root root    4096 Aug  5 10:12 modules
-rwx------  1 root root 2285006 Aug  5 10:07 NOTICE.txt
drwx------  2 root root    4096 Aug  5 10:07 plugins
-rwx------  1 root root    9111 Aug  5 10:04 README.asciidoc
PS - Since this is a Colab question as well as a Linux question, I wasn't sure whether to ask this here or on the data science forum (https://datascience.stackexchange.com/questions/tagged/colab). Please let me know if this is the wrong place.
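A hedged diagnostic sketch rather than a fix (the cause isn't obvious from the question alone): ask the kernel directly for the ownership it records, to rule out ls formatting quirks or a stale cell:
!stat -c '%U:%G %n' elasticsearch-8.15.0
!id daemon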
CS8867 (1 rep)
Aug 12, 2024, 03:10 PM
0 votes
0 answers
63 views
Kibana service on CentOS 7 server virtual machine failed to start with code=exited, status=1/FAILURE
I'm trying to set up Elasticsearch and Kibana on my local CentOS 7 virtual machine. I successfully installed ES and Kibana; Elasticsearch is working and accessible at 127.0.0.1:9200, but the Kibana service fails to start. Here's my Kibana configuration [ **/etc/kibana/kibana.yml** ]:
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
Here's what I get when I run **systemctl status kibana**:
● kibana.service - Kibana
   Loaded: loaded (/usr/lib/systemd/system/kibana.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Mon 2024-08-05 01:46:45 EDT; 18min ago
     Docs: https://www.elastic.co
  Process: 4345 ExecStart=/usr/share/kibana/bin/kibana (code=exited, status=1/FAILURE)
 Main PID: 4345 (code=exited, status=1/FAILURE)

Aug 05 01:46:41 localhost.localdomain systemd: Unit kibana.service entered failed state.
Aug 05 01:46:41 localhost.localdomain systemd: kibana.service failed.
Aug 05 01:46:45 localhost.localdomain systemd: kibana.service holdoff time over, scheduling restart.
Aug 05 01:46:45 localhost.localdomain systemd: Stopped Kibana.
Aug 05 01:46:45 localhost.localdomain systemd: start request repeated too quickly for kibana.service
Aug 05 01:46:45 localhost.localdomain systemd: Failed to start Kibana.
Aug 05 01:46:45 localhost.localdomain systemd: Unit kibana.service entered failed state.
Aug 05 01:46:45 localhost.localdomain systemd: kibana.service failed.
Here's what I tried:
1. Changed the host address to my local IP and also to 127.0.0.1 and localhost
2. Created a custom log file at /usr/share/kibana, changed its ownership to the kibana user, and pointed the Kibana configuration at that log file
But no success. What might be wrong?
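A hedged first step rather than an answer: systemctl status only shows that the process exited; the actual startup error usually lands in the journal or in Kibana's own log, and running the binary in the foreground as the kibana user tends to print it directly. A sketch:
journalctl -u kibana -n 50 --no-pager
sudo -u kibana /usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml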
Abdul Rehman (143 rep)
Aug 5, 2024, 06:08 AM
0 votes
1 answer
94 views
How to set the IP of a service within a container? Static IP in the default network
With an IP address of a containerized Elasticsearch taken from the output of docker inspect, I successfully called the Elasticsearch function in a Jupyter notebook running inside another container. Both are run by docker compose up. But on my end that IP address changes across runs (e.g. docker compose up/down). _Is it possible to set the IP?_ e.g. by adding something to the docker-compose.yml or the Dockerfile of elasticsearch used in the build context?
docker inspect *composed-container-name-here*
"NetworkSettings": {
    "Bridge": "",
    ...
    "Ports": {
        "9200/tcp": null,
        "9300/tcp": null
    },
    ...
    "IPAddress": "",
    "Networks": {
        "*composed-container-name-here*_default": {
            "IPAMConfig": null,
            ...
            "Gateway": "172.X.X.1",
            "IPAddress": "172.X.X.3",
        }
I tried subnetting in docker-compose.yml, but it returned:
Error response from daemon: user specified IP address is supported only when connecting to networks with user configured subnets.
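A hedged sketch of the usual Compose answer, which is also what the daemon error hints at: a fixed address requires a user-configured subnet, so declare a network with IPAM settings and pin the service on it (the subnet and address below are example values):
# docker-compose.yml (sketch; network name, subnet and address are examples)
services:
  elasticsearch:
    networks:
      esnet:
        ipv4_address: 172.28.0.3
networks:
  esnet:
    ipam:
      config:
        - subnet: 172.28.0.0/16
Alternatively, fixed IPs can be skipped entirely: Compose's built-in DNS resolves service names, so the notebook container can reach http://elasticsearch:9200 regardless of which address the container gets.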
Johan (439 rep)
Mar 21, 2024, 05:14 PM • Last activity: Mar 21, 2024, 06:12 PM
0 votes
0 answers
43 views
elasticsearch cannot read certificate file - linux file permissions
I generated a certificate file with certbot. It is placed in /etc/letsencrypt/.... I created a group called elk, added the elasticsearch user to it, recursively set it as the owning group for /etc/letsencrypt, and recursively set the permissions to 770. When I start Elasticsearch via systemctl start elasticsearch.service, it is not able to read the file:
`Caused by: java.security.AccessControlException: access denied ("java.io.FilePermission" "/etc/letsencrypt/live//fullchain.pem" "read") `
Why is that? What strategy would you recommend to be able to use the same certificate for Elasticsearch and Kibana?
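A hedged strategy sketch: the AccessControlException comes from Elasticsearch's Java security manager, which can refuse reads outside its own configuration tree even when Unix permissions allow them, so loosening /etc/letsencrypt may never be enough. One common pattern is to copy the files into each service's config directory on every renewal with a certbot deploy hook (the script name is hypothetical; RENEWED_LINEAGE is set by certbot for deploy hooks):
#!/bin/bash
# /etc/letsencrypt/renewal-hooks/deploy/copy-to-elk.sh (hypothetical name)
install -o root -g elk -m 0640 "$RENEWED_LINEAGE/fullchain.pem" /etc/elasticsearch/certs/
install -o root -g elk -m 0640 "$RENEWED_LINEAGE/privkey.pem" /etc/elasticsearch/certs/
systemctl restart elasticsearch kibana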
Vivere (203 rep)
Mar 19, 2024, 05:37 PM
0 votes
1 answer
388 views
Elasticsearch install on Raspberry Pi
I have installed the latest version of Elasticsearch on my Raspberry Pi by following https://www.elastic.co/guide/en/elasticsearch/reference/current/deb.html (**I am not sure if Elasticsearch is even running correctly**). When I curl it, it asks me for a password [screenshot]. Here is the service status of my Elasticsearch [screenshot]. I even disabled Elasticsearch security in the yaml file:
#http.enabled: false

xpack.security.enabled: false
xpack.security.http.ssl.enabled: false
xpack.security.transport.ssl.enabled: false
When I restarted Elasticsearch it still asks me for a password. Is there a way to disable authentication so that I can access Elasticsearch with curl directly? Below is my Elasticsearch service status [screenshot], and below is the process [screenshot].
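A hedged check rather than an answer: with xpack.security.enabled: false, Elasticsearch serves plain HTTP, so an https:// URL (or a shell history entry pointing at https) would still appear to demand credentials. After editing the yaml, restart and curl the http endpoint explicitly:
sudo systemctl restart elasticsearch
curl http://localhost:9200/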
arpit joshi (445 rep)
Nov 5, 2023, 03:01 AM • Last activity: Nov 8, 2023, 05:05 PM
0 votes
0 answers
184 views
run put queries on Elasticsearch host from ansible host
When I run any GET query, it runs fine. For instance:
- name: run curl query on ES host
  uri:
    url: "http://localhost:9200"
    method: GET
    return_content: yes
    url_username: some_elastic_user
    url_password: elastic_pass
  register: response

- debug:
    var: response.content
RESPONSE:
# ansible-playbook -i inv.txt getquery.yml

PLAY [es] ********************************************************************************************************************************************

TASK [esquery : run curl query on ES host] *****************************************************************************************************
ok: [es1]

TASK [esquery : debug] *************************************************************************************************************************
ok: [es1] => {
    "response.content": {
        "cluster_name": "elastic",
        "cluster_uuid": "evEg5b8aQiW-ewNdbYG5-A",
        "name": "es1",
        "tagline": "You Know, for Search",
        "version": {
            "build_date": "2020-06-14T19:35:50.234439Z",
            "build_flavor": "default",
            "build_hash": "757314695644ea9a1dc2fecd26d1a43856725e65",
            "build_snapshot": false,
            "build_type": "tar",
            "lucene_version": "8.5.1",
            "minimum_index_compatibility_version": "6.0.0-beta1",
            "minimum_wire_compatibility_version": "6.8.0",
            "number": "7.8.0"
        }
    }
}
However, when I run a PUT query like the one below, it gives ERROR: "response.content": "VARIABLE IS NOT DEFINED!" Here's the playbook:
- name: set elasticsearch index settings
  uri:
    url: "http://localhost:9200/*/_settings"
    method: PUT
    headers:
      Content-Type: "application/json"
    body_format: json
    body:
      index:
        auto_expand_replicas: "0-all"
    url_username: some_elastic_user
    url_password: elastic_pass
  register: response

- debug:
    var: response.content
Here's the error:
# ansible-playbook -i inv.txt putquery.yml

PLAY [es] ********************************************************************************************************************************************

TASK [esquery : set elasticsearch index settings] ***********************************************************************************************
ok: [es1]

TASK [esquery : debug] *************************************************************************************************************************
ok: [es1] => {
    "response.content": "VARIABLE IS NOT DEFINED!"
}
I'm not sure what variable it is referring to, and not sure from the output whether the query ran or not.
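A hedged observation: the failing PUT task, unlike the working GET task, never sets return_content: yes, and the uri module only populates response.content when that flag is on (the ok: [es1] status suggests the request itself succeeded). A sketch of the same task with the flag added:
- name: set elasticsearch index settings
  uri:
    url: "http://localhost:9200/*/_settings"
    method: PUT
    headers:
      Content-Type: "application/json"
    body_format: json
    body:
      index:
        auto_expand_replicas: "0-all"
    return_content: yes   # without this, response.content is never defined
    url_username: some_elastic_user
    url_password: elastic_pass
  register: response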
Sollosa (1993 rep)
Aug 4, 2023, 05:54 AM
0 votes
1 answer
93 views
Capture network interface device name with Packetbeat
With Packetbeat on Linux, the packetbeat.interfaces.device: any configuration captures all messages sent or received by the server where Packetbeat is installed. I want to distinguish captured messages by the interface over which they were exchanged, e.g. which messages went over eth0 and which over wlan0. In the logs, I couldn't find a field with this information. Is there a way to include the interface device name in Packetbeat logs using some configuration? If not, are there alternative ways to capture the same information with the Packetbeat logs?
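One hedged workaround sketch, since device: any discards the per-interface origin at capture time: run one Packetbeat instance per interface and tag its events with Beats' add_fields processor (the field name network.interface below is a naming choice for illustration, not a Packetbeat built-in):
# per-instance packetbeat.yml for the eth0 capture
packetbeat.interfaces.device: eth0
processors:
  - add_fields:
      target: network
      fields:
        interface: eth0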
Tabish Mir (133 rep)
Jul 20, 2023, 10:07 PM • Last activity: Jul 20, 2023, 11:28 PM
0 votes
1 answer
7218 views
keytool error: java.io.IOException: Invalid keystore format
I have a 3-node ELK stack (Elasticsearch v7.17). After a reboot, the Kibana web interface reports an error "Kibana server is not ready yet". The SSL certs were expired, so I re-created them (for the ELK CA, all 3 nodes, Kibana, and Logstash). However, the error persists, and /var/log/kibana/kibana.log reports an error:
{"type":"log","@timestamp":"2023-03-29T17:19:39+02:00","tags":["error","elasticsearch-service"],"pid":8271,"message":"Unable to retrieve version information from Elasticsearch nodes. security_exception: [security_exception] Reason: unable to authenticate user [kibana] for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]"}
The command /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive -v results in this output:
Running with configuration path: /etc/elasticsearch

Testing if bootstrap password is valid for http://10.0.0.1:9200/_security/_authenticate?pretty
{
  "username" : "elastic",
  "roles" : [ "superuser" ],
  "full_name" : null,
  "email" : null,
  "metadata" : { "_reserved" : true },
  "enabled" : true,
  "authentication_realm" : { "name" : "reserved", "type" : "reserved" },
  "lookup_realm" : { "name" : "reserved", "type" : "reserved" },
  "authentication_type" : "realm"
}

Checking cluster health: http://10.0.0.1:9200/_cluster/health?pretty
{
  "error" : {
    "root_cause" : [ { "type" : "master_not_discovered_exception", "reason" : null } ],
    "type" : "master_not_discovered_exception",
    "reason" : null
  },
  "status" : 503
}
Failed to determine the health of the cluster running at http://10.0.0.1:9200
Unexpected response code from calling GET http://10.0.0.1:9200/_cluster/health?pretty
Cause: master_not_discovered_exception
The Elasticsearch logs say:
[2023-03-30T13:50:58,432][WARN ][o.e.d.PeerFinder ] [node1] address [10.0.0.2:9300], node [null], requesting [false] connection failed: [][10.0.0.2:9300] general node connection failure: handshake failed because connection reset
[2023-03-30T13:50:58,432][WARN ][o.e.t.TcpTransport ] [node1] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.0.0.1:60126, remoteAddress=node2.example.org/10.0.0.2:9300, profile=default}], closing connection io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors
No password was changed. The problem appears to be with the new SSL certificates. Therefore, I have created a new keystore via the command:
/usr/share/elasticsearch/bin/elasticsearch-keystore create
and I'm trying to add the CA certificate (and then the others) to it:
keytool -importcert -trustcacerts -noprompt -keystore /etc/elasticsearch/elasticsearch.keystore -file /etc/elasticsearch/certs/ca.crt
However, I get the following error:
keytool error: java.io.IOException: Invalid keystore format
I have converted the CA cert into PKCS12 and tried to import it in that format (ca.p12), since the keystore is defined as type PKCS12 in my config, but I get the same error. What's wrong?
Excerpts of the /etc/elasticsearch/elasticsearch.yml file:
xpack.security.transport.ssl.keystore.path: elasticsearch.keystore
xpack.security.transport.ssl.keystore.type: PKCS12
xpack.security.transport.ssl.truststore.path: elasticsearch.keystore
xpack.security.transport.ssl.truststore.type: PKCS12
xpack.security.transport.ssl.verification_mode: certificate
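A hedged sketch of the likely mismatch: the file produced by elasticsearch-keystore create is Elasticsearch's secure-settings store, not a Java/PKCS12 keystore, so keytool will always reject it regardless of the certificate format. One approach is a dedicated truststore file for the transport layer (path, alias, and password below are example values), with the yml's truststore.path pointed at it:
keytool -importcert -trustcacerts -noprompt \
  -keystore /etc/elasticsearch/certs/truststore.p12 -storetype PKCS12 \
  -storepass changeit -alias elk-ca -file /etc/elasticsearch/certs/ca.crt
Alternatively, Elasticsearch accepts a PEM CA directly via xpack.security.transport.ssl.certificate_authorities, which sidesteps keytool entirely.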
dr_ (32068 rep)
Mar 30, 2023, 08:21 AM • Last activity: Mar 31, 2023, 03:20 PM
0 votes
2 answers
70 views
bash: filter except the latest n records
I'm creating a small script that will delete indexes on an Elasticsearch cluster to prevent it from filling up all the storage with logstash data. I have a list of records, and I would like to keep the latest n records (for example 7) and delete all the others. I can get the list of the indexes with curl:
drakaris:~/ # curl -sL localhost:9200/_cat/indices/logstash-* | awk '{print $3;}' | sort
logstash-2022.12.30
logstash-2022.12.31
logstash-2023.01.01
logstash-2023.01.02
logstash-2023.01.03
logstash-2023.01.04
logstash-2023.01.05
logstash-2023.01.06
logstash-2023.01.07
logstash-2023.01.08
logstash-2023.01.09
In my script I would like to keep only the latest 7 indexes and delete all the others (logstash-2022.12.30, logstash-2022.12.31, logstash-2023.01.01, logstash-2023.01.02) using "curl -XDELETE localhost:9200/index". How can I get these records from an array like that in bash? Thanks
---
[EDIT] I solved it this way, in case someone finds it useful:
RETENTION=7
# count how many logstash-* indexes currently exist
nbk=$(curl -sL localhost:9200/_cat/indices/logstash-* | awk '{print $3;}' | wc -l)
if [ "$nbk" -gt "$RETENTION" ]; then
    echo -e "======== Delete obsolete indexes (retention: $RETENTION)"
    ntodel=$((nbk - RETENTION))
    # sort newest-first, then take the last $ntodel entries (the oldest) for deletion
    for efile in $(curl -sL localhost:9200/_cat/indices/logstash-* | awk '{print $3;}' | sort -r | /usr/bin/tail -n "$ntodel"); do
        curl -XDELETE "localhost:9200/$efile"
        sleep 10
    done
fi
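A hedged shorter variant of the same idea: GNU head -n -K prints everything except the last K lines, so sorting oldest-first and chopping off the newest $RETENTION entries yields the deletion list in one pipeline:
# all indexes except the newest $RETENTION, oldest first
for efile in $(curl -sL localhost:9200/_cat/indices/logstash-* | awk '{print $3}' | sort | head -n -"$RETENTION"); do
    curl -XDELETE "localhost:9200/$efile"
done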
Tasslehoff Burrfoot (1 rep)
Jan 9, 2023, 02:58 PM • Last activity: Jan 10, 2023, 09:27 AM
0 votes
0 answers
1083 views
Unable to send logs from rsyslog to logstash and elasticsearch
I am using Ubuntu and I installed the ELK stack version 8.5 on the same machine. I did the necessary configuration for each of the services (Logstash, Elasticsearch, Kibana), and I configured rsyslog to send logs to Logstash (defining an index to be created each day) and from Logstash to Elasticsearch. The issue is that I can't see any logs in Elasticsearch when rsyslog is the input to Logstash, whereas when I use the file input with a file's path it works (I can see the index in Elasticsearch and in Kibana too), though I also realised it doesn't show for some files. So it works for some files and doesn't work for others. What could the issue be?
rsyslog.conf file:
#################
#### MODULES ####
#################

module(load="imuxsock") # provides support for local system logging
#module(load="immark")  # provides --MARK-- message capability

# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")

# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")

# provides kernel logging support and enable non-kernel klog messages
module(load="imklog" permitnonkernelfacility="on")
Configuration files in the /etc/rsyslog.d directory
01-json-template.conf file

template(name="json-template" type="list") {
    constant(value="{")
      constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
      constant(value="\",\"@version\":\"1")
      constant(value="\",\"message\":\"")     property(name="msg" format="json")
      constant(value="\",\"sysloghost\":\"")  property(name="hostname")
      constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
      constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
      constant(value="\",\"programname\":\"") property(name="programname")
      constant(value="\",\"procid\":\"")      property(name="procid")
    constant(value="\"}\n")
}
50-default.conf file
#  Default rules for rsyslog.
#
#                       For more information see rsyslog.conf(5) and /etc/rsyslog.conf

*.*                                                     @localhost:514

#
# First some standard log files.  Log by facility.
#
auth,authpriv.*                 /var/log/auth.log
*.*;auth,authpriv.none          -/var/log/syslog
#cron.*                         /var/log/cron.log
#daemon.*                       -/var/log/daemon.log
kern.*                          -/var/log/kern.log
#lpr.*                          -/var/log/lpr.log
mail.*                          -/var/log/mail.log
#user.*                         -/var/log/user.log

#
# Logging for the mail system.  Split it up so that
# it is easy to write scripts to parse these files.
#
#mail.info                      -/var/log/mail.info
#mail.warn                      -/var/log/mail.warn
mail.err                        /var/log/mail.err
60-output.conf file
# This line sends all lines to defined IP address at port 10514,
# using the "json-template" format template

*.*                                             @localhost:10514;json-template
logstash configuration file for rsyslog
input {
  udp {
    host => "localhost"
    port => 10514
    codec => "json"
    type => "rsyslog"
  }
}


output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "rsyslog-%{+YYYY.MM.dd}"
  }
}
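A hedged debugging sketch, not a diagnosis: one suspicious line is *.* @localhost:514 in 50-default.conf, which forwards every message back into rsyslog's own imudp/imtcp listeners on port 514 and can loop or drop traffic. Independent of that, a temporary stdout output in the Logstash pipeline shows whether events arrive at all (fire a test message on the host with logger "elk-pipeline-test"):
output {
  stdout { codec => rubydebug }   # temporary, for debugging only
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "rsyslog-%{+YYYY.MM.dd}"
  }
}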
Ngouaba Rosalie (31 rep)
Nov 29, 2022, 09:44 AM • Last activity: Nov 29, 2022, 02:12 PM
2 votes
0 answers
547 views
Is it safe to change read ahead setting on a live server
After going through Elasticsearch's documentation I realised that [the recommended read ahead value is 128KiB](https://www.elastic.co/guide/en/elasticsearch/reference/7.17/tune-for-search-speed.html#_avoid_page_cache_thrashing_by_using_modest_readahead_values_on_linux) while I am currently using 256KiB on a live server. It is only indexing data and not serving any production traffic at the moment. I am not sure whether changing it would improve performance, but I was planning to test that afterwards. As far as I understand it should be safe to do so without corrupting any data, but I can't find any info confirming that. Here is the configuration for the data drive (/dev/nvme0n1) that I got after running lsblk -o NAME,RA,MOUNTPOINT,TYPE,SIZE:
nvme0n1  256                        disk    1.8T
└─md2    256                        raid1   1.8T
  └─data 256 /var/lib/elasticsearch crypt   1.8T
This is how I would go about changing the values:
blockdev --setra 256 /dev/nvme0n1
blockdev --setra 256 /dev/md2
blockdev --setra 256 /dev/mapper/data
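A hedged note on units and safety: read-ahead only tunes page-cache prefetching, not anything written to disk, so changing it on a live device should not risk data; and blockdev --setra counts 512-byte sectors, so --setra 256 corresponds to the 128 KiB target. A sketch for checking both views of the setting:
blockdev --getra /dev/nvme0n1                 # reported in 512-byte sectors
cat /sys/block/nvme0n1/queue/read_ahead_kb    # the same knob, reported in KiB
blockdev --setra 256 /dev/nvme0n1             # 256 sectors = 128 KiB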
Nick Garlis (21 rep)
Nov 8, 2022, 08:03 AM • Last activity: Nov 8, 2022, 08:04 AM
0 votes
1 answer
60 views
I want to check for repeated records in column 2 and remove those lines in awk
I want to check for repeated records in column 2 and remove those lines in awk:
create a
delete a
create b
create c
delete c
create d
delete f
create f
create g
create h
Expected Output
create b
create d
create g
create h
I tried this awk command, but it gives the opposite of what I want, not the exact result (note: awk is not mandatory):
awk -F" " '{ if( (++count[$2]==2) ) print }'
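A hedged sketch of one standard approach: two passes over the file, first counting each column-2 value, then printing only the lines whose value occurred exactly once; on the sample above it yields exactly the expected output:
# first pass (NR==FNR) builds the counts, second pass prints unique values
awk 'NR==FNR { count[$2]++; next } count[$2] == 1' file file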
Nabob (15 rep)
Sep 18, 2022, 02:20 PM • Last activity: Sep 18, 2022, 03:01 PM
-1 votes
2 answers
34233 views
How do i completely uninstall ELK (Elasticsearch, Logstash, Kibana)?
I found on the internet that we have to uninstall each part of ELK one by one, i.e. uninstall stand-alone Kibana, Elasticsearch, and Logstash separately. Is there a command that removes them all at once instead of one by one? This is the package line I used in my sources.list:
deb https://artifacts.elastic.co/packages/6.x/apt stable main
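A hedged sketch: all three are separate .deb packages, but apt can purge them in a single command (adding --auto-remove also drops dependencies they pulled in):
sudo apt-get purge --auto-remove elasticsearch logstash kibana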
gagantous (225 rep)
Dec 16, 2017, 03:27 PM • Last activity: Mar 14, 2022, 10:55 PM
0 votes
1 answer
176 views
elasticsearch: On premise restore snapshot from aws s3
This could sound pretty straightforward. However, I've spent days looking on the web for a method to migrate a snapshot from AWS S3 to an on-premise Elasticsearch cluster. All the docs I've found mention how to achieve this for ES in the cloud, where the Kibana console is available; this doc, for instance, is an example. Can anyone advise me how I can achieve this? Noting that: a snapshot has been taken, and a repo already exists within an AWS S3 bucket. OS: CentOS 7
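A hedged sketch under assumptions (bucket and snapshot names below are placeholders, and the on-prem cluster can reach S3): the same S3 repository can be registered read-only from the on-prem cluster with curl alone, no Kibana console needed, once the repository-s3 plugin is installed:
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install repository-s3
/usr/share/elasticsearch/bin/elasticsearch-keystore add s3.client.default.access_key
/usr/share/elasticsearch/bin/elasticsearch-keystore add s3.client.default.secret_key
# restart Elasticsearch, then register the existing bucket read-only and restore:
curl -X PUT "localhost:9200/_snapshot/s3_repo" -H 'Content-Type: application/json' -d '{"type":"s3","settings":{"bucket":"my-bucket","readonly":true}}'
curl -X POST "localhost:9200/_snapshot/s3_repo/my_snapshot/_restore"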
Eng7 (1691 rep)
Apr 30, 2021, 11:36 AM • Last activity: Jun 2, 2021, 11:15 PM
0 votes
1 answer
681 views
HTTPD Redirect Rule as Proxy is giving me file not found error. How do I proxy to an external url?
I have a reverse proxy to AWS Elasticsearch. I am having issues using RewriteRule: no matter what I try, my URL is being interpreted as a file.
SSLProxyEngine On
ProxyRequests On
ProxyPreserveHost On
RewriteEngine On
RewriteRule /test-api https://vpc-cls-elasticsearch-test-tmqu2s2mcftvsuqe.amazonaws.com [P]
ProxyPassReverse /test-api https://vpc-cls-elasticsearch-test-tmqu2s2mcftvsuqe.amazonaws.com
Calling https://example.com/test-api always returns this error:
The requested URL /cls-api was not found on this server
How can I get this to work without the existence of an actual file on my server?
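A hedged sketch of an alternative: for a plain reverse proxy, mod_proxy's ProxyPass usually behaves more predictably than RewriteRule [P], and two of the existing directives may be working against it — ProxyRequests On turns Apache into a forward proxy, and ProxyPreserveHost On sends the client's Host header to an AWS endpoint that generally expects its own hostname:
SSLProxyEngine On
ProxyRequests Off          # Off for a reverse proxy
ProxyPreserveHost Off      # let the backend see its own hostname
ProxyPass        /test-api/ https://vpc-cls-elasticsearch-test-tmqu2s2mcftvsuqe.amazonaws.com/
ProxyPassReverse /test-api/ https://vpc-cls-elasticsearch-test-tmqu2s2mcftvsuqe.amazonaws.com/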
gerbdla (11 rep)
Mar 16, 2021, 03:59 PM • Last activity: Mar 17, 2021, 06:21 PM
0 votes
1 answer
799 views
Auditbeat exclude /usr/sbin/cron
I tried to exclude events from running cron jobs, which can be found with the KQL request:
auditd.summary.how : "/usr/sbin/cron"
My host does not run SELinux, so the rules I found (below) do not work:
-a never,user -F subj_type=crond_t
-a exit,never -F subj_type=crond_t
I tried this:
-a never,user -F exe=/usr/sbin/cron
Not working either. Thanks for the help.
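A hedged sketch of a different angle: instead of filtering at the auditd rule level, drop the events inside Auditbeat itself with a processor in auditbeat.yml (the field name is taken from the KQL query above):
processors:
  - drop_event:
      when:
        equals:
          auditd.summary.how: "/usr/sbin/cron"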
Inazo (101 rep)
Feb 22, 2021, 01:38 PM • Last activity: Feb 22, 2021, 03:14 PM
0 votes
0 answers
636 views
curl argument list too long error in Elasticsearch mapping and settings
mapping=$(</index_automation/${product}/mapping/Mapping/"$tempmappingfile")
settings=$(</index_automation/${product}/mapping/Mapping/"$tempsettingsfile")
dataraw="{${mapping},${settings}}"
creds=$(</index_automation/obj/creds.txt)
reindexfile="/index_automation/${product}/mapping/Mapping/reindex.txt"
In the above codebase I am reading the mapping and settings for a new index creation. For --data-raw "$dataraw" I am getting
curl: argument list too long
error. Can anyone help me?
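A hedged sketch of the usual fix: "argument list too long" is the kernel's execve limit on total argv size, so a large JSON body cannot be passed on the command line at all; pass it from a file instead (the index name below is a placeholder):
tmpbody=$(mktemp)
printf '%s' "$dataraw" > "$tmpbody"
# --data-binary @file sends the file contents verbatim as the request body
curl -X PUT "localhost:9200/myindex" -H 'Content-Type: application/json' \
     --data-binary @"$tmpbody"
rm -f "$tmpbody"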
CodeCracker (1 rep)
Jan 5, 2021, 07:08 AM • Last activity: Jan 5, 2021, 08:42 AM
0 votes
1 answer
5465 views
Kibana service won't start
I am using Manjaro and installed elasticsearch and kibana with:
yay -S elasticsearch kibana
Starting the elasticsearch service works well:
sudo systemctl start elasticsearch
I've configured kibana with the basic settings in /etc/kibana/kibana.yml:
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]
But running kibana always fails:
❯❯❯ systemctl status kibana                                                                                                               ✘ 7
● kibana.service - Kibana - dashboard for Elasticsearch
     Loaded: loaded (/usr/lib/systemd/system/kibana.service; enabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Fri 2020-11-13 12:10:13 CET; 5min ago
    Process: 1609 ExecStart=/usr/bin/node --max-old-space-size=512 /usr/share/kibana/src/cli --config=/etc/kibana/kibana.yml (code=exited, status=1/FAILURE)
   Main PID: 1609 (code=exited, status=1/FAILURE)

Nov 13 12:10:13 Trinity systemd: kibana.service: Scheduled restart job, restart counter is at 5.
Nov 13 12:10:13 Trinity systemd: Stopped Kibana - dashboard for Elasticsearch.
Nov 13 12:10:13 Trinity systemd: kibana.service: Start request repeated too quickly.
Nov 13 12:10:13 Trinity systemd: kibana.service: Failed with result 'exit-code'.
Nov 13 12:10:13 Trinity systemd: Failed to start Kibana - dashboard for Elasticsearch.
Maybe I am overlooking something. What should I do to start it properly?
journalctl -u kibana:
Nov 13 12:10:10 Trinity systemd: Started Kibana - dashboard for Elasticsearch.
Nov 13 12:10:10 Trinity node: Kibana does not support the current Node.js version v15.0.1. Please use Node.js v10.22.1.
Nov 13 12:10:10 Trinity systemd: kibana.service: Main process exited, code=exited, status=1/FAILURE
Nov 13 12:10:10 Trinity systemd: kibana.service: Failed with result 'exit-code'.
Nov 13 12:10:11 Trinity systemd: kibana.service: Scheduled restart job, restart counter is at 1.
Nov 13 12:10:11 Trinity systemd: Stopped Kibana - dashboard for Elasticsearch.
Nov 13 12:10:11 Trinity systemd: Started Kibana - dashboard for Elasticsearch.
Nov 13 12:10:11 Trinity node: Kibana does not support the current Node.js version v15.0.1. Please use Node.js v10.22.1.
Nov 13 12:10:11 Trinity systemd: kibana.service: Main process exited, code=exited, status=1/FAILURE
Nov 13 12:10:11 Trinity systemd: kibana.service: Failed with result 'exit-code'.
[the same start/fail cycle repeats with restart counters 2 through 4]
Nov 13 12:10:13 Trinity systemd: kibana.service: Scheduled restart job, restart counter is at 5.
Nov 13 12:10:13 Trinity systemd: Stopped Kibana - dashboard for Elasticsearch.
Nov 13 12:10:13 Trinity systemd: kibana.service: Start request repeated too quickly.
Nov 13 12:10:13 Trinity systemd: kibana.service: Failed with result 'exit-code'.
Nov 13 12:10:13 Trinity systemd: Failed to start Kibana - dashboard for Elasticsearch.
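A hedged reading of the journal: every restart dies on the same line, the packaged Kibana refusing the system Node v15.0.1 and asking for v10.22.1, so this looks like a Node version mismatch rather than a kibana.yml problem. One sketch, assuming a Node 10.22.1 install at /opt/node10 (hypothetical path), is a systemd drop-in that points the unit at it:
# create with: sudo systemctl edit kibana
[Service]
ExecStart=
ExecStart=/opt/node10/bin/node --max-old-space-size=512 /usr/share/kibana/src/cli --config=/etc/kibana/kibana.yml
Then reload and retry: sudo systemctl daemon-reload && sudo systemctl restart kibana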
betaros (103 rep)
Nov 13, 2020, 11:46 AM • Last activity: Nov 20, 2020, 01:18 PM