Logstash shuts down during clustering - elasticsearch

I have three 7.4.2 ELK nodes. I configured clustering on the first Elasticsearch/Logstash/Kibana node and restarted ELK on that node, which was successful. But once I configured clustering on the second ELK node, Logstash on the first node stopped automatically with the error below:
An unexpected error occurred! {:error=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError: Could not reach host Manticore::SocketException: Connection refused (Connection refused)>, :backtrace=>[
"/opt/****/****/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:293:in `perform_request_to_url'",
"/opt/****/****/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:278:in `block in perform_request'",
"/opt/****/****/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:373:in `with_connection'",
"/opt/****/****/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:277:in `perform_request'",
"/opt/****/****/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:285:in `block in Pool'",
"/opt/****/****/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:162:in `get'",
"/opt/****/****/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:378:in `get_xpack_info'",
"/opt/****/****/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/ilm.rb:57:in `ilm_ready?'",
"/opt/****/****/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/ilm.rb:28:in `ilm_in_use?'",
"/opt/****/****/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/common.rb:52:in `block in setup_after_successful_connection'"]}
There is also this error in the logs:
[2020-10-12T17:52:25,998][ERROR][logstash.outputs.elasticsearch][events] Failed to install template. {:message=>"Elasticsearch Unreachable: [http://...:9200/][Manticore::SocketException] Connection refused (Connection refused)", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :backtrace=>[
"logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:293:in `perform_request_to_url'",
"logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:278:in `block in perform_request'",
"logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:373:in `with_connection'",
"logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:277:in `perform_request'",
"logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:285:in `block in Pool'",
"logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:162:in `get'",
"logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:378:in `get_xpack_info'",
"logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/ilm.rb:57:in `ilm_ready?'",
"logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/ilm.rb:28:in `ilm_in_use?'",
"logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/template_manager.rb:14:in `install_template'",
"logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/common.rb:130:in `install_template'",
"logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/common.rb:51:in `block in setup_after_successful_connection'"]}
Any idea? Does it have something to do with ilm_enabled?
I made the clustering changes in elasticsearch.yml as below.
cluster.name: "elasticsearch"
node.name: "node-2"
node.data: true
path.logs: "/var/opt/logs/elasticsearch/"
path.data: "/var/lib/elasticsearch"
network.host: "**.**.**.**"
http.port: 9200
discovery.seed_hosts:
- "**.**.**.**"
- "**.**.**.**"
cluster.initial_master_nodes:
- "node-1"
- "node-2"
And I'm using the elasticsearch output plugin, where I added the cluster nodes as below.
elasticsearch {
hosts => ["**.**.**.**:9200","**.**.**.**:9200"]
document_id => "%{authsid}"
index => "dashboard_write"
script => "ctx._source.loginCount= params.event.get('loginCount');
ctx._source.contractName= params.event.get('contractName');
ctx._source.userName= params.event.get('userName');
ctx._source.sessionID= params.event.get('sessionID');
ctx._source.eventID= params.event.get('eventID');"
doc_as_upsert => "true"
action => "update"
ilm_enabled => false
}
I made the above changes in the first node's elasticsearch.yml and Logstash output.conf files and restarted the first node; it came up successfully. Then I made the same changes on the second node and restarted it, at which point Logstash on the first node went down automatically.

From my point of view, your Logstash configuration doesn't find the index template. You have ILM settings in the elasticsearch output plugin, but you didn't specify the index template, so Logstash tries to hit the wrong endpoint. That's why you get the error. Use something similar to the below:
elasticsearch {
hosts => ["localhost:9200"]
index => "index_name"
manage_template => true
template_overwrite => true
ilm_enabled => false
template_name => "template_name"
template => "path_to_template"
document_id => "document_id"
http_compression => true
}
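For reference, the file pointed to by `template =>` is a plain index template JSON. A minimal hypothetical sketch is below; the pattern, settings, and field names are placeholders, not taken from the question:

```json
{
  "index_patterns": ["index_name-*"],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "properties": {
      "userName": { "type": "keyword" },
      "loginCount": { "type": "long" }
    }
  }
}
```

With `manage_template => true` and `template_overwrite => true` as above, Logstash installs this file under the given `template_name` on startup instead of probing for ILM defaults.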

Related

Deleted logs are not rewritten to Elasticsearch

I'm using Logstash to read log files and send them to Elasticsearch. It works fine in streaming mode, creating a different index every day and writing logs in real time.
The problem is, yesterday at 3pm I accidentally deleted the index. It was recreated automatically and continued receiving logs. However, I have lost the logs from 12am - 3pm.
In order to rewrite the logs from the beginning, I deleted the sincedb file and also added ignore_older => 0 to the Logstash configuration. After that, I deleted the index again. But it continues streaming, ignoring the old data.
My current configuration of logstash:
input {
file {
path => ["/someDirectory/Logs/20221220-00001.log"]
start_position => "beginning"
tags => ["prod"]
ignore_older => 0
sincedb_path => "/dev/null"
type => "cowrie"
}
}
filter {
grok {
match => ["path", "/var/www/cap/cap-server/Logs/%{GREEDYDATA:index_name}" ]
}
}
output {
elasticsearch {
hosts => "IP:9200"
user => "elastic"
password => "xxxxxxxx"
index => "logstash-log-%{index_name}"
}
}
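For comparison, a file input that unconditionally re-reads from the beginning would look like the sketch below. Note that per the file input docs, ignore_older is a threshold in seconds, so ignore_older => 0 can end up excluding any file older than zero seconds, the opposite of re-reading old data; this sketch simply leaves it out:

```
input {
  file {
    path => ["/someDirectory/Logs/20221220-00001.log"]
    start_position => "beginning"   # only applies to files not yet tracked in sincedb
    sincedb_path => "/dev/null"     # never persist the read position, so every restart re-reads
    # ignore_older deliberately omitted
  }
}
```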
I would appreciate any help.
I'm also attaching Elasticsearch configuration:
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
discovery.type: single-node
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
#action.destructive_requires_name: true
Note that after all configuration changes, Logstash and Elasticsearch were restarted.

Logstash not performing task

I want some data in a PostgreSQL database to be indexed into an Elasticsearch index. To do so I decided to use Logstash.
I installed Logstash and JDBC.
My config is the following:
input {
jdbc {
jdbc_connection_string => "jdbc:postgresql://localhost:5432/product_data"
jdbc_user => "postgres"
jdbc_password => "<my_password>"
jdbc_driver_class => "org.postgresql.Driver"
schedule => "* * * * *" # cronjob schedule format (see "Helpful Links")
statement => "SELECT * FROM public.vendor_product" # the PG command for retrieving the documents IMPORTANT: no semicolon!
jdbc_paging_enabled => "true"
jdbc_page_size => "300"
}
}
output {
# used to output the values in the terminal (DEBUGGING)
# once everything is working, comment out this line
stdout { codec => "json" }
# used to output the values into elasticsearch
elasticsearch {
hosts => ["localhost:9200"]
index => "vendorproduct"
document_id => "document_%id"
doc_as_upsert => true # upserts documents (e.g. if the document does not exist, creates a new record)
}
}
As a test I scheduled this every minute. To run my test I did:
logstash.bat -f logstash_postgre_ES.conf --debug
On my console I get:
...
[2022-04-04T16:10:07,065][DEBUG][logstash.agent ] Starting puma
[2022-04-04T16:10:07,081][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9600}
[2022-04-04T16:10:07,190][DEBUG][logstash.api.service ] [api-service] start
[2022-04-04T16:10:07,834][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
C:/Users/Admin/Desktop/Elastic_search/logstash-6.8.23/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/cronline.rb:77: warning: constant ::Fixnum is deprecated
[2022-04-04T16:10:09,577][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2022-04-04T16:10:09,948][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2022-04-04T16:10:09,951][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2022-04-04T16:10:11,717][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0xaf6cee2 sleep>"}
[2022-04-04T16:10:14,595][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2022-04-04T16:10:14,960][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2022-04-04T16:10:14,961][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2022-04-04T16:10:16,742][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0xaf6cee2 sleep>"}
[2022-04-04T16:10:19,604][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
The last part gets printed every 2 seconds or so; to me it seems like it is waiting to start, though I let this run for several minutes and it kept printing the same lines. In Kibana I checked whether my index got created, but that wasn't the case.
The logstash-plain.log gives the same output as the console.
Why is there no index created and filled with the PostgreSQL data?
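One thing worth double-checking with a config like the above: the jdbc input generally needs to be told where the PostgreSQL driver jar lives via jdbc_driver_library. A sketch is below; the jar path is an assumption, adjust it to wherever your driver was downloaded:

```
input {
  jdbc {
    # the jar path here is hypothetical -- point it at your local copy of the driver
    jdbc_driver_library => "C:/drivers/postgresql-42.2.5.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/product_data"
    jdbc_user => "postgres"
    jdbc_password => "<my_password>"
    statement => "SELECT * FROM public.vendor_product"
  }
}
```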

grok script for writing to logstash and rendering in Kibana

I am following a filebeat -> logstash -> elasticsearch -> kibana pipeline. Filebeat is working successfully and fetching the logs from the target file.
Logstash receives the logs on the input plugin, bypasses the filter plugin, and sends them on to the output plugin.
filebeat.yml
# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: log
enabled: true
paths:
- D:\serverslogs\ch5shdmtbuil100\TeamCity.BuildServer-logs\launcher.log
fields:
type: launcherlogs
- type: filestream
# Change to true to enable this input configuration.
enabled: false
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /var/log/*.log
# =================================== Kibana ===================================
setup.kibana:
host: "localhost:5601"
# ------------------------------ Logstash Output -------------------------------
output.logstash:
# The Logstash hosts
hosts: ["localhost:5044"]
logstash.conf
input{
beats{
port => "5044"
}
}
filter {
if [fields][type] == "launcherlogs"{
grok {
match => {"message" =>%{YEAR:year}-%{MONTH:month}-%{MONTHDAY:day}%{DATA:loglevel}%{SPACE}-%{SPACE}%{DATA:class}%{SPACE}-%{GREEDYDATA:message}}
}
}
}
output{
elasticsearch{
hosts => ["http://localhost:9200"]
index => "poclogsindex"
}
}
I am able to send the logs to Kibana, but the grok script is not rendering the desired JSON in Kibana.
The JSON rendered in Kibana does not show all the attributes parsed in the script. Please advise.
Your grok pattern does not match the sample you gave in the comments: several parts are missing (the brackets, the HH:mm:ss,SSS part and an additional space). Grok debuggers are your friends ;-)
Instead of :
%{YEAR:year}-%{MONTH:month}-%{MONTHDAY:day}%{DATA:loglevel}%{SPACE}-%{SPACE}%{DATA:class}%{SPACE}-%{GREEDYDATA:message}
Your pattern should be :
\[%{TIMESTAMP_ISO8601:timestamp}\] %{DATA:loglevel}%{SPACE}-%{SPACE}%{DATA:class}%{SPACE}-%{GREEDYDATA:message}
TIMESTAMP_ISO8601 matches this date/time format.
Additionally, I always double-quote the pattern, so the grok part would be:
grok {
match => {"message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{DATA:loglevel}%{SPACE}-%{SPACE}%{DATA:class}%{SPACE}-%{GREEDYDATA:message}"}
}
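Dropped into the asker's conditional, the whole corrected filter block would read:

```
filter {
  if [fields][type] == "launcherlogs" {
    grok {
      match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{DATA:loglevel}%{SPACE}-%{SPACE}%{DATA:class}%{SPACE}-%{GREEDYDATA:message}" }
    }
  }
}
```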

Create a new index in elasticsearch for each log file by date

Currently
I have completed the above task by using one log file and passing the data with Logstash to one index in Elasticsearch:
yellow open logstash-2016.10.19 5 1 1000807 0 364.8mb 364.8mb
What I actually want to do
If I have the following log files, which are named according to year, month and date:
MyLog-2016-10-16.log
MyLog-2016-10-17.log
MyLog-2016-10-18.log
MyLog-2016-11-05.log
MyLog-2016-11-02.log
MyLog-2016-11-03.log
I would like to tell Logstash to read by year, month and date and create the following indexes:
yellow open MyLog-2016-10-16.log
yellow open MyLog-2016-10-17.log
yellow open MyLog-2016-10-18.log
yellow open MyLog-2016-11-05.log
yellow open MyLog-2016-11-02.log
yellow open MyLog-2016-11-03.log
Please could I have some guidance on how I should go about doing this?
Thank you
It is as simple as this:
output {
elasticsearch {
hosts => ["localhost:9200"]
index => "MyLog-%{+YYYY-MM-DD}.log"
}
}
If the lines in the file contain datetime info, you should be using the date{} filter to set @timestamp from that value. If you do this, you can use the output format that @Renaud provided, "MyLog-%{+YYYY.MM.dd}".
If the lines don't contain the datetime info, you can use the input's path for your index name, e.g. "%{path}". To get just the basename of the path:
mutate {
gsub => [ "path", ".*/", "" ]
}
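Putting the two pieces together, a sketch of a path-based index name might look like the following. The extra lowercase step is added here because Elasticsearch only accepts lowercase index names, so "MyLog-..." would be rejected as-is:

```
filter {
  mutate {
    gsub => [ "path", ".*/", "" ]   # "/var/log/MyLog-2016-10-16.log" -> "MyLog-2016-10-16.log"
    lowercase => [ "path" ]         # Elasticsearch index names must be lowercase
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{path}"
  }
}
```

Within a single mutate, gsub runs before lowercase, so the basename is extracted first and then lowercased.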
Won't this configuration in the output section be sufficient for your purpose?
output {
elasticsearch {
embedded => false
host => localhost
port => 9200
protocol => http
cluster => 'elasticsearch'
index => "syslog-%{+YYYY.MM.dd}"
}
}

For ELK, sometimes Logstash says "no such index". How to set automatic index creation in ES when there is "no such index"?

I found a problem with ELK, can anyone help me?
logstash 2.4.0
elasticsearch 2.4.0
3 elasticsearch instances in a cluster
Sometimes Logstash warns:
“ "status"=>404, "error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index", ...”,
and it doesn't work. A curl -XGET of the ES indices confirms the index truly does not exist.
When this happens, I must kill -9 Logstash and start it again; then it can create the index in ES and it works OK again.
So, my question is: how do I set ES to automatically create the index when there is "no such index"?
My logstash conf is:
input {
tcp {
port => 10514
codec => "json"
}
}
output {
elasticsearch {
hosts => [ "9200.xxxxxx.com:9200" ]
index => "log001-%{+YYYY.MM.dd}"
}
}
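On the Elasticsearch side, automatic index creation is governed by the action.auto_create_index setting in elasticsearch.yml. A sketch is below; the pattern is an assumption based on the index name used above:

```yaml
# elasticsearch.yml -- allow indices matching log001-* to be auto-created on first write
action.auto_create_index: "+log001-*"
```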
