Failing to install ELK - elasticsearch

I am trying to install ELK, but I am getting the timed-out error below.

Check your logs. It might be that your machine's configuration is not good enough to run ELK (Elasticsearch alone can take around 1.2 GB of memory).
Use free -m and df -h to check whether the machine can support the ELK services.
On CentOS, the logs are located in /var/log/elasticsearch.
You can check your connection with curl localhost:9200.
If your Elasticsearch is deployed on another machine, set network.host: 0.0.0.0 in its configuration.
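For example, a quick resource and connectivity check could look like this (a sketch assuming a package install with the default data and log paths):
# free -m                                  # enough free RAM for Elasticsearch's ~1.2 GB footprint?
# df -h /var/lib/elasticsearch             # enough disk space on the data path?
# tail -n 50 /var/log/elasticsearch/elasticsearch.log
# curl localhost:9200                      # returns a JSON banner if Elasticsearch is up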

Update Elasticsearch configuration
Add the line below to elasticsearch.yml:
# vi /etc/elasticsearch/elasticsearch.yml
network.host: 0.0.0.0
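After changing elasticsearch.yml, restart the service so the new bind address takes effect (assuming a systemd-based CentOS/RHEL install; adjust for your init system):
# systemctl restart elasticsearch
# systemctl status elasticsearch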
If you have enabled authentication for Elasticsearch, update the Logstash configuration file and provide the user and password.
# vi /home/checkstyle.conf
output {
  elasticsearch {
    hosts => ['localhost:9200']
    user => 'admin'
    password => 'admin'
    sniffing => true
    manage_template => false
    index => "autocheckstylelogsyncfilebeat"
  }
}
Verify Elasticsearch is active
# curl -XGET 'localhost:9200'
If authentication is enabled:
# curl -XGET '<user>:<pwd>@localhost:9200'
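If Elasticsearch is up, the response is a small JSON banner along these lines (the values below are placeholders, not from the original post):
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "7.x.x" },
  "tagline" : "You Know, for Search"
}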

Related

Logstash error : Failed to publish events caused by: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer

I am using Filebeat to push my logs to Elasticsearch through Logstash, and the setup was working fine for me before. Now I am getting the Failed to publish events error.
filebeat | 2020-06-20T06:26:03.832969730Z 2020-06-20T06:26:03.832Z INFO log/harvester.go:254 Harvester started for file: /logs/app-service.log
filebeat | 2020-06-20T06:26:04.837664519Z 2020-06-20T06:26:04.837Z ERROR logstash/async.go:256 Failed to publish events caused by: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer
filebeat | 2020-06-20T06:26:05.970506599Z 2020-06-20T06:26:05.970Z ERROR pipeline/output.go:121 Failed to publish events: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer
filebeat | 2020-06-20T06:26:05.970749223Z 2020-06-20T06:26:05.970Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://xx.com:5044))
filebeat | 2020-06-20T06:26:05.972790871Z 2020-06-20T06:26:05.972Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://xx.com:5044)) established
Logstash pipeline
02-beats-input.conf
input {
  beats {
    port => 5044
  }
}
10-syslog-filter.conf
filter {
  json {
    source => "message"
  }
}
30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "index-%{+YYYY.MM.dd}"
  }
}
Filebeat configuration
Sharing my filebeat config at /usr/share/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /logs/*

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["xx.com:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
When I do telnet xx.xx 5044, this is what I see in the terminal:
Trying X.X.X.X...
Connected to xx.xx.
Escape character is '^]'
I had the same problem. Here are some steps which could help you find the core of your problem.
First I tested this way: Filebeat (localhost) -> Logstash (localhost) -> Elasticsearch -> Kibana, with each service on the same machine.
My /etc/logstash/conf.d/config.conf:
input {
  beats {
    port => 5044
    ssl => false
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
Here I deliberately disabled SSL (in my case it was the main cause of the issue, even though the certificates were correct).
After that, don't forget to restart Logstash and test with the sudo filebeat -e command.
If everything is OK, you won't see the 'connection reset by peer' error.
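A minimal sketch of that restart-and-test cycle (assuming systemd and the default config path /etc/filebeat/filebeat.yml):
sudo systemctl restart logstash
sudo filebeat -e -c /etc/filebeat/filebeat.yml    # run in the foreground and watch for publish errors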
I had the same problem. Starting Filebeat as a sudo user worked for me:
sudo ./filebeat -e
I had made some changes to the input plugin config, such as specifying ssl => false, but it did not work without starting Filebeat as a sudo-privileged user or as root.
In order to start Filebeat as a sudo user, the filebeat.yml file must be owned by root. Change ownership of the whole Filebeat folder to a sudo-privileged user with sudo chown -R some_sudo_user:some_group filebeat-7.15.0-linux-x86_64/, and then chown root filebeat.yml will change the ownership of that file.
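Spelled out as commands (the user, group, and version-specific directory are the placeholders from the answer above):
sudo chown -R some_sudo_user:some_group filebeat-7.15.0-linux-x86_64/
cd filebeat-7.15.0-linux-x86_64/
sudo chown root filebeat.yml
sudo ./filebeat -e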

Getting 401 Authorization Required from client Filebeat in ELK (Elasticsearch, Logstash, Kibana)

I'm trying to set up my first ELK environment on RHEL7 using this guide.
I installed all the required components (Nginx, Logstash, Kibana, Elasticsearch),
and I also installed Filebeat on the client machine that I'm trying to pull the logs from. But when checking the installation I get a 401:
[root@myd-vm666 beats-dashboards-1.1.0]# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.10.2</center>
</body>
</html>
In my Filebeat configuration I set the Logstash host and the certificate location as follows:
#================================ Outputs =====================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["16.XX.XXX.XXX:9200"]
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["16.XX.XXX.XXX:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
  tls:
    # List of root certificates for HTTPS server verifications
    certificate_authorities: "/etc/pki/tls/certs/logstash-forwarder.crt"
I verified that the logstash-forwarder.crt is in the right place.
And on my server, I have this configuration in /etc/logstash/conf.d/02-beats-input.conf:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
What am I missing? Is there another key/certificate I need to place on the client?
If you are connecting to AWS Elasticsearch with username/password security, and the versions of Filebeat and Logstash are compatible, then either:
In AWS, while configuring your Elasticsearch service, configure it for IP whitelisting instead of a master user,
or
configure Filebeat -> Logstash -> Elasticsearch with the master username/password; that will also work.
Reference: https://learningsubway.com/filebeat-401-unauthorized-error-with-aws-elasticsearch/
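For the second option, the Logstash output would look roughly like this (a sketch only; the endpoint, credentials, and index name are placeholders, assuming the AWS HTTPS endpoint accepts basic auth for the master user):
output {
  elasticsearch {
    hosts => ["https://my-domain.us-east-1.es.amazonaws.com:443"]
    user => "master_user"
    password => "master_password"
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}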

Logstash authentication error with Shield

I'm getting the following error while trying to output data to elasticsearch from logstash:
Failed to install template: [401]
{"error":"AuthenticationException[unable to authenticate user
[es_admin] for REST request [/_template/logstash]]","status":401}
{:level=>:error}
I have a configuration like this in Logstash:
if [type] == "signup" {
  elasticsearch {
    protocol => "http"
    user => "*****"
    password => "*******"
    document_type => "signup"
    host => "localhost"
    index => "signups"
  }
}
I have tried adding a user with the following command:
esusers useradd <username> -p <password> -r logstash
I also tried giving the admin role, but Logstash does not work for the admin user either.
localhost:9200 asks for the password, and after entering it the request works, but Logstash still gives the error.
I also had a similar issue. There is a known issue with Elasticsearch: if the password has an "#" symbol, this problem can happen. See the link below:
https://github.com/logstash-plugins/logstash-output-elasticsearch/issues/232
Also, some Elasticsearch documentation has instructions to include a "shield" configuration in elasticsearch.yml, but if you have only one Shield realm this is not needed. I don't have a shield configuration in elasticsearch.yml.
I see that you tried with both the logstash and admin users but failed.
To try with an admin-privileged user:
Please make sure your /etc/elasticsearch/shield/roles.yml has the content below for the admin role:
# All cluster rights
# All operations on all indices
admin:
  cluster: all
  indices:
    '*':
      privileges: all
Then test with something like:
curl -u es_admin:es_admin_password localhost:9200/_cat/health
To make a user with the logstash role work, the logstash role needs to be tweaked in roles.yml. I configured Logstash to use an admin-privileged user to write to Elasticsearch. I hope this helps.
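For reference, a rough sketch of what a dedicated logstash role in roles.yml could look like (this follows the pattern from the Shield 1.x documentation; check the exact privilege names against your Shield version):
logstash:
  cluster: indices:admin/template/get, indices:admin/template/put
  indices:
    'logstash-*':
      privileges: write, delete, create_index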

Kibana and Elasticsearch error

I want to access Kibana at http://IP:80.
Nevertheless, when I visit the page I get these errors:
Upgrade Required Your version of Elasticsearch is too old. Kibana
requires Elasticsearch 0.90.9 or above.
and
Error Could not reach http://localhost:80/_nodes. If you are using a
proxy, ensure it is configured correctly
I have been looking these problems up on the internet and I have included these lines, without success:
http.cors.enabled: true
http.cors.allow-origin: http://localhost:80
My Elasticsearch version is in fact 0.90.9.
What could I do? Please help me.
In my scenario, Logstash was using the node protocol by default. If you run this command:
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
and you get "number_of_nodes" : 2, it means Logstash is using the node protocol and has become part of the cluster, so Kibana is treating it as another node running an older version of Elasticsearch.
Solution:
Put protocol => "transport" in the Logstash config file used for shipping to ES, like:
input { }
output {
  elasticsearch {
    action => ... # string (optional), default: "index"
    embedded_http_port => ... # string (optional), default: "9200-9300"
    index => ... # string (optional), default: "logstash-%{+YYYY.MM.dd}"
    node_name => ... # string (optional)
    port => ... # string (optional)
    protocol => ... # string, one of ["node", "transport", "http"]
  }
}
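A concrete minimal version of that output might look like this (a sketch assuming Logstash 1.4.x, where the transport protocol talks to Elasticsearch on port 9300; host and port are placeholders):
output {
  elasticsearch {
    host => "localhost"
    port => "9300"
    protocol => "transport"
  }
}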
If you want to access Kibana on port 80, you have to set up a proxy; otherwise Kibana listens on 5601 by default. If you are still facing the same issue, use the latest versions of Logstash, Kibana, and Elasticsearch.
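A minimal sketch of such a proxy with Nginx (an assumption on my part, since the question does not say which web server is in front; it presumes Kibana runs locally on its default port 5601 and the server_name is a placeholder):
server {
    listen 80;
    server_name kibana.example.com;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}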
Download a newer version of Elasticsearch, as the version you are using is not compatible with Kibana. Try using the latest Elasticsearch version.

Logstash won't talk to Elasticsearch

I have Elasticsearch 1.3.2 via ELMA. The ELMA setup places the ES REST API behind an Apache reverse proxy with SSL and basic auth.
On a separate host, I am trying to set up Logstash 1.4.2 to forward some information over to ES. The output part of my Logstash config is as follows:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "192.168.248.4"
  }
}
This produces the following error:
log4j, [2014-09-25T01:40:02.082] WARN: org.elasticsearch.discovery: [logstash-ubuntu-jboss-39160-4018] waited for 30s and no initial state was set by the discovery
I then tried setting the protocol to HTTP as follows:
elasticsearch {
  host => "192.168.248.4"
  protocol => "http"
}
This produces a connection refused error:
Faraday::ConnectionFailed: Connection refused - Connection refused
I have then tried setting the port to 9200 (which gives a connection refused error) and 9300, which gives:
Faraday::ConnectionFailed: End of file reached
Any ideas on how I can get logstash talking to my ES?
The way to tell Logstash where to output to ES is:
elasticsearch {
  protocol => "http"
  host => "EShostname:EsportNo"
}
In your case, it should be,
elasticsearch {
  protocol => "http"
  host => "192.168.248.4:9200"
}
If it's not working, then the problem is with the network address configuration. To make sure you have provided the correct configuration:
Check the http.port property of ES
Check the network.bind_host property of ES
Check the network.publish_host property of ES
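These settings live in elasticsearch.yml; a sketch of what to look for (the values here are only illustrative, substitute your own):
# /etc/elasticsearch/elasticsearch.yml
http.port: 9200
network.bind_host: 0.0.0.0
network.publish_host: 192.168.248.4
Also keep in mind that since ELMA puts the REST API behind an Apache reverse proxy with SSL and basic auth, the Logstash http output may need to point at whatever host and port the proxy actually exposes, rather than 9200 directly.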
