Logstash won't talk to Elasticsearch

I have Elasticsearch 1.3.2 installed via ELMA. The ELMA setup places the ES REST API behind an Apache reverse proxy with SSL and basic auth.
On a separate host, I am trying to set up Logstash 1.4.2 to forward some information over to ES. The output section of my Logstash config is as follows:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "192.168.248.4"
  }
}
This produces the following error:
log4j, [2014-09-25T01:40:02.082] WARN: org.elasticsearch.discovery: [logstash-ubuntu-jboss-39160-4018] waited for 30s and no initial state was set by the discovery
I then tried setting the protocol to HTTP as follows:
elasticsearch {
  host => "192.168.248.4"
  protocol => "http"
}
This produces a connection refused error:
Faraday::ConnectionFailed: Connection refused - Connection refused
I then tried setting the port to 9200 (which gives a connection refused error) and 9300, which gives:
Faraday::ConnectionFailed: End of file reached
Any ideas on how I can get logstash talking to my ES?

The way to point the Logstash elasticsearch output at ES is:
elasticsearch {
  protocol => "http"
  host => "EShostname:EsportNo"
}
In your case, it should be:
elasticsearch {
  protocol => "http"
  host => "192.168.248.4:9200"
}
If it's still not working, the problem is with the network address configuration. To make sure you have provided the correct configuration, check these Elasticsearch settings (sketched below):
Check the http.port property of ES
Check the network.bind_host property of ES
Check the network.publish_host property of ES
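These properties live in elasticsearch.yml on the ES host. A minimal sketch with illustrative values only (the addresses below are assumptions, not the questioner's actual settings):
# elasticsearch.yml (ES 1.x), illustrative values
http.port: 9200                      # REST port that protocol => "http" connects to
network.bind_host: 0.0.0.0           # interface(s) ES listens on; a localhost-only bind would explain "connection refused" from another host
network.publish_host: 192.168.248.4  # address ES advertises to clients and other nodes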

Related

Source of host Variable in Logstash

I'm using ELK (Kibana, Elasticsearch, and Logstash running as Docker containers) and LogstashTcpSocketAppender in a Spring Boot app to forward data to Logstash.
Logstash config is very simple:
input {
  tcp {
    port => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]
  }
}
The issue is that in Kibana I see a host field with the value "gateway" ("host: gateway").
What I don't understand is HOW this field is populated and added to the logstash-* index in Kibana, because:
I do not set any host variable in the logback config and can clearly see it's not coming from there.
It might be set by Logstash itself, but I couldn't find a concrete reference in the Logstash documentation for how this field is populated. And what does "gateway" really mean?
This is very confusing to me.
Could anyone please explain? Thanks in advance.

Filebeat sending logs to Logstash through nginx proxy

I am trying to make Filebeat send logs to Logstash using Docker containers.
The problem is that I have an nginx proxy in between, and the Filebeat-Logstash communication is not based on HTTPS.
What is the solution to make it work?
I was trying to make nginx process TCP streams by configuring it this way:
stream {
  upstream logs {
    server logstash:5044;
  }
  server {
    listen 5088;
    proxy_pass logs;
  }
}
And this is my Filebeat output config:
output.logstash:
  hosts: ["IP_OF_NGINX:5088"]
  ssl.verification_mode: none
But it seems not to work.
Filebeat shows me this error in its logs:
pipeline/output.go:100 Failed to connect to backoff(async(tcp://IP_OF_NGINX:5088)): dial tcp IP_OF_NGINX:5088: connect: connection refused
Any help?
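This is the same "connection refused" failure discussed in the next question: nothing is accepting TCP connections on IP_OF_NGINX:5088. A quick first check from the Filebeat host (IP_OF_NGINX is the question's placeholder, not a real address):
telnet IP_OF_NGINX 5088
If this is refused as well, two common causes are the stream block not being loaded (it must sit at the top level of nginx.conf, outside the http block) and port 5088 not being published by the nginx container.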

Connection refused from filebeat to logstash

I have an issue when I try to connect to my Logstash from Filebeat.
Logstash version 2.0.0
Filebeat 1.0.1
Here is the error:
INFO Connecting error publishing events (retrying): dial tcp 192.168.50.5:14560: getsockopt: connection refused
This is my Logstash configuration:
input {
  beats {
    codec => json
    port => 14560
  }
}
output {
  elasticsearch { hosts => ["localhost"] }
  stdout { codec => rubydebug }
}
Here is my Filebeat configuration:
logstash:
  # The Logstash hosts
  hosts: ["192.168.50.5:14560","192.168.50.15:14560"]
I installed the Logstash beats input plugin, as documented:
./plugin install logstash-input-beats
I have completely run out of ideas, and I would love to use this framework, but it seems not to be responding at all.
Any ideas would be great.
This happens when your Logstash is not up, or the Logstash host is not reachable (due to a firewall, maybe) from the host running Filebeat. Try a telnet to 192.168.50.5 14560 from the host you are running Filebeat on.
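For example, run from the Filebeat host:
telnet 192.168.50.5 14560
If the connection is refused, start Logstash (and confirm the beats input is actually listening on 14560) or open the port in the firewall before retrying Filebeat.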

Kibana and Elasticsearch error

I want to access Kibana via http://IP:80.
Nevertheless, when I visit the page I obtain these errors:
Upgrade Required Your version of Elasticsearch is too old. Kibana
requires Elasticsearch 0.90.9 or above.
and
Error Could not reach http://localhost:80/_nodes. If you are using a
proxy, ensure it is configured correctly
I have been looking up these problems on the internet and I have included these lines without success...
http.cors.enabled: true
http.cors.allow-origin: http://localhost:80
My Elasticsearch version is in fact 0.90.9.
What could I do? Please help me.
In my scenario, Logstash was using the node protocol by default. Run this command:
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
If you get "number_of_nodes" : 2, it means Logstash is using the node protocol and joining the cluster as a node, so Kibana picks up that Logstash client node as an Elasticsearch node running an older version.
Solution: put protocol => "transport" in the Logstash config file used for shipping to ES. The relevant output options are:
input { }
output {
  elasticsearch {
    action => ... # string (optional), default: "index"
    embedded_http_port => ... # string (optional), default: "9200-9300"
    index => ... # string (optional), default: "logstash-%{+YYYY.MM.dd}"
    node_name => ... # string (optional)
    port => ... # string (optional)
    protocol => ... # string, one of ["node", "transport", "http"]
  }
}
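Concretely, a minimal output section using the transport protocol might look like the sketch below; the host and port values are placeholders for your own ES node (9300 is the usual transport port, while 9200 is HTTP):
output {
  elasticsearch {
    host => "localhost"       # your ES node
    port => "9300"            # transport port
    protocol => "transport"   # connect as a transport client instead of joining as a node
  }
}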
If you want to access Kibana on port 80, you have to set up a proxy; otherwise Kibana listens on 5601 by default. If you are still facing the same issue, use the latest versions of Logstash + Kibana + Elasticsearch.
Download a newer version of Elasticsearch, as the version you are using is not compatible with Kibana. Try using the latest Elasticsearch version.

Logstash ganglia input plugin - udp listener died

I am using the Logstash ganglia input plugin. The Ganglia gmond daemon and Logstash are installed on the same machine, and gmond sends metrics to itself. Here is the gmond configuration:
udp_send_channel {
  host = 10.0.3.167
  port = 8649
  ttl = 1
}
The Logstash configuration file looks like this:
input {
  ganglia {
    host => "127.0.0.1"
    type => "ganglia"
  }
}
output {
  elasticsearch {
    host => "10.0.3.168"
  }
}
When Logstash tries to bind to the port that gmond unicasts to, I get this error:
{:timestamp=>"2014-01-04T12:50:38.422000+0000",
:message=>"ganglia udp listener died",
:address=>"127.0.0.1:8649",
:exception=>#<SocketError: bind: name or service not known>,
:backtrace=>
[
"org/jruby/ext/socket/RubyUDPSocket.java:160:in `bind'",
"file:/etc/logstash/logstash.jar!/logstash/inputs/ganglia.rb:61:in `udp_listener'",
"file:/etc/logstash/logstash.jar!/logstash/inputs/ganglia.rb:39:in `run'",
"file:/etc/logstash/logstash.jar!/logstash/pipeline.rb:156:in `inputworker'",
"file:/etc/logstash/logstash.jar!/logstash/pipeline.rb:150:in `start_input'"
],
:level=>:warn}
Any help is appreciated, thanks in advance.
I am answering my own question; hopefully this saves someone time tracking down this error.
The Logstash ganglia input plugin could not bind to the port because the ganglia monitoring agent (gmond) was already running on the same port as Logstash. Either redirect gmond to another port or reconfigure the ganglia input plugin of Logstash.
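For the second option, the ganglia input accepts a port setting, which defaults to 8649, exactly the port gmond is holding here. A minimal sketch, where 8666 is an arbitrary free UDP port chosen for illustration:
input {
  ganglia {
    host => "127.0.0.1"
    port => 8666       # any free UDP port that does not clash with gmond's 8649
    type => "ganglia"
  }
}
gmond's udp_send_channel port would then need to be updated to match.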
