elasticsearch not receiving from localhost - elasticsearch

I run Logstash (192.168.56.100) and Elasticsearch (192.168.56.100) on the same host, but Elasticsearch just creates the index and never receives any data. When I run Logstash (192.168.56.101) on a different host, Elasticsearch (192.168.56.100) does receive data. These are my Elasticsearch and Logstash configs.
Example data: 12345~78904
input {
  file {
    path => "/tmp/tmp.log"
  }
}
filter {
  grok {
    match => ["message", "%{INT:shopID}~%{INT:userID}"]
  }
}
output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "shop"
  }
}
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
elasticsearch version 5.1.1
logstash version 2.4.0

Do you have any errors in the Logstash log? Instead of 0.0.0.0 in elasticsearch.yml, try setting _local_ or 127.0.0.1. https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#network-interface-values
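For reference, that change would look like this in elasticsearch.yml (a sketch; use one of the two values suggested above, not both):
# bind only to the loopback interface
network.host: _local_
# or, equivalently for IPv4 loopback:
# network.host: 127.0.0.1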

How to add host name for beats input in logstash

Let me explain my existing structure. I have 4 servers (Web Server, API Server, Database Server, SSIS Server) with Filebeat and Winlogbeat installed on all four, and from there I am getting all logs into my Logstash. The problem is that every log arrives in the message body, and for some messages I am having difficulty writing the correct grok pattern. Is there any way I can get the pattern from Kibana? (FYI: as of now I am storing all logs in Elasticsearch, which I can see through Kibana.)
My Logstash config looks like this:
1. Api-Pipeline
input {
  beats {
    host => "IP Address where my filebeat (API Server) is running"
    port => 5044
  }
}
2. DB Pipeline
input {
  beats {
    host => "IP Address where my filebeat (Database Server) is running"
    port => 5044
  }
}
It works when I use only the port, but the moment I add a host it stops working. Can anyone help me here?
Below is what I am trying to achieve.
Here I made a change. Does it work this way? I need to write lengthy filters, which is why I wanted to keep them in separate files.
Filebeat.yml on API Server
-----------------------------------------------------------------------------------------
filebeat.inputs:
- type: log
  source: 'ApiServerName'  # MyAPIServerName (same server where I have installed Filebeat)
  enabled: true
  paths:
    - C:\Windows\System32\LogFiles\SMTPSVC1\*.log
    - E:\AppLogs\*.json
  scan_frequency: 10s
  ignore_older: 24h
filebeat.config.modules:
  path: C:\Program Files\Filebeat\modules.d\iis.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
  host: "kibanaServerName:5601"
output.logstash:
  hosts: ["logstashServerName:5044"]
Logstash Configuration
----------------------------------------------------------------
Pipeline.yml
- pipeline.id: beats-server
  config.string: |
    input { beats { port => 5044 } }
    output {
      if [source] == 'APISERVERNAME' {
        pipeline { send_to => apilog }
      } else if [source] == 'DBSERVERNAME' {
        pipeline { send_to => dblog }
      } else {
        pipeline { send_to => defaultlog }
      }
    }
- pipeline.id: apilog-processing
  path.config: "/Logstash/config/pipelines/apilogpipeline.conf"
- pipeline.id: dblog-processing
  path.config: "/Logstash/config/pipelines/dblogpipeline.conf"
- pipeline.id: defaultlog-processing
  path.config: "/Logstash/config/pipelines/defaultlogpipeline.conf"
1. apilogpipeline.conf
----------------------------------------------------------
input {
  pipeline {
    address => apilog
  }
}
output {
  file {
    path => ["C:/Logs/apilog_%{+yyyy_MM_dd}.log"]
  }
}
2. dblogpipeline.conf
---------------------------------------------------------
input {
  pipeline {
    address => dblog
  }
}
output {
  file {
    path => ["C:/Logs/dblog_%{+yyyy_MM_dd}.log"]
  }
}
3. defaultlogpipeline.conf
---------------------------------------------------------
input {
  pipeline {
    address => defaultlog
  }
}
output {
  file {
    path => ["C:/Logs/defaultlog_%{+yyyy_MM_dd}.log"]
  }
}
It works the other way around, i.e. it's not Logstash that connects to Filebeat but Filebeat that sends data to Logstash. So in your input section, the host needs to be the host Logstash itself is listening on, i.e. the machine where Logstash is running, not the Filebeat machines.
beats {
  host => "logstash-host"
  port => 5044
}
Then in your Filebeat configuration, you need to configure the Logstash output like this:
output.logstash:
  hosts: ["logstash-host:5044"]
Since you have multiple Filebeat sources and want to apply a dedicated pipeline to each, you can define a custom field or tag in each Filebeat config (e.g. source: db, source: api-server, etc.) and then apply different logic in Logstash based on those values.
filebeat.inputs:
- type: log
  fields:
    source: 'APISERVERNAME'
  fields_under_root: true
In Logstash, you can either leverage conditionals or pipeline-to-pipeline communication in order to apply different logic based on event data.
The latter link shows an example of the distributor pattern, which is pretty much what you're after; a conditional-based variant is sketched below.
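For illustration, a single-pipeline variant using plain conditionals on the Filebeat-added source field might look roughly like this (a sketch; the index names are placeholders, not taken from your setup):
input {
  beats { port => 5044 }
}
output {
  # route by the custom field set in each Filebeat config
  if [source] == "APISERVERNAME" {
    elasticsearch { hosts => ["localhost:9200"] index => "api-logs-%{+YYYY.MM.dd}" }
  } else if [source] == "DBSERVERNAME" {
    elasticsearch { hosts => ["localhost:9200"] index => "db-logs-%{+YYYY.MM.dd}" }
  } else {
    elasticsearch { hosts => ["localhost:9200"] index => "default-logs-%{+YYYY.MM.dd}" }
  }
}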

Change http_port number of Elasticsearch to 80

I am trying to set up ELK on Ubuntu 18.04 and I only have port 80 for now to test the Elasticsearch dashboard, so I modified elasticsearch.yml as below:
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: x.x.x.x
#
# Set a custom port for HTTP:
#
http.port: 80
#
# For more information, consult the network module documentation.
#
But the Logstash logs say:
[2019-05-10T08:46:01,216][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://x.x.x.x:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://x.x.x.x:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
I think it is trying to find Elasticsearch on 9200.
Any help on this will be appreciated.
You need to edit the output port in your Logstash config; you can look it up here.
input { stdin { } }
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }  # <-- change this
  stdout { codec => rubydebug }
}
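With Elasticsearch bound to port 80 as in the elasticsearch.yml above, that output would become something like the following (a sketch; substitute your actual IP for x.x.x.x):
output {
  # point Logstash at the HTTP port Elasticsearch is actually listening on
  elasticsearch { hosts => ["x.x.x.x:80"] }
  stdout { codec => rubydebug }
}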
Also make sure http.port is changed in the elasticsearch.yml file in the config directory.

UnresolvedAddressException in Logstash+elasticsearch

Logstash is not working on my system (Windows 7). I am using Logstash 1.4.0, Kibana 3.0.0, and Elasticsearch 1.3.0.
I created a logstash.conf file in logstash-1.4.0 (Logstash-1.4.0/logstash.conf).
input {
  file {
    path => "C:/apache-tomcat-7.0.62/logs/*access*"
  }
}
filter {
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch { host => "localhost:9205" }
}
And I run Logstash:
c:\logstash-1.4.0\bin>logstash agent -f ../logstash.conf
I get the exception below:
log4j, [2015-06-09T15:24:45.342] WARN: org.elasticsearch.transport.netty: [logstash-IT-BHARADWAJ-512441] exception caught on transport layer [[id: 0x0ee1f960]], closing connection
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:123)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:621)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
etc……..
How do I solve this problem?
You can't connect to the socket. By default Elasticsearch listens on port 9200 for HTTP and 9300 for the transport (TCP) protocol. Try changing it to 9200 first, since that is the default.
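For example, one way to test against the default HTTP port from Logstash 1.4 is the elasticsearch_http output (a sketch; note that the plain elasticsearch output in this version defaults to the node/transport protocol on 9300, and make sure the config uses straight ASCII quotes):
output {
  # elasticsearch_http sends documents over HTTP to port 9200
  elasticsearch_http {
    host => "localhost"
    port => 9200
  }
}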

Unable to load index to elasticsearch using logstash

I'm unable to load an index into Elasticsearch using Logstash. The following are my logstash.conf settings. To me the config settings seem fine. Please help if I'm missing something.
Assume that the Logstash and Elasticsearch services are running fine.
input {
  file {
    type => "IISLog"
    path => "C:/inetpub/logs/LogFiles/W3SVC1/u_ex140930.log"
    start_postition => "beginning"
  }
}
output {
  stdout { debug => true debug_format => "ruby" }
  elasticsearch_http {
    host => "localhost"
    port => 9200
    protocol => "http"
    index => "iislogs2"
  }
}
You can start with checking the following:
Check the logstash log file for errors.
Run the following command: telnet localhost 9200 and verify you are able to connect (see the quick check below).
Check elasticsearch log files for errors.
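For example, a quick connectivity check from the same machine (assuming the default port and that curl is available) could be:
# verify Elasticsearch answers on its HTTP port
curl http://localhost:9200
# list existing indices to see whether iislogs2 was created (requires the _cat API, Elasticsearch 1.0+)
curl "http://localhost:9200/_cat/indices?v"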

Logstash with Elasticsearch

I am trying to connect Logstash with Elasticsearch but cannot get it working.
Here is my logstash conf:
input {
  stdin {
    type => "stdin-type"
  }
  file {
    type => "syslog-ng"
    # Wildcards work, here :)
    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
  }
}
output {
  stdout { }
  elasticsearch {
    type => "all"
    embedded => false
    host => "192.168.0.23"
    port => "9300"
    cluster => "logstash-cluster"
    node_name => "logstash"
  }
}
And I only changed these details in my elasticsearch.yml:
cluster.name: logstash-cluster
node.name: "logstash"
node.master: false
network.bind_host: 192.168.0.23
network.publish_host: 192.168.0.23
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost"]
With these configurations I could not make Logstash connect to ES. Can someone please suggest where I am going wrong?
First, I suggest matching your "type" attributes up.
In your input you have 2 different types, and in your output you have a type that doesn't exist in any of your inputs.
For testing, change your output to:
output {
  stdout { }
  elasticsearch {
    type => "stdin-type"
    embedded => false
    host => "192.168.0.23"
    port => "9300"
    cluster => "logstash-cluster"
    node_name => "logstash"
  }
}
Then, have you created an index on your ES instance?
From the guides I've used, and my own experience (others may have another way that works), I've always used an index so that when I push something into ES, I can use the ES API and quickly check whether the data has gone in or not.
Another suggestion would be to simply run your Logstash forwarder and indexer with debug flags to see what is going on behind the scenes.
Can you connect to your ES instance on 127.0.0.1? Also, try experimenting with the port and host. As a rather new user of the Logstash system, I found that my understanding at the start went against the reality of the setup. Sometimes the host IP isn't what you think it is, and neither is the port. If you are willing to check your network and identify listening ports and IPs, you can sort this out; otherwise do some intelligent trial and error.
I highly recommend this guide as a comprehensive starting point. Both points I've mentioned are (in)directly touched upon in the guide. While the guide has a slightly more complex starting point, the ideas and concepts are thorough.
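For example, to identify which addresses and ports Elasticsearch is actually listening on (a sketch, assuming a Linux host with ss and curl available):
# show listeners on the default HTTP (9200) and transport (9300) ports
ss -lntp | grep -E ':(9200|9300)'
# confirm the HTTP endpoint answers on the address Logstash is configured with
curl http://192.168.0.23:9200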
I could not make Logstash connect to ES
This happened to me when my logstash and elasticsearch versions were out of sync
from the docs:
VERSION NOTE: Your Elasticsearch cluster must be running Elasticsearch
1.1.1. If you use any other version of Elasticsearch, you should set protocol => http in this plugin.
Setting protocol => http explicitly as outlined above fixed it for me.
As Adam said, the thing was the protocol setting, so only for testing I did:
logstash -e 'input { stdin { } } output { elasticsearch { host => localhost protocol => "http" port => "9200" } }'
And that seems to be working on OSX. Issue here.
The following has been tested on elasticsearch:5.4.0 and logstash:5.4.0 (I used Docker containers on OpenStack).
For Elasticsearch:
/usr/share/elasticsearch/config/elasticsearch.yml should look as follows:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
No change to any other file in /usr/share/elasticsearch/config/ is required.
Run Elasticsearch with a simple command:
sudo docker run --name elasticsearch -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.4.0
For Logstash:
/usr/share/logstash/config/logstash.yml should look as follows:
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
# http://111.*.*.11:9200 is the IP & Port of Elasticsearch's server
xpack.monitoring.elasticsearch.url: http://111.*.*.11:9200
# "elastic" is the user name of Elasticsearch's account
xpack.monitoring.elasticsearch.username: elastic
# "changeme" is the password of Elasticsearch's "elastic" user
xpack.monitoring.elasticsearch.password: changeme
No change to any other file in /usr/share/logstash/config/ is required.
/usr/share/logstash/pipeline/logstash.conf should look as follows:
input {
  file {
    path => "/usr/share/logstash/test_i.log"
  }
}
output {
  elasticsearch {
    # http://111.*.*.11:9200 is the IP & port of the Elasticsearch server
    hosts => ["http://111.*.*.11:9200"]
    # "elastic" is the user name of the Elasticsearch account
    user => "elastic"
    # "changeme" is the password of Elasticsearch's "elastic" user
    password => "changeme"
  }
}
Run Logstash with a simple command:
sudo docker run --name logstash --expose 25826 -p 25826:25826 docker.elastic.co/logstash/logstash:5.4.0 --debug
NOTE: You do not need to do any configuration before running the Docker containers. First run the containers with the simple commands above, then go to the corresponding directory, make the required changes, save them, exit the container, and restart it; the changes will be picked up.
I had the same error message, and it took me a while to discover in the TRACE log of Elasticsearch's discovery process that the IP address Logstash was using was incorrect.
I had several IP addresses and Logstash used the wrong one. After that, everything went okay.
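If you want to reproduce that, discovery logging can be raised to TRACE, for example via elasticsearch.yml (a sketch; the exact logger key differs between Elasticsearch versions):
# elasticsearch.yml - assumption: an Elasticsearch 5.x-style static logger setting
logger.org.elasticsearch.discovery: TRACE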
First, you don't need to create an index in ES: when Logstash assigns the index, it will be created automatically.
By the way, if you do not set the index value, it defaults to "logstash-%{+YYYY.MM.dd}" (you can check this in the official Logstash guide).
Second, you don't need to keep your Elasticsearch type the same as your input type; you can also write your output like this:
output {
  stdout { }
  elasticsearch {
    embedded => false
    host => "192.168.0.23"
    port => "9300"
    index => "a_new_index"
    cluster => "logstash-cluster"
    node_name => "logstash"
    document_type => "my-own-type"
  }
}
With the "document_type",you can save your logs into the any type you want~
Finally,if you don`t want to assign the "document_type";it will be set the same with your "input type"
Or even you forget to assign type in "all of the configuration file";the type will be set as default value as logs~
OK,enjoy it~
I have a two-node Elasticsearch cluster, and a single node for Logstash.
This config works for me:
Node elk1:
#/etc/elasticsearch/elasticsearch.yml
script.disable_dynamic: true
cluster.name: elk-fibra
node.name: "elk1"
node.master: true
node.data: true
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["elk1.lab.fibra"]
root@elk1:
#/etc/logstash/conf.d/30-lumberjack-output.conf
output {
  elasticsearch { host => localhost protocol => "http" port => "9200" }
  stdout { codec => rubydebug }
}
Node elk2:
#/etc/elasticsearch/elasticsearch.yml
script.disable_dynamic: true
cluster.name: elk-fibra
node.name: "elk2"
node.master: false
node.data: true
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["elk1.lab.fibra"]
Input into Logstash over HTTP, output to Elasticsearch:
input {
  http {
    port => 5044
    response_headers => {
      "Access-Control-Allow-Origin" => "*"
      "Content-Type" => "text/plain"
      "Access-Control-Allow-Headers" => "Origin, X-Requested-With, Content-Type, Accept"
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    user => elastic
    password => ****
  }
}
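A quick way to exercise this pipeline (a sketch, assuming Logstash is reachable as localhost on port 5044 from where you run it) is to POST a test event to the HTTP input:
# send a single test line to the Logstash http input
curl -X POST http://localhost:5044 -H 'Content-Type: text/plain' -d 'hello from curl'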
