The code below is my Logstash conf file. I provide my nginx access log file as input and send the output to Elasticsearch. I also write the output to a text file, which works fine, but the output is never written to Elasticsearch.
input {
  file {
    path => "filepath"
    start_position => "beginning"
  }
}
output {
  file {
    path => "filepath"
  }
  elasticsearch {
    host => localhost
    port => "9200"
  }
}
I also tried executing the Logstash binary from the command line with the -e option:
input { stdin { } } output { elasticsearch { host => localhost } }
which works fine; the output gets written to Elasticsearch. But in the former case it doesn't. Help me solve this.
I tried a few things, and I have no idea why your case with just host works; if I try it, I get timeouts. This is the configuration that works for me:
elasticsearch {
  protocol => "http"
  host => "localhost"
  port => "9200"
}
I tested this with Logstash 1.4.2 and Elasticsearch 1.4.4.
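For reference, combining the asker's input with this output section gives a minimal end-to-end sketch for those same versions (the "filepath" values are the asker's placeholders, left as-is):
input {
  file {
    path => "filepath"                # placeholder for the nginx access log
    start_position => "beginning"
  }
}
output {
  file {
    path => "filepath"                # placeholder for the debug copy on disk
  }
  elasticsearch {
    protocol => "http"                # force the HTTP transport; avoids the node-protocol timeouts
    host => "localhost"
    port => "9200"
  }
}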
I am very new to ELK. I installed ELK version 5.6.12 on a CentOS server. Elasticsearch and Kibana work fine, but I cannot connect Logstash to Elasticsearch.
I have set the environment variables as follows:
export JAVA_HOME=/usr/local/jdk1.8.0_131
export PATH=/usr/local/jdk1.8.0_131/bin:$PATH
I run a simple test:
bin/logstash -e 'input { stdin { } } output { elasticsearch { host => localhost:9200 protocol => "http" port => "9200" } }'
I get this error:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /etc/logstash/logstash.yml/log4j2.properties. Using default config which logs errors to the console
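The second message hints at the cause: the path /etc/logstash/logstash.yml/log4j2.properties suggests --path.settings (or LS_SETTINGS_DIR) was pointed at the logstash.yml file itself, while the flag expects the directory containing it. A corrected invocation, assuming the settings live in /etc/logstash, would be:
# point --path.settings at the settings directory, not at logstash.yml
bin/logstash --path.settings /etc/logstash -e 'input { stdin { } } output { stdout {} }'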
Simple "slash" mentioned in official documentation of Logstash works like following :
$ bin/logstash -e 'input { stdin { } } output { stdout {} }'
Hello
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
The stdin plugin is now waiting for input:
{
    "@version" => "1",
    "host" => "localhost",
    "@timestamp" => 2018-11-01T04:44:58.648Z,
    "message" => "Hello"
}
What could be the problem?
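One likely culprit, for what it's worth: the elasticsearch output in Logstash 5.x no longer accepts the host and protocol options from the 1.x plugin; the 5.x option is hosts. A corrected test command, assuming a default local Elasticsearch, would look like:
bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost:9200"] } }'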
I have a Vagrant image in which there is an application; it is reachable inside the Vagrant image on port 2401, and depending on the service you want, you call a specific address (e.g. "curl -X GET http://127.0.0.1:2401/provider/ipfix"). To retrieve the output outside the Vagrant machine I have set up port forwarding in the Vagrantfile ("config.vm.network :forwarded_port, guest: 2401, host: 8080"), so using the command "curl -X GET http://127.0.0.1:8080/provider/ipfix" from the host I get the same output.
I am now at the phase of installing Logstash. My issue is that when I run Logstash with the config file, I get the error "Address already in use". I also tried using fields to route to the specific output. Below is my Logstash config file. What workaround would you suggest?
input {
  tcp {
    host => localhost
    port => 8080
    add_field => {
      "field1" => "provider"
      "field2" => "ipfix"
    }
    codec => netflow {
      versions => [10]
      target => ipfix
    }
    type => ipfix
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    index => "IPFIX-logstash-%{+YYYY.MM.dd}"
  }
}
If I'm reading this right, you're expecting Logstash to use TCP to connect to localhost:8080 to fetch information that it will then process.
That's not what this input does. This creates a listener on 127.0.0.1:8080, so the error message about 'already in use' is quite correct.
Considering you're using curl as an example of fetching this data, I suggest the http_poller plugin as a better fit for what you want.
input {
  http_poller {
    urls => {
      IPFIX => "http://127.0.0.1:8080/provider/ipfix"
    }
    request_timeout => 30
    schedule => { "every" => "5s" }
    tags => [ 'ipfix' ]
  }
}
This will hit the known-working curl URL every 5 seconds with a GET request.
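One caveat on the output side, assuming the elasticsearch output from the question is kept: Elasticsearch rejects index names containing uppercase letters, so "IPFIX-logstash-%{+YYYY.MM.dd}" will fail to index. A lowercased sketch:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    # index names must be all lowercase in Elasticsearch
    index => "ipfix-logstash-%{+YYYY.MM.dd}"
  }
}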
I have installed the ELK stack on Windows and configured Logstash to read an Apache log file. I can't seem to see the output in Elasticsearch. I am very new to the ELK stack.
Environment Setup
Elasticsearch: http://localhost:9200/
Logstash :
Kibana : http://localhost:5601/
All 3 applications above are running as a service.
I have created a file called "logstash.conf" at "C:\Elk\logstash\conf\logstash.conf" to read the Apache logs, with the following:
input {
  file {
    path => "C:\Elk\apache.log"
    start_position => "beginning"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
I then restarted my Logstash service and now wish to see whether Elasticsearch is indexing the content of my log. How do I go about doing this?
Try adding the following lines to your Logstash conf and let us know if there are any grok parsing failures, which would mean the pattern used in your filter section is not correct:
output {
  stdout { codec => json }
  file { path => "C:/POC/output3.txt" }
}
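To answer the final question directly: the cat indices API lists each index along with its document count, so you can check whether anything was indexed at all:
curl http://localhost:9200/_cat/indices?v
Separately, the file input on Windows is usually written with forward slashes (as in the file output above), so "C:/Elk/apache.log" may be worth trying if no index appears.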
I wrote a .conf file as in the example given in the Logstash documentation and tried to run it. Logstash started, but when I gave the input, it produced the error mentioned in the title.
I am using Windows 8.1, and the .conf file is saved in logstash-1.5.0/bin.
Here is the .conf file:
input { stdin { } }
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
Try with this; "logstash" should be the same as the name of your cluster in elasticsearch.yml:
output {
  elasticsearch {
    cluster => "logstash"
  }
}
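For context, the cluster option has to match the cluster.name setting on the Elasticsearch side; assuming the cluster really is named "logstash" as suggested, the relevant line in elasticsearch.yml would be:
# elasticsearch.yml
cluster.name: logstash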
I found the error: it was because I had not installed Elasticsearch before running Logstash.
Thanks for trying to help me out.
I'm unable to load an index into Elasticsearch using Logstash. The following are my logstash.conf settings. The config settings seem fine to me; please help if I'm missing something.
Assume that the Logstash and Elasticsearch services are running fine.
input {
  file {
    type => "IISLog"
    path => "C:/inetpub/logs/LogFiles/W3SVC1/u_ex140930.log"
    start_postition => "beginning"
  }
}
output {
  stdout { debug => true debug_format => "ruby" }
  elasticsearch_http {
    host => "localhost"
    port => 9200
    protocol => "http"
    index => "iislogs2"
  }
}
You can start with checking the following:
Check the logstash log file for errors.
Run telnet localhost 9200 and verify you are able to connect.
Check elasticsearch log files for errors.
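Two things in the posted config are likely to surface in those logs, for what it's worth: start_postition is a typo for start_position (an unknown setting stops Logstash from starting), and on Logstash 1.4+ the elasticsearch_http output is deprecated in favor of the elasticsearch output with protocol => "http", as used in the first answer above. A corrected sketch under those assumptions:
input {
  file {
    type => "IISLog"
    path => "C:/inetpub/logs/LogFiles/W3SVC1/u_ex140930.log"
    start_position => "beginning"   # note the spelling
  }
}
output {
  stdout { codec => rubydebug }     # the old debug/debug_format flags were replaced by codecs
  elasticsearch {
    host => "localhost"
    port => 9200
    protocol => "http"
    index => "iislogs2"
  }
}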