Unknown setting 'protocol' for elasticsearch 5.1.1 - elasticsearch

I've been looking for a way to solve this issue all day, but everything I've found applies to older versions of Elasticsearch. FYI, I use the latest version of the ELK stack:
Elasticsearch version: 5.1.1
Kibana version: 5.1.1
Logstash version: 5.1.1
This is my apache.conf:
input {
  file {
    path => '/Applications/XAMPP/xamppfiles/logs/access_log'
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch { protocol => "http" }
}
That file is used to read the access log data from Apache. But when I run Logstash with:
logstash -f apache.conf
I get an error message telling me that something is wrong with my configuration. The http protocol setting doesn't exist anymore, I guess. Can you tell me how to fix it?
Many thanks.

There is no protocol setting in the elasticsearch output anymore. Simply modify your output to this:
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
}
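Putting that together with the rest of the question's config, the whole apache.conf would then look something like this (a sketch, assuming Elasticsearch is running locally on the default port 9200):
input {
  file {
    path => '/Applications/XAMPP/xamppfiles/logs/access_log'
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
}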

Related

How to extract service name from document field in Logstash

I am stuck in the middle of an ELK stack configuration; any lead would be highly appreciated.
Case study:
I am able to see the logs (parsed through Logstash without any filter), but I want to apply filters while parsing the logs. For example:
system.process.cmdline: "C:\example1\example.exe" -displayname "example.run" -servicename "example.run"
I can see the above logs in the Kibana dashboard, but I want only the -servicename key and its value. Expected output in Kibana, where servicename is the key and example.run is the associated value:
servicename "example.run"
I am a newbie with ELK, so please help me out.
My environment:
Elasticsearch- 6.6
Kibana- 6.6
Logstash- 6.6
Filebeat- 6.6
Metricbeat- 6.6
Logs coming from- Windows server 2016
input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    match => { "message" => "%{NOSPACE:hostname} " }
  }
}
output {
  file {
    path => "/var/log/logstash/out.log"
  }
}
I have tried the above Logstash pipeline, but I am not successful in getting the required result. I assume I have to add more lines to the filter, but I don't know what exactly.
Use this in your filter:
grok {
  match => { "message" => "%{GREEDYDATA:ignore}-servicename \"%{DATA:serviceName}\"" }
}
Your service name should now be in the serviceName key.
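Dropped into the question's pipeline, the filter section could look something like this (a sketch; the mutate block that removes the throwaway ignore field is optional, and if the command line actually arrives in system.process.cmdline rather than message, point the match at that field instead):
filter {
  grok {
    match => { "message" => "%{GREEDYDATA:ignore}-servicename \"%{DATA:serviceName}\"" }
  }
  mutate {
    # optional: drop the catch-all text captured before -servicename
    remove_field => [ "ignore" ]
  }
}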

Elasticsearch index not created from logstash indexer

I've set up a simple Elastic stack like so:
LS shipper => Kafka => LS indexer => ES => Kibana
I'm using all the latest versions (5.2.2-1). My indices are not being created on Elasticsearch, so I've checked at every level. I can see my logs coming all the way to the LS indexer:
[2017-03-14T16:08:01,360][DEBUG][logstash.pipeline ] output received {"event"=>{"#timestamp"=>2017-03-14T15:08:01.355Z, "#version"=>"1", "message"=>"{\"severity\":6,\"timestamp8601\":\"2017-03-14T16:08:01+01:00\",\"pid\":\"65156\",\"program\":\"CROND\",\"message\":\"(root) CMD (/home/unix/cron/iodisk >/dev/null 2>&1)||syslog source origin:not defined or not authorized|syslog source name:not defined or not authorized|syslog source env:not defined or not authorized|syslog source security level:0|syslog time received:2017-03-14T16:08:01.349084+01:00|syslog time reported:2017-03-14T16:08:01+01:00||\\n\",\"priority\":78,\"logsource\":\"VRHNDCPUPAPPPR1\",\"type\":\"system\",\"#timestamp\":\"2017-03-14T15:08:01.000Z\",\"#version\":\"1\",\"host\":\"10.64.1.202\",\"facility\":9,\"severity_label\":\"Informational\",\"source_indexer\":\"tcp.50050\",\"timestamp\":\"2017-03-14T16:08:01+01:00\",\"facility_label\":\"clock\"}"}}
Here is my indexer config file:
input {
  kafka {
    bootstrap_servers => "10.64.2.143:9092"
    group_id => "logstash indexer"
    topics => "system"
  }
}
output {
  if [type == "system"] {
    elasticsearch {
      codec => json
      hosts => [ "10.64.2.144:9200" ]
      index => "system"
    }
  }
}
Of course, I can't find any index named "system" in Kibana (screenshots: the Kibana index pattern configuration shows no index created).
I'm available for more info if someone is ready to help.
Thanks,
I suspect your conditional is wrong:
if [type == "system"] {
I suspect that should be:
if [type] == "system" {
That will probably work better.
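With that fix applied, the indexer's output section would read:
output {
  if [type] == "system" {
    elasticsearch {
      codec => json
      hosts => [ "10.64.2.144:9200" ]
      index => "system"
    }
  }
}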

How to generate reports on existing dump of logs using ELK?

Using the ELK stack, is it possible to generate reports on an existing dump of logs? For example, I have about 2 GB of Apache access logs and I want dashboard reports showing:
All requests with status code 400
All requests with a pattern like "GET http://example.com/abc/.*"
Any example links would be appreciated.
Yes, it is possible. You should:
Install and set up the ELK stack.
Install Filebeat and configure it to harvest your logs and forward the data to Logstash.
In Logstash, listen for the Filebeat input, use grok to process/break up your data, and forward it to Elasticsearch, something like:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{COMMONAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "filebeat-logstash-%{+YYYY.MM.dd}"
  }
}
In Kibana, set up your index patterns and query for the data, e.g.:
response: 400
verb: GET AND message: "http://example.com/abc/"
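For a one-off import of an existing dump, you could also skip Filebeat and point Logstash's file input straight at the log file. A minimal sketch, assuming the dump sits at /tmp/apache/access_log (a hypothetical path) and Elasticsearch runs locally:
input {
  file {
    path => "/tmp/apache/access_log"
    start_position => "beginning"   # read the file from the start, not just new lines
    sincedb_path => "/dev/null"     # don't remember progress, so reruns reprocess everything
  }
}
filter {
  grok {
    match => { "message" => "%{COMMONAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "apache-dump-%{+YYYY.MM.dd}"   # hypothetical index name
  }
}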

Update from Logstash to Elastic Search failed

I want to parse a simple logfile with Logstash and post the results to Elasticsearch. I've configured Logstash according to the Logstash documentation, but Logstash reports this error:
Attempted to send a bulk request to Elasticsearch configured at '["http://localhost:9200/"]',
but Elasticsearch appears to be unreachable or down!
{:client_config=>{:hosts=>["http://localhost:9200/"], :ssl=>nil,
:transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil,
:ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore,
:logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false,
:reload_on_failure=>false, :randomize_hosts=>false}, :error_message=>"Connection refused",
:level=>:error}
My configuration looks like this:
input { stdin {} }
filter {
  grok {
    match => { "message" => "%{NOTSPACE:demo}" }
  }
}
output {
  elasticsearch { hosts => "localhost:9200" }
}
Of course, Elasticsearch is available when calling http://localhost:9200/.
Versions: logstash-2.0.0, elasticsearch-2.0.0, on OS X.
I've found a thread with a similar issue, but that seems to be a bug in an older Logstash version.
I changed localhost to 127.0.0.1 (likely because localhost was resolving to the IPv6 address ::1 while Elasticsearch was only listening on the IPv4 loopback). This works:
output {
  elasticsearch { hosts => "127.0.0.1:9200" }
}

How to allow cross domain access in elasticsearch 2.0.0?

I'm trying to enable cross-domain access in Elasticsearch 2.0.0. The following is the output configuration in Logstash 2.0.0:
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "localhost"
    header => {"Access-Control-Allow-Origin": "true"}
    index => "testindex"
  }
}
However I am getting the following error:
Error: Expected one of #, => at line 28, column 42 (byte 500) after output {
stdout {
codec => rubydebug
}
elasticsearch {
hosts => "localhost"
header => {"Access-Control-Allow-Origin"
Could someone please tell me what I am doing wrong here?
Thanks.
PS: I think this is most likely a syntax error, because when I remove the header from the output everything else works fine.
You can try using * or the actual domain name from which you are making the request:
"Access-Control-Allow-Origin": "*" or
"Access-Control-Allow-Origin": "http://yourdomain.com"
Refer to:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS
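As an aside, the parse error itself ("Expected one of #, =>") comes from the colon: Logstash config hashes use => between key and value. Whether that version of the elasticsearch output actually honors a header option is a separate question (treat that as an assumption and check the plugin docs), but syntactically it would need to be written as:
header => { "Access-Control-Allow-Origin" => "*" }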
