Logstash Geoip does not output coordinates as expected - elasticsearch

I'm trying to set long and lat for the Kibana Bettermap using Geoip. I'm using Logstash 1.4.2 and Elasticsearch 1.1.1 and the following is my configuration file:
input {
  stdin { }
}
filter {
  geoip {
    source => "ip"
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
When I send the following example ip address:
"ip":"00.00.00.00"
The result is as follows:
{
       "message" => "\"ip\":\"00.00.00.00\"",
      "@version" => "1",
    "@timestamp" => "2014-10-20T22:23:12.334Z",
}
As you can see, no geoip coordinates, and nothing on my Kibana Bettermap. What can I do to get this Bettermap to work?

You aren't parsing the message... Either add codec => json to your stdin and send in {"ip":"8.8.8.8"} or use a grok filter to parse your input:
grok { match => ['message', '%{IP:ip}' ] }
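For intuition, grok's %{IP:ip} is essentially a named regex capture applied to the message. A rough Python sketch (the pattern here is a simplified IPv4 matcher, not grok's full IP pattern, which also covers IPv6):

```python
import re

# Simplified stand-in for grok's %{IP:ip} pattern
IP_RE = re.compile(r'(?P<ip>(?:\d{1,3}\.){3}\d{1,3})')

def extract_ip(message):
    """Return the first IPv4-looking token in the message, or None."""
    m = IP_RE.search(message)
    return m.group('ip') if m else None

print(extract_ip('"ip":"8.8.8.8"'))   # -> 8.8.8.8
print(extract_ip('no address here'))  # -> None
```

Without such a step (or a json codec), the event only ever has the raw "message" field, which is why geoip finds no "ip" field to look up.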

Related

Elasticsearch Logstash Filebeat mapping

I'm having a problem with the ELK Stack + Filebeat.
Filebeat is sending Apache-like logs to Logstash, which should parse the lines. Elasticsearch should store the split data in fields so I can visualize them using Kibana.
Problem:
Elasticsearch receives the logs but stores them in a single "message" field.
Desired solution:
Input:
10.0.0.1 some.hostname.at - [27/Jun/2017:23:59:59 +0200]
ES:
"ip":"10.0.0.1"
"hostname":"some.hostname.at"
"timestamp":"27/Jun/2017:23:59:59 +0200"
My logstash configuration:
input {
  beats {
    port => 5044
  }
}
filter {
  if [type] == "web-apache" {
    grok {
      patterns_dir => ["./patterns"]
      match => { "message" => "IP: %{IPV4:client_ip}, Hostname: %{HOSTNAME:hostname}, - \[timestamp: %{HTTPDATE:timestamp}\]" }
      break_on_match => false
      remove_field => [ "message" ]
    }
    date {
      locale => "en"
      timezone => "Europe/Vienna"
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
    useragent {
      source => "agent"
      prefix => "browser_"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "test1"
    document_type => "accessAPI"
  }
}
My Elasticsearch discover output:
I hope there are some ELK experts around who can help me.
Thank you in advance,
Matthias
The grok filter you stated will not work here.
Try using:
%{IPV4:client_ip} %{HOSTNAME:hostname} - \[%{HTTPDATE:timestamp}\]
There is no need to specify the desired names separately in front of the field names (you're not trying to format the message here, but to extract separate fields); stating the field name after the ':' inside the braces is enough to get the result you want.
Also, use the overwrite option instead of remove_field for message.
More information here:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#plugins-filters-grok-options
It will look similar to that in the end:
filter {
  grok {
    match => { "message" => "%{IPV4:client_ip} %{HOSTNAME:hostname} - \[%{HTTPDATE:timestamp}\]" }
    overwrite => [ "message" ]
  }
}
You can test grok filters here:
http://grokconstructor.appspot.com/do/match
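To see what the suggested pattern does, here is a rough Python equivalent; the regexes stand in for grok's IPV4, HOSTNAME, and HTTPDATE patterns and are simplified for illustration:

```python
import re

# Simplified stand-ins for grok's %{IPV4}, %{HOSTNAME} and %{HTTPDATE} patterns
LINE_RE = re.compile(
    r'(?P<client_ip>(?:\d{1,3}\.){3}\d{1,3}) '
    r'(?P<hostname>[A-Za-z0-9.-]+) - '
    r'\[(?P<timestamp>[^\]]+)\]'
)

def parse_access_line(line):
    """Split an apache-like line into the fields the grok filter would emit."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

fields = parse_access_line('10.0.0.1 some.hostname.at - [27/Jun/2017:23:59:59 +0200]')
# fields == {'client_ip': '10.0.0.1', 'hostname': 'some.hostname.at',
#            'timestamp': '27/Jun/2017:23:59:59 +0200'}
```

Note that, as in the answer, the pattern contains no literal "IP:" or "Hostname:" prefixes, because the raw log line doesn't contain them either.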

ELK - GROK Pattern for Winston logs

I have set up a local ELK stack. All works fine, but before trying to write my own grok pattern I wonder: is there already one for Winston-style logs? The standard patterns work great for Apache-style logs, but I need something that works for Winston. I think the json filter would do the trick, but I am not sure.
This is my Winston JSON:
{"level":"warn","message":"my message","timestamp":"2017-03-31T11:00:27.347Z"}
This is my Logstash configuration file example:
input {
  beats {
    port => "5043"
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}
For some reason it is not getting parsed. No error.
Try like this instead:
input {
  beats {
    port => "5043"
    codec => json
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}
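Either way (json filter or json codec), the Winston line is plain JSON, so decoding it yields the fields directly; a quick illustration in Python:

```python
import json

line = '{"level":"warn","message":"my message","timestamp":"2017-03-31T11:00:27.347Z"}'

# Decoding the Winston line produces the level/message/timestamp fields directly
event = json.loads(line)
print(event['level'])      # -> warn
print(event['timestamp'])  # -> 2017-03-31T11:00:27.347Z
```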

Parse url parameters in logstash

I'm sending data in a URL to Logstash 5.2 and I would like to parse it so that every URL parameter becomes a field in Logstash and I can visualize it properly in Kibana.
http://127.0.0.1:31311/?id=ID-XXXXXXXX&uid=1-37zbcuvs-izotbvbe&ev=pageload&ed=&v=1&dl=http://127.0.0.1/openpixel/&rl=&ts=1488314512294&de=windows-1252&sr=1600x900&vp=1600x303&cd=24&dt=&bn=Chrome%2056&md=false&ua=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_11_3)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/56.0.2924.87%20Safari/537.36&utm_source=&utm_medium=&utm_term=&utm_content=&utm_campaign=
This is my logstash conf file:
input {
  http {
    host => "127.0.0.1"
    port => 31311
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout {
    codec => rubydebug
  }
}
You could use the grok filter to match your params in your url as such:
filter {
  grok {
    match => [ "message", "%{URIPARAM:url}" ]
  }
}
And then you might have to use the kv filter in order to split your data:
kv {
  source => "url"
  field_split => "&"
}
This SO answer might come in handy. Hope this helps!
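For reference, the kv split on '&' corresponds to ordinary query-string parsing; a small Python sketch using the standard library:

```python
from urllib.parse import urlsplit, parse_qs

url = ('http://127.0.0.1:31311/?id=ID-XXXXXXXX&uid=1-37zbcuvs-izotbvbe'
       '&ev=pageload&v=1&ts=1488314512294')

# parse_qs splits the query string on '&' and '=', much like the kv filter
params = parse_qs(urlsplit(url).query)
print(params['ev'])  # -> ['pageload']
print(params['ts'])  # -> ['1488314512294']
```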

using logstash to get data out of elasticsearch to a csv file

I am new to Logstash and am struggling to get data out of Elasticsearch into a CSV file using Logstash.
To create some sample data, we can first add a basic csv into elasticsearch... the head of the sample csv can be seen below
$ head uu.csv
"hh","hh1","hh3","id"
-0.979646332669359,1.65186132910743,"L",1
-0.283939374784435,-0.44785377794233,"X",2
0.922659898930901,-1.11689020559612,"F",3
0.348918777124474,1.95766948269957,"U",4
0.52667811182958,0.0168862169880919,"Y",5
-0.804765331279075,-0.186456470768865,"I",6
0.11411203100637,-0.149340801708981,"Q",7
-0.952836952412902,-1.68807271639322,"Q",8
-0.373528919496876,0.750994450392907,"F",9
I then put that into logstash using the following...
$ cat uu.conf
input {
  stdin {}
}
filter {
  csv {
    columns => [
      "hh","hh1","hh3","id"
    ]
  }
  if [hh1] == "hh1" {
    drop { }
  } else {
    mutate {
      remove_field => [ "message", "host", "@timestamp", "@version" ]
    }
    mutate {
      convert => { "hh" => "float" }
      convert => { "hh1" => "float" }
      convert => { "hh3" => "string" }
      convert => { "id" => "integer" }
    }
  }
}
output {
  stdout { codec => dots }
  elasticsearch {
    index => "temp_index"
    document_type => "temp_doc"
    document_id => "%{id}"
  }
}
This is put into logstash with the following command....
$ cat uu.csv | logstash-2.1.3/bin/logstash -f uu.conf
Settings: Default filter workers: 16
Logstash startup completed
....................................................................................................Logstash shutdown completed
So far so good, but I would like to get some of the data out, in particular the hh and hh3 fields in temp_index.
I wrote the following to extract the data out of elasticsearch into a csv.
$ cat yy.conf
input {
  elasticsearch {
    hosts => "localhost:9200"
    index => "temp_index"
    query => "*"
  }
}
filter {
  elasticsearch {
    add_field => {"hh" => "%{hh}"}
    add_field => {"hh3" => "%{hh3}"}
  }
}
output {
  stdout { codec => dots }
  csv {
    fields => ['hh','hh3']
    path => '/home/username/yy.csv'
  }
}
But get the following error when trying to run logstash...
$ logstash-2.1.3/bin/logstash -f yy.conf
The error reported is:
Couldn't find any filter plugin named 'elasticsearch'. Are you sure this is correct? Trying to load the elasticsearch filter plugin resulted in this error: no such file to load -- logstash/filters/elasticsearch
What do I need to change to yy.conf so that a logstash command will extract the data out of elasticsearch and input into a new csv called yy.csv.
UPDATE
changing yy.conf to be the following...
$ cat yy.conf
input {
  elasticsearch {
    hosts => "localhost:9200"
    index => "temp_index"
    query => "*"
  }
}
filter {}
output {
  stdout { codec => dots }
  csv {
    fields => ['hh','hh3']
    path => '/home/username/yy.csv'
  }
}
I got the following error...
$ logstash-2.1.3/bin/logstash -f yy.conf
Settings: Default filter workers: 16
Logstash startup completed
A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Elasticsearch hosts=>["localhost:9200"], index=>"temp_index", query=>"*", codec=><LogStash::Codecs::JSON charset=>"UTF-8">, scan=>true, size=>1000, scroll=>"1m", docinfo=>false, docinfo_target=>"@metadata", docinfo_fields=>["_index", "_type", "_id"], ssl=>false>
Error: [400] {"error":{"root_cause":[{"type":"parse_exception","reason":"Failed to derive xcontent"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"init_scan","grouped":true,"failed_shards":[{"shard":0,"index":"temp_index","node":"zu3E6F7kQRWnDPY5L9zF-w","reason":{"type":"parse_exception","reason":"Failed to derive xcontent"}}]},"status":400} {:level=>:error}
A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Elasticsearch hosts=>["localhost:9200"], index=>"temp_index", query=>"*", codec=><LogStash::Codecs::JSON charset=>"UTF-8">, scan=>true, size=>1000, scroll=>"1m", docinfo=>false, docinfo_target=>"@metadata", docinfo_fields=>["_index", "_type", "_id"], ssl=>false>
Error: [400] {"error":{"root_cause":[{"type":"parse_exception","reason":"Failed to derive xcontent"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"init_scan","grouped":true,"failed_shards":[{"shard":0,"index":"temp_index","node":"zu3E6F7kQRWnDPY5L9zF-w","reason":{"type":"parse_exception","reason":"Failed to derive xcontent"}}]},"status":400} {:level=>:error}
A plugin had an unrecoverable error. Will restart this plugin.
Interestingly, if I change yy.conf to remove the elasticsearch {} wrapper so that it looks like...
$ cat yy.conf
input {
  elasticsearch {
    hosts => "localhost:9200"
    index => "temp_index"
    query => "*"
  }
}
filter {
  add_field => {"hh" => "%{hh}"}
  add_field => {"hh3" => "%{hh3}"}
}
output {
  stdout { codec => dots }
  csv {
    fields => ['hh','hh3']
    path => '/home/username/yy.csv'
  }
}
I get the following error...
$ logstash-2.1.3/bin/logstash -f yy.conf
Error: Expected one of #, { at line 10, column 19 (byte 134) after filter {
add_field
You may be interested in the '--configtest' flag which you can
use to validate logstash's configuration before you choose
to restart a running system.
Also, when changing yy.conf to something similar in order to take the error message into account:
$ cat yy.conf
input {
  elasticsearch {
    hosts => "localhost:9200"
    index => "temp_index"
    query => "*"
  }
}
filter {
  add_field {"hh" => "%{hh}"}
  add_field {"hh3" => "%{hh3}"}
}
output {
  stdout { codec => dots }
  csv {
    fields => ['hh','hh3']
    path => '/home/username/yy.csv'
  }
}
I get the following error...
$ logstash-2.1.3/bin/logstash -f yy.conf
The error reported is:
Couldn't find any filter plugin named 'add_field'. Are you sure this is correct? Trying to load the add_field filter plugin resulted in this error: no such file to load -- logstash/filters/add_field
* UPDATE 2 *
Thanks to Val I have made some progress and started to get output. But they don't seem correct. I get the following outputs when running the following commands...
$ cat uu.csv | logstash-2.1.3/bin/logstash -f uu.conf
Settings: Default filter workers: 16
Logstash startup completed
....................................................................................................Logstash shutdown completed
$ logstash-2.1.3/bin/logstash -f yy.conf
Settings: Default filter workers: 16
Logstash startup completed
....................................................................................................Logstash shutdown completed
$ head uu.csv
"hh","hh1","hh3","id"
-0.979646332669359,1.65186132910743,"L",1
-0.283939374784435,-0.44785377794233,"X",2
0.922659898930901,-1.11689020559612,"F",3
0.348918777124474,1.95766948269957,"U",4
0.52667811182958,0.0168862169880919,"Y",5
-0.804765331279075,-0.186456470768865,"I",6
0.11411203100637,-0.149340801708981,"Q",7
-0.952836952412902,-1.68807271639322,"Q",8
-0.373528919496876,0.750994450392907,"F",9
$ head yy.csv
-0.106007607975644E1,F
0.385395589205671E0,S
0.722392598488791E-1,Q
0.119773830827963E1,Q
-0.151090510772458E1,W
-0.74978830916084E0,G
-0.98888121700762E-1,M
0.965827615823707E0,S
-0.165311094671424E1,F
0.523818819076447E0,R
Any help would be much appreciated...
You don't need that elasticsearch filter; just specify the fields you want in your CSV in the csv output like you did and you should be fine. The fields you need are already contained in the event; you simply have to list them in the fields list of the csv output, as simple as that.
Concretely, your config file should look like this:
$ cat yy.conf
input {
  elasticsearch {
    hosts => "localhost:9200"
    index => "temp_index"
  }
}
filter {
}
output {
  stdout { codec => dots }
  csv {
    fields => ['hh','hh3']
    path => '/home/username/yy.csv'
  }
}
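On the follow-up about the output looking wrong: the values in yy.csv are ordinary floats printed in scientific (E) notation, and the elasticsearch input does not return documents in insertion order, so the rows won't necessarily line up with the head of uu.csv. A quick sanity check of the notation, sketched in Python:

```python
# Values like -0.106007607975644E1 are plain floats in scientific notation:
# the E1 / E-1 suffix just shifts the decimal point.
assert float("-0.106007607975644E1") == -1.06007607975644
assert float("0.722392598488791E-1") == 0.0722392598488791
```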

Changing the elasticsearch host in logstash 1.3.3 web interface

I followed the steps in this document and was able to get some reports on the Shakespeare data.
I want to do the same thing with a remotely installed Elasticsearch. I tried configuring the "host" in the config file, but the queries still run against localhost as opposed to the remote host. This is my config file:
input {
  stdin {
    type => "stdin-type"
  }
  file {
    type => "accessLog"
    path => [ "/Users/akushe/Downloads/requests.log" ]
  }
}
filter {
  grok {
    match => ["message","%{COMMONAPACHELOG} (?:%{INT:responseTime}|-)"]
  }
  kv {
    source => "request"
    field_split => "&?"
  }
  if [lng] {
    kv {
      add_field => [ "location" , ["%{lng}","%{lat}"]]
    }
  } else if [lon] {
    kv {
      add_field => [ "location" , ["%{lon}","%{lat}"]]
    }
  }
}
output {
  elasticsearch {
    host => "slc-places-qa-es3001.slc.where.com"
    port => 9200
  }
}
You need to add protocol => http to the elasticsearch output to make it use the HTTP transport rather than joining the cluster using multicast.
