Logstash "Expected one of #" error - what is wrong - elasticsearch

I want to send data from the Kafka topic "test" to the Elasticsearch index "twitter" via Logstash, but my config doesn't work.
The error is:
reason=>"Expected one of #, => at line 1, column 101 (byte 101) after
input { kafka { bootstrap_servers=>\"localhost:9092\"
topics=>\"test\"} filter{} output{ elasticsearch "}
My config is:
input { kafka { bootstrap_servers=>"localhost:9092" topics=>"test"} filter{} output{ elasticsearch {hosts=>["127.0.0.1:9200"]}}

Seems like you're missing a closing bracket in your input:
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => "test"
  }
} <---- this bracket was missing in yours
filter {}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
  }
}
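For completeness, a full pipeline that also writes to the "twitter" index mentioned in the question could look like this (a minimal sketch, assuming Kafka on localhost:9092 and Elasticsearch on 127.0.0.1:9200):
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["test"]          # the kafka input takes an array of topics
  }
}
filter {}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "twitter"          # target index from the question
  }
}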

Related

Can we send data from different URLs into different indices in Elasticsearch using one single Logstash config file

Using http_poller as the input plugin and Elasticsearch as the output plugin, can we send data from different URLs into different indices in Elasticsearch using one single Logstash config file?
Yes, you can do it using if conditionals in the output section:
input {
  http_poller {
    ...
    tags => ["source1"]
  }
  http_poller {
    ...
    tags => ["source2"]
  }
}
filter {
  ...
}
output {
  if "source1" in [tags] {
    elasticsearch {
      ...
      index => "index1"
    }
  }
  else if "source2" in [tags] {
    elasticsearch {
      ...
      index => "index2"
    }
  }
}
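Filled in, the two http_poller inputs might look roughly like this; the URLs and the 60s schedule below are placeholders, not values from the question:
input {
  http_poller {
    # hypothetical endpoint for the first source
    urls => { "endpoint1" => "http://example.com/api/one" }
    schedule => { "every" => "60s" }
    tags => ["source1"]
  }
  http_poller {
    # hypothetical endpoint for the second source
    urls => { "endpoint2" => "http://example.com/api/two" }
    schedule => { "every" => "60s" }
    tags => ["source2"]
  }
}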

Using conditionals in Logstash pipeline configuration

I am trying to use Logstash conditionals in the context of pipeline output configuration.
Based on the presence of device field in the payload I'd like to forward the event to the appropriate index name in Elasticsearch:
output {
  elasticsearch {
    hosts => ["10.1.1.5:9200"]
    if [device] ~= \.* {
      index => "%{[device][0]}-%{+YYYY.ww}"
    } else {
      index => "%{[beat][name]}-%{+YYYY.ww}"
    }
  }
}
The above code fails with the following message in the log, indicating a syntax error:
...
"Expected one of #, => at line 14, column 12 (byte 326) after output {\n elasticsearch {\n hosts => [\"10.1.1.5:9200\"]\n if "
...
Can someone please advise?
You should place the conditional around the elasticsearch outputs, not inside the plugin block. Note also that Logstash's regex comparison operator is =~ with a /.../ pattern:
output {
  if [device] =~ /.+/ {
    elasticsearch {
      hosts => ["10.1.1.5:9200"]
      index => "%{[device][0]}-%{+YYYY.ww}"
    }
  } else {
    elasticsearch {
      hosts => ["10.1.1.5:9200"]
      index => "%{[beat][name]}-%{+YYYY.ww}"
    }
  }
}
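Since the question only needs to branch on whether the device field exists, a plain field-presence check is an alternative to the regex; a sketch of the same output section:
output {
  if [device] {
    # device field is present
    elasticsearch {
      hosts => ["10.1.1.5:9200"]
      index => "%{[device][0]}-%{+YYYY.ww}"
    }
  } else {
    # device field is missing
    elasticsearch {
      hosts => ["10.1.1.5:9200"]
      index => "%{[beat][name]}-%{+YYYY.ww}"
    }
  }
}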

Uncooperative ELK Docker Instance

I have ELK 5.5.1 running in a Docker container, and it'll parse most of my logs, except for ones that originate from my Spring application. Kinda running out of ideas.
I've traced it down to the logstash->elasticsearch pipeline. Filebeat is doing its job, and Logstash is receiving logs from the application in question, based on tailing Logstash's stdout log.
I wiped the docker volume that stores my ELK data clean, and started fresh with filebeat just forwarding the logs in question.
Take a log line like this:
FINEST|8384/0|Service tsoft_spring|17-08-31 14:12:01|2017-08-31 14:12:01.260 INFO 8384 --- [ taskExecutor-2] c.t.s.c.s.a.ConfirmationService : Will not persist empty response notes
Using a very minimal logstash configuration, it'll wind up being persisted in elasticsearch:
input {
  beats {
    port => 5044
    ssl => false
  }
}
filter {
  if [message] =~ /tsoft_spring/ {
    grok {
      match => [ "message", "%{GREEDYDATA:logmessage}" ]
    }
  }
}
output {
  stdout { }
  elasticsearch { hosts => ["localhost:9200"] }
}
Using a more complete configuration, the log is just ignored by Elasticsearch, with no _grokparsefailure and no _dateparsefailure:
input {
  beats {
    port => 5044
    ssl => false
  }
}
filter {
  if [message] =~ /tsoft_spring/ {
    grok {
      match => [ "message", "%{WORD}\|%{NUMBER}/%{NUMBER}\|%{WORD}%{SPACE}%{WORD}\|%{TIMESTAMP_ISO8601:timestamp}\|%{TIMESTAMP_ISO8601}%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}%{NUMBER:pid}%{SPACE}---%{SPACE}%{SYSLOG5424SD:threadname}%{SPACE}%{JAVACLASS:classname}%{SPACE}:%{SPACE}%{GREEDYDATA:logmessage}" ]
    }
    date {
      match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss" ]
    }
  }
}
output {
  stdout { }
  elasticsearch { hosts => ["localhost:9200"] }
}
I've checked that this pattern will parse that line, using http://grokconstructor.appspot.com/do/match#result, and I could've sworn it was working last weekend, but could be my imagination.
Maybe the problem here is not in your grok filter, but in the date match. The resulting year is 0017 instead of 2017. Maybe that's why you can't find the event in ES? Can you try this:
date {
  match => [ "timestamp" , "yy-MM-dd HH:mm:ss" ]
}
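The date filter also accepts several formats in one match array, so a sketch that tolerates both two- and four-digit years would be:
date {
  # formats are tried in order until one parses
  match => [ "timestamp", "yy-MM-dd HH:mm:ss", "yyyy-MM-dd HH:mm:ss" ]
}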

How to send different logstash events to different outputs

There are many fields that are extracted from the message field in the logstash filter section, like below:
match => ["message", "%{type1:f1} %{type2:f2} %{type3:f3}"]
The purpose is to send f1, f2, f3 to one output and only f1 and f3 to the other output plugin, such that:
output {
  elasticsearch {
    action => "index"
    hosts => "localhost"
    index => "indx1-%{+YYYY-MM}"
    .
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => "localhost"
    index => "indx2-%{+YYYY-MM}"
  }
}
The problem is that all events are involved in every output plugin, but I want to control which event goes to which output plugin. Is it possible to do this?
I found a solution by using filebeat to forward data to logstash.
If running two instances of filebeat and one instance of logstash, each filebeat forwards input data to the same logstash but with a different type, like:
document_type: type1
In logstash, the appropriate filter and output is executed using an if clause:
filter {
  if [type] == "type1" {
  }
  else {
  }
}
output {
  if [type] == "type1" {
    elasticsearch {
      action => "index"
      hosts => "localhost"
      index => "%{type}-%{+YYYY.MM}"
    }
  }
  else {
    elasticsearch {
      action => "index"
      hosts => "localhost"
      index => "%{type}-%{+YYYY.MM}"
    }
  }
}
If you have two distinct matching patterns in the "filter" section, then you can add specific "tags" for each match. Then in the output section use something like this:
if "matchtype1" in [tags] {
elasticsearch {
hosts => "localhost"
index => "indxtype1-%{+YYYY.MM}"
}
}
if "matchtype2" in [tags]{
elasticsearch {
hosts => "localhost"
index => "indxtype2-%{+YYYY.MM}"
}
}
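The matching filter side could look like the sketch below; the two grok patterns are placeholders for your real ones, and add_tag fires only when its pattern matches:
filter {
  grok {
    # placeholder pattern for the first event shape
    match => { "message" => "%{WORD:f1} %{NUMBER:f2} %{GREEDYDATA:f3}" }
    add_tag => ["matchtype1"]
  }
  grok {
    # placeholder pattern for the second event shape
    match => { "message" => "%{IP:f1} %{GREEDYDATA:f3}" }
    add_tag => ["matchtype2"]
  }
}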

Parse url parameters in logstash

I'm sending data in a url to logstash 5.2 and I would like to parse it in logstash, so every url parameter becomes a variable in logstash and I can visualize it properly in kibana.
http://127.0.0.1:31311/?id=ID-XXXXXXXX&uid=1-37zbcuvs-izotbvbe&ev=pageload&ed=&v=1&dl=http://127.0.0.1/openpixel/&rl=&ts=1488314512294&de=windows-1252&sr=1600x900&vp=1600x303&cd=24&dt=&bn=Chrome%2056&md=false&ua=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_11_3)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/56.0.2924.87%20Safari/537.36&utm_source=&utm_medium=&utm_term=&utm_content=&utm_campaign=
This is my logstash conf file:
input {
  http {
    host => "127.0.0.1"
    port => 31311
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout {
    codec => rubydebug
  }
}
You could use the grok filter to match the params in your URL, as such:
filter {
  grok {
    match => [ "message", "%{URIPARAM:url}" ]
  }
}
And then you might have to use the kv filter in order to split your data:
kv {
  source => "url"
  field_split => "&"
}
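Putting the two snippets together, the whole filter section would look roughly like this (assuming, as above, that the raw query string arrives in the message field):
filter {
  grok {
    # capture the query-string portion of the request
    match => [ "message", "%{URIPARAM:url}" ]
  }
  kv {
    # split ?id=...&uid=...&ev=... into individual fields
    source => "url"
    field_split => "&"
  }
}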
This SO answer might come in handy. Hope this helps!
