Kibana displaying JSON incorrectly

I'm using ELK (Elasticsearch, Logstash, Kibana) for logging purposes. The problem is that Kibana doesn't seem to recognize my JSON: it puts the whole JSON payload inside the message field.
Here's how I run Logstash:
bin/logstash -e 'input { udp { port => 5000 type => json_logger } }
output { stdout { } elasticsearch { host => localhost } }'
Here's an example Logstash output for my logs (for debugging purposes I also output logs to stdout):
2014-10-07T10:28:19.104+0000 127.0.0.1
{"user_id":1,"object_id":6,"#timestamp":"2014-10-07T13:28:19.101+03:00","#version":"1","severity":"INFO","host":"sergey-System"}
How do I make Elasticsearch/Kibana/Logstash recognize JSON?

Try bin/logstash -e 'input { udp { port => 5000 type => json_logger codec => json} } output { stdout { } elasticsearch { host => localhost } }'.
Note the codec => json option.
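For readability, here is the same pipeline written as a config file rather than a one-liner; a minimal sketch, with the file name (json_logger.conf) being illustrative:
# json_logger.conf - same pipeline as the one-liner above
input {
  udp {
    port  => 5000
    type  => json_logger
    codec => json   # parse each UDP datagram as JSON instead of leaving the raw text in "message"
  }
}
output {
  stdout { }
  elasticsearch { host => localhost }
}
You can then run it with bin/logstash -f json_logger.conf.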

Related

Logstash UDP issue

I have a simple Logstash config where I am routing (shipping) logs to a secondary Logstash based on the Beat host name; below are the primary and secondary Logstash configs. I masked the IPs here, but the primary Logstash is not shipping any logs to the secondary.
1) Does the if condition in the primary Logstash need to use [host] or [agent][hostname]?
2) On the secondary, do I need to listen on UDP or TCP?
I checked the netstat output on the primary and the secondary. It shows no results on the primary node, but does show results on the secondary node.
sudo netstat -ulpat | grep 8011
My main Logstash config:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://<ip1>:9200","http://<ip2>:9200","http://<ip3>:9200"]
  }
  # stdout { codec => rubydebug }
}
output {
  if [host] =~ "moaf-iws" {
    udp {
      host => "XX.XXX.XXX.XXX"
      port => "8011"
    }
  }
}
Secondary Logstash config:
input {
  beats {
    port => 5044
  }
}
input {
  udp {
    port => 8011
  }
}
output {
  elasticsearch {
    hosts => ["http://<ip1>:9200","http://<ip2>:9200"]
  }
  stdout { codec => rubydebug }
}
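For reference, a minimal sketch of how the conditional forward and the matching listener might be written. The assumptions here are not taken from the question: a recent Beats agent, where the host name is usually found under [agent][hostname] (or [host][name]) rather than as a flat [host] string, and an explicit json codec on both sides of the UDP hop so the forwarded events are parsed again on the secondary.
# Sketch only - primary: forward matching events over UDP
output {
  if [agent][hostname] =~ "moaf-iws" {   # or [host][name], depending on the Beats version
    udp {
      host  => "XX.XXX.XXX.XXX"
      port  => 8011
      codec => json                      # send each event as one JSON datagram
    }
  }
}
# Sketch only - secondary: a udp output needs a udp listener, not tcp
input {
  udp {
    port  => 8011
    codec => json                        # parse the forwarded JSON back into event fields
  }
}
Note that netstat will not show port 8011 on the primary: the udp output sends to that port, it does not listen on it.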

How to send data from a Java application to Logstash?

I want to send data from a Java application to Logstash.
Sending it with curl works, but sending it with Java's RestTemplate does not.
curl example (works):
$ curl -XPOST -H "Content-Type: application/x-ndjson" "http://10.97.8.151:18080" --data-binary @data.txt
data.txt
{"index":{"_index": "myIndex","_type":"myType"}}
{"data1":"value1","data2":"value2","data3":"value3"}
This works well. However, Java RestTemplate did not work; the request looked as follows.
Everything else is identical; only the form of the data is different:
{"index":{"_index": "myIndex","_type":"myType"}}\n{"data1":"value1","data2":"value2","data3":"value3"}
I have tried setting the content type to "application/x-ndjson", and I don't know where it went wrong.
How do I transfer data from a Java application to Logstash?
The Logstash config file is as follows:
input {
  http {
    host => "0.0.0.0"
    port => "12345"
    codec => es_bulk {
    }
  }
}
output {
  elasticsearch {
    hosts => "x.x.x.x:yyyy"
    index => "%{[@metadata][_index]}"
    document_type => "%{[@metadata][_type]}"
    template_name => "api"
  }
  stdout { codec => rubydebug { metadata => true } }
}
Please help.
The best way to do this is not to use the es_bulk codec in your http input, but simply let your events be plain text messages.
Then, instead of using an elasticsearch output, you can use the http output like this:
input {
  http {
    host => "0.0.0.0"
    port => "12345"
    codec => "plain"
  }
}
output {
  http {
    http_method => "post"
    url => "http://x.x.x.x:yyyy/_bulk"
    format => "message"
    message => "%{message}"
  }
}
The big plus of this method is that your Logstash pipeline doesn't have to parse the "bulk" input just to recreate the same "bulk" output. In this case, you're simply using Logstash as a "pass-through". I don't see the value of this, though; you could have your Java application send the bulk payload directly to ES.

How can I create an index in Elasticsearch using the TCP input?

I have configured Logstash 5.5 to use the TCP input to receive JSON messages.
input {
  tcp {
    port => 9001
    codec => json
    type => "test-tcp-1"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
  }
}
filter {
  json { source => "message" }
}
The message is received by Logstash successfully, but Elasticsearch does not create an index. Why?
If I use the same configuration with the stdin input plugin, it works fine.
Many thanks.
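No answer was posted here, but two things stand out. With codec => json already set on the tcp input, the extra json filter on message is usually redundant, because the codec has already parsed the payload. And for a long-lived TCP stream, the json_lines codec (one JSON document per line) is generally the safer choice than json. A minimal sketch along those lines, assuming the sender terminates each JSON document with a newline:
# Sketch only - assumes newline-delimited JSON over TCP
input {
  tcp {
    port  => 9001
    codec => json_lines            # one JSON document per line
    type  => "test-tcp-1"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }    # confirm events are actually being emitted
}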

ELK - GROK Pattern for Winston logs

I have set up a local ELK stack. All works fine, but before trying to write my own grok pattern I wonder: is there already one for Winston-style logs?
That works great for Apache-style logs.
I would need something that works for Winston-style logs. I think the JSON filter would do the trick, but I am not sure.
This is my Winston JSON:
{"level":"warn","message":"my message","timestamp":"2017-03-31T11:00:27.347Z"}
This is my Logstash configuration file example:
input {
  beats {
    port => "5043"
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}
For some reason it is not getting parsed. No error.
Try like this instead:
input {
  beats {
    port => "5043"
    codec => json
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}

ELK Elasticsearch Logstash configuration

I'm new to ELK. I have already installed Logstash, Elasticsearch, and Kibana on Ubuntu 14.04. When I try to test ELK with an existing log file on my Ubuntu machine, Logstash doesn't load the log into Elasticsearch and shows nothing. This is my Logstash config file (sudo gedit /etc/logstash/conf.d/logstash.conf):
input {
  file {
    path => "/home/chayma/logs/catalina.2016-02-02.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{COMMONAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => [ "127.0.0.1:9200" ]
  }
  stdout {
    codec => rubydebug
  }
}
However, my elasticsearch.yml contains:
cluster.name: my-application
node.name: node-1
node.master: true
node.data: true
index.number_of_shards: 1
index.number_of_replicas: 0
network.host: localhost
http.port: 9200
Please help
I presume Logstash and Elasticsearch are installed on the same machine and that Logstash is running?
sudo service logstash status
Try checking the Logstash log file to see if it's a connection issue or a syntax error (config looks OK, so probably the former):
tail -f /var/log/logstash/logstash.log
Does COMMONAPACHELOG match the log pattern that you are trying to parse with grok?
By default, on Ubuntu 14.04, the patterns live at
/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.5/patterns/grok-patterns
You can verify this here:
https://grokdebug.herokuapp.com/
The grok in our case is applying the following regex:
COMMONAPACHELOG %{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
Please provide the log entries.
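As a quick way to check this, a minimal sketch of a filter/output pair that makes grok mismatches visible; grok tags events it cannot parse with _grokparsefailure by default, so routing those events to stdout shows whether COMMONAPACHELOG actually fits the catalina lines (the elasticsearch output is the same as in the question):
# Sketch only - surface events that COMMONAPACHELOG failed to parse
filter {
  grok {
    match => { "message" => "%{COMMONAPACHELOG}" }
    # on failure, grok adds the "_grokparsefailure" tag by default
  }
}
output {
  if "_grokparsefailure" in [tags] {
    stdout { codec => rubydebug }              # unparsed catalina lines show up here
  } else {
    elasticsearch { hosts => [ "127.0.0.1:9200" ] }
  }
}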
Change your elasticsearch output by adding an index name to it and try:
output {
  elasticsearch {
    hosts => [ "127.0.0.1:9200" ]
    index => "testindex-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}
You're missing input {}. input {} and output {} are necessary in a Logstash pipeline.
input {
  file {
    path => "/home/chayma/logs/catalina.2016-02-02.log"
    start_position => "beginning"
  }
}
Or you can check in a simple way whether text can be forwarded to Elasticsearch:
just test using stdin and stdout in the terminal. Make sure the local Elasticsearch service is running.
input {
  stdin {
    type => "true"
  }
}
filter {
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
  stdout {
    codec => rubydebug
  }
}
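To run this test (a sketch, assuming the snippet above is saved as test.conf):
bin/logstash -f test.conf
Type a line and press Enter; the rubydebug output should echo the event, and it should then appear in Elasticsearch under the default logstash-* index for these Logstash versions.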
