elasticsearch message field value - elasticsearch

I am sending JSON messages to Logstash, getting them indexed by Elasticsearch, and I have managed to set up the dashboard in Kibana. I would like to filter the data by the fields inside the message but cannot figure out how or where to do this. An example of my message:
{"message":"{"pubDate":"2014-02-25T13:09:14",
"scrapeDate":"2014-02-5T13:09:26",
"Id":"78967",
"query":"samsung S5",
"lang":"en"}
Right now it counts all the messages coming in, but I need to be able to filter by the fields themselves, for example Id, lang, or query.
Does this have to be done in the config file, or can it be done in the Kibana interface?

First, I assume your JSON message is:
{
  "pubDate": "2014-02-25T13:09:14",
  "scrapeDate": "2014-02-5T13:09:26",
  "Id": "78967",
  "query": "samsung S5",
  "lang": "en"
}
When you send your message to Logstash, you need to set the codec to json, as shown in the configuration below:
input {
  stdin {
    codec => json
  }
}
output {
  elasticsearch {
    cluster => "abc"
  }
}
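The stdin input above is just for illustration; the same codec option works on whichever input you actually use to ship the messages. A sketch with a tcp input (the port number here is an assumption):
input {
  tcp {
    port  => 5000   # hypothetical port
    codec => json
  }
}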
Logstash will parse your message into separate fields, producing output like this:
{
  "pubDate" => "2014-02-25T13:09:14",
  "scrapeDate" => "2014-02-5T13:09:26",
  "Id" => "78967",
  "query" => "samsung S5",
  "lang" => "en",
  "@version" => "1",
  "@timestamp" => "2014-02-26T01:36:15.336Z",
  "host" => "AAAAAAAAAA"
}
When you view this data in Kibana, you can use fieldname:value to query and filter on what you need. For example, you can query all messages with lang:en.
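A few illustrative queries for the Kibana query bar (Lucene query string syntax), using the field names from the parsed message above:
lang:en
lang:en AND query:"samsung S5"
Id:78967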

Related

update elastic-search document with the same ID

Hi everyone. I'm new to ELK and I have a question about Logstash.
I have some services and each one produces 4 or 6 logs, which means a doc in Elasticsearch may contain 4 or 6 logs.
I want to read these logs and, if they have the same id, put them into one Elasticsearch doc.
I should point out that every log has an "id"; each request and every log that refers to that request share the same id, and each log has a specific type.
I want to group together every log that has the same id and type, like this:
{
  "_id": "123",
  "Type1": {},
  "Type2": [{}, {}],
  "Type3": [{}, {}],
  "Type4": {}
}
Every log for the same request: some of them must end up in the same group because their types are the same. See the example above: Type2 is a JSON array holding 2 JSON objects. I want to use Logstash to read every log and have them classified.
Imagine that our doc currently looks like the JSON below:
{
  "_id": "123",
  "Type1": {},
  "Type2": [{}, {}],
  "Type3": {}
}
Now a new log arrives with id 123 and type Type4. The doc must be updated like this:
{
  "_id": "123",
  "Type1": {},
  "Type2": [{}, {}],
  "Type3": {},
  "Type4": {}
}
Again, I get a new log with id 123 and type Type3. The doc updates like this:
{
  "_id": "123",
  "Type1": {},
  "Type2": [{}, {}],
  "Type3": [{}, {}],
  "Type4": {}
}
I tried with a script, but I didn't succeed. An incoming log looks like this:
{
  "id": 1,
  "Type2": {}
}
The script is:
input {
  stdin {
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => ["XXX.XXX.XXX.XXX:9200"]
    index => "ss"
    document_id => "%{requestId}"
    action => "update"   # update if possible instead of overwriting
    document_type => "_doc"
    script_lang => "painless"
    scripted_upsert => true
    script_type => "inline"
    script => 'if (ctx._source.Type3 == null) { ctx._source.Type3 = new ArrayList() } if(!ctx._source.Type3.contains("%{Type3}")) { ctx._source.Type3.add("%{Type3}")}'
  }
}
Now my problem is that this script only handles one type; if it were to work for multiple types, what would it look like?
There is one more problem: some logs don't have an id, or they have an id but no type. I still want those logs in Elasticsearch; what should I do?
You can have a look at the aggregate filter plugin for Logstash. Or, as you mentioned, if some of the logs don't have an id, you can use the fingerprint filter plugin to create one, which you can then use as the document id when updating the document in Elasticsearch.
E.g.:
input {
  stdin {
    codec => json_lines
  }
}
filter {
  fingerprint {
    source => "message"
    target => "[@metadata][id]"
    method => "MURMUR3"
  }
}
output {
  elasticsearch {
    hosts => ["XXX.XXX.XXX.XXX:9200"]
    index => "ss"
    document_id => "%{[@metadata][id]}"
    action => "update"   # update if possible instead of overwriting
  }
}
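For the grouping itself, here is a minimal sketch of the aggregate filter approach (the field names requestId and type are assumptions taken from the question, and the timeout is an arbitrary example value):
filter {
  aggregate {
    task_id => "%{requestId}"
    code => "
      map['requestId'] ||= event.get('requestId')
      map['types'] ||= {}
      (map['types'][event.get('type')] ||= []) << event.get('message')
      event.cancel()   # drop the individual log line; only the aggregated map is emitted
    "
    push_map_as_event_on_timeout => true
    timeout_task_id_field => "requestId"
    timeout => 120   # seconds to wait for further logs of the same request
  }
}
The aggregated event can then be sent to the elasticsearch output with document_id => "%{requestId}" so that all logs of one request land in one document.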

Duplicate field values for grok-parsed data

I have a Filebeat instance that captures logs from a uWSGI application running in Docker. The data is sent to Logstash, which parses it and forwards it to Elasticsearch.
Here is the Logstash conf file:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "log" => "\[pid: %{NUMBER:worker.pid}\] %{IP:request.ip} \{%{NUMBER:request.vars} vars in %{NUMBER:request.size} bytes} \[%{HTTPDATE:timestamp}] %{URIPROTO:request.method} %{URIPATH:request.endpoint}%{URIPARAM:request.params}? => generated %{NUMBER:response.size} bytes in %{NUMBER:response.time} msecs(?: via sendfile\(\))? \(HTTP/%{NUMBER:request.http_version} %{NUMBER:response.code}\) %{NUMBER:headers} headers in %{NUMBER:response.size} bytes \(%{NUMBER:worker.switches} switches on core %{NUMBER:worker.core}\)" }
  }
  date {
    # 29/Oct/2018:06:50:38 +0700
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  kv {
    source => "request.params"
    field_split => "&?"
    target => "request.query"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "test-index"
  }
}
Everything was fine, but I've noticed that all values captured by the grok pattern are duplicated. Here is how it looks in Kibana: each grok-extracted field shows its value twice.
Note that raw fields like log, which are not grok output, are fine. I've seen that the kv filter has an allow_duplicate_values parameter, but it doesn't apply to grok.
What is wrong with my configuration? Also, is it possible to rerun grok patterns on existing data in Elasticsearch?
Maybe your Filebeat is already doing the job and creating these fields itself.
Did you try adding this parameter to your grok filter?
overwrite => [ "request.ip", "request.endpoint", ... ]
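In context that would look roughly like this (a deliberately shortened pattern for illustration only; list every field your real pattern captures in the overwrite array):
filter {
  grok {
    match => { "log" => "\[pid: %{NUMBER:worker.pid}\] %{IP:request.ip} %{GREEDYDATA:rest}" }   # illustrative, shortened pattern
    overwrite => [ "worker.pid", "request.ip" ]   # replace the values if the fields already exist on the event
  }
}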
To rerun grok on already-indexed data, you need to use the elasticsearch input plugin to read the data back from ES and re-index it after the grok filter.
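A minimal sketch of such a re-indexing pipeline, assuming a local cluster and a new target index name (both placeholders); depending on your Logstash version the docinfo metadata may live under a slightly different @metadata path:
input {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "test-index"
    query => '{ "query": { "match_all": {} } }'
    docinfo => true   # exposes _index, _type and _id under [@metadata]
  }
}
filter {
  # put the same grok/date/kv filters as in the original pipeline here,
  # with overwrite set on the grok filter
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "test-index-regrokked"        # placeholder target index
    document_id => "%{[@metadata][_id]}"   # keep the original document ids
  }
}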

CSV response from get request in elastic search

I am sending an HTTP GET request to an Elasticsearch server and I want the response to be in CSV format. In Solr we can specify wt=csv; is there any way to do this in Elasticsearch too?
My query is :
http://elasticServer/_search?q=RCE:"some date" OR VENDOR_NAME:"Anuj"&from=0&size=5&sort=@timestamp
After that I want to force the server to return the response in CSV format.
By default, ES supports only two data formats: JSON and YAML. However, if you're open to using Logstash, you can achieve what you want very easily like this:
input {
  elasticsearch {
    hosts => ["localhost:9200"]
    query => 'RCE:"some date" OR VENDOR_NAME:"Anuj"'
    size => 5
  }
}
filter {}
output {
  csv {
    fields => ["field1", "field2", "field3"]
    path => "/path/to/data.csv"
  }
}
Since the elasticsearch input uses scrolling, you cannot specify any sorting. So if sorting is really important to you, you can use the http_poller input instead of the elasticsearch one, like this:
input {
  http_poller {
    urls => {
      es => {
        method => get
        url => 'http://elasticServer/_search?q=RCE:"some date" OR VENDOR_NAME:"Anuj"&from=0&size=5&sort=@timestamp'
        headers => {
          Accept => "application/json"
        }
      }
    }
    codec => "json"
  }
}
filter {}
output {
  csv {
    fields => ["field1", "field2", "field3"]
    path => "/path/to/data.csv"
  }
}
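Note that with http_poller the whole search response arrives as one event, so you will probably also need a split filter before the csv output can address per-document fields. A sketch, assuming the standard _search response shape:
filter {
  split {
    field => "[hits][hits]"   # one event per hit
  }
  # after the split, document fields live under [hits][hits][_source][...]
}
The csv fields would then reference paths like [hits][hits][_source][field1].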
There is an Elasticsearch plugin on GitHub called Elasticsearch Data Format Plugin that should satisfy your requirements.

Logstash Filtering and Parsing Dies Output

Environment
Ubuntu 16.04
Logstash 5.2.1
ElasticSearch 5.1
I've configured our Deis platform to send logs to our Logstash node with no issues. However, I'm still new to Ruby and regexes are not my strong suit.
Log Example:
2017-02-15T14:55:24UTC deis-logspout[1]: 2017/02/15 14:55:24 routing all to udp://x.x.x.x:xxxx\n
Logstash Configuration:
input {
  tcp {
    port => 5000
    type => syslog
    codec => plain
  }
  udp {
    port => 5000
    type => syslog
    codec => plain
  }
}
filter {
  json {
    source => "syslog_message"
  }
}
output {
  elasticsearch { hosts => ["foo.somehost"] }
}
Elasticsearch output:
"#timestamp" => 2017-02-15T14:55:24.408Z,
"#version" => "1",
"host" => "x.x.x.x",
"message" => "2017-02-15T14:55:24UTC deis-logspout[1]: 2017/02/15 14:55:24 routing all to udp://x.x.x.x:xxxx\n",
"type" => "json"
Desired outcome:
"#timestamp" => 2017-02-15T14:55:24.408Z,
"#version" => "1",
"host" => "x.x.x.x",
"type" => "json"
"container" => "deis-logspout"
"severity level" => "Info"
"message" => "routing all to udp://x.x.x.x:xxxx\n"
How can I extract this information from the message into individual fields?
Unfortunately your assumptions about what you are trying to do are slightly off, but we can fix that!
You set up a filter for JSON, but you are not parsing JSON. You are simply parsing a log that is bastardized syslog (see syslogStreamer in the source), but is not in fact syslog format (either RFC 5424 or 3164). Logstash afterwards provides JSON output.
Let's break down the message, which becomes the source that you parse. The key is that you have to parse the message from front to back.
Message:
2017-02-15T14:55:24UTC deis-logspout[1]: 2017/02/15 14:55:24 routing all to udp://x.x.x.x:xxxx\n
2017-02-15T14:55:24UTC: Timestamp is a common grok pattern. This mostly follows TIMESTAMP_ISO8601 but not quite.
deis-logspout[1]: This would be your logsource, which you can name container. You can use the grok pattern URIHOST.
routing all to udp://x.x.x.x:xxxx\n: Since the message for most logs is contained at the end of the message, you can just then use the grok pattern GREEDYDATA which is the equivalent of .* in a regular expression.
2017/02/15 14:55:24: Another timestamp (why?) that doesn't match common grok patterns.
With grok filters, you can map a syntax (an abstraction over regular expressions) to a semantic (a name for the value that you extract), for example %{URIHOST:container}.
You'll see I did some hacking together of the grok patterns to make the formatting work. You have to match parts of the text even if you don't intend to capture the results. If you can't change the formatting of the timestamps to match standards, create a custom pattern.
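For instance, a custom pattern can be declared inline through the grok filter's pattern_definitions option. A sketch only: the pattern name DEIS_TIMESTAMP is made up, and here the timezone suffix ends up inside the captured value:
filter {
  grok {
    pattern_definitions => { "DEIS_TIMESTAMP" => "%{TIMESTAMP_ISO8601}(?:UTC|CST|EST|PST)" }
    match => { "message" => "%{DEIS_TIMESTAMP:timestamp} %{GREEDYDATA:rest}" }
  }
}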
Configuration:
input {
  tcp {
    port => 5000
    type => deis
  }
  udp {
    port => 5000
    type => deis
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}(UTC|CST|EST|PST) %{URIHOST:container}\[%{NUMBER}\]: %{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{TIME} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch { hosts => ["foo.somehost"] }
}
Output:
{
  "container" => "deis-logspout",
  "msg" => "routing all to udp://x.x.x.x:xxxx",
  "@timestamp" => 2017-02-22T23:55:28.319Z,
  "port" => 62886,
  "@version" => "1",
  "host" => "10.0.2.2",
  "message" => "2017-02-15T14:55:24UTC deis-logspout[1]: 2017/02/15 14:55:24 routing all to udp://x.x.x.x:xxxx",
  "timestamp" => "2017-02-15T14:55:24",
  "type" => "deis"
}
You can additionally use a mutate filter to drop @timestamp, host, etc., as these are provided by Logstash by default. Another suggestion is to use the date filter to convert any timestamps found into usable formats (better for searching).
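A sketch of that date filter, matching the timestamp field extracted above (the format string assumes the captured value has no timezone suffix, as in the output shown):
filter {
  date {
    match => [ "timestamp", "yyyy-MM-dd'T'HH:mm:ss" ]
    timezone => "UTC"        # the UTC suffix was matched outside the capture, so state it here
    target => "@timestamp"   # overwrite the event timestamp with the parsed value
  }
}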
Depending on the log formatting, you may have to slightly alter the pattern. I only had one example to go off of. This also maintains the original full message, because any field operations done in Logstash are destructive (they overwrite values with fields of the same name).
Resources:
Grok
Grok Patterns
Grok Debugger

Logstash data showing up as "message" field in elasticsearch

I am trying to send some raw data to Elasticsearch through Logstash. I am doing this through the udp plugin, but for now I don't think that is relevant.
Basically, I wish to send key/value pairs, and I want them to show up as:
{
"key_1": "value_1"
....
}
instead of:
{
"message": "{\"key1\": \"value1\"}"
}
Is there any way for logstash to somehow "decode" the message as json and insert them as top level keys?
Thanks
I just needed to use a "json" codec on the input like so:
input {
  udp {
    port => 3425
    codec => "json"
  }
}
Thanks to Val for pointing this out
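If for some reason you can't set the codec on the input itself, my understanding is that the json filter achieves the same result by promoting the parsed keys to the top level:
filter {
  json {
    source => "message"             # parse the JSON string held in the message field
    # remove_field => ["message"]   # optionally drop the raw string once parsed
  }
}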
