I'm using Logstash to process my logs and store them in Elasticsearch.
I'm using http as the input plugin for my Logstash pipeline.
My HTTP POST request is:
$http.post(url, {type: 'reference error', message: 'y is not defined'});
I would like to store the type and message keys as separate fields in Elasticsearch.
Currently all of the POST data is stored in a single field, like:
"message":"{\"type\":\"ReferenceError\",\"message\":\"y is not defined\"}"
I think this can be done using a grok filter, but I haven't been able to find a way to do it.
Any help is highly appreciated.
Thanks.
If you use the json codec, the information should be split out into fields for you automatically.
EDIT:
As Alain mentioned, the best way is to use the json codec, which can be set directly on your http input plugin. If that is not possible for some reason, you can use a grok filter.
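As a minimal sketch, an http input with the json codec might look like this (the port is a placeholder you'd adapt):

```conf
input {
  http {
    port => 8080
    # parse each incoming body as JSON, so its keys become event fields
    codec => "json"
  }
}
```

With this in place, type and message arrive as separate fields and no grok filter is needed.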
If I understand you correctly your incoming event looks like this:
{"type": "reference error", "message": "y is not defined"}
Then a corresponding grok pattern would look like this:
{"type": %{QUOTEDSTRING:http_type}, "message": %{QUOTEDSTRING:http_message}}
In your logstash configuration:
grok {
  match => [ "message", "{\"type\": %{QUOTEDSTRING:http_type}, \"message\": %{QUOTEDSTRING:http_message}}" ]
}
Then the result will have the two fields http_type and http_message.
Related
So currently I'm building a log system using the ELK Stack. Before building it, I already had a custom log format for my apps, so that it can be easily read by humans. My log is formatted something like this:
Method: POST
URL: https://localhost:8888/api
Body: {
  "field1": "value1",
  "field2": [
    {
      "field3": "value2",
      "field4": "value3"
    },
    {
      "field3": "value2",
      "field4": "value3"
    }
  ]
}
Using a grok pattern, I can get the Method and the URL, but how can I get the full JSON body in grok / Logstash so that I can send it to Elasticsearch? The length of the JSON is not fixed and can be longer or shorter in each log.
Thank you
You can use the JSON Filter.
It will parse the JSON for you and put it into a structured format, so you can then send it wherever you need (e.g. Elasticsearch, another pipeline).
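As a sketch, assuming your grok pattern has already captured the raw JSON text into a field named body (the field name is a placeholder):

```conf
filter {
  json {
    # parse the JSON string held in "body" into structured event fields
    source => "body"
    target => "body_parsed"
  }
}
```

Since the json filter parses whatever the source field contains, it doesn't matter that the body varies in length from log to log.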
From the docs
It takes an existing field which contains JSON and expands it into an actual data
structure within the Logstash event.
There are also some other questions here on SO that could be helpful. An example: Using JSON with LogStash
Logstash allows extracting patterns via the grok filter. My question is how I can use the result in a subsequent filter. For instance, the Apache log provides the URI path of the query, something like /path/api?param1=1&param2. I can extract the whole thing in a grok filter and assign it to the attribute request. Now I want to decompose it into different parts. My question is how I can take the request attribute and split it further in order to get /path, api, and the params. Can someone provide an example?
Thanks,
Valentin.
You can use a second grok filter on the newly created field, like this:
grok {
  match => { "request" => "your pattern here" }
}
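As a concrete sketch (assuming request holds a value like /path/api?param1=1&param2), the stock URIPATH and URIPARAM patterns from the core grok pattern set could split it:

```conf
filter {
  grok {
    # URIPATH matches "/path/api"; URIPARAM matches the leading "?" plus the params
    match => { "request" => "%{URIPATH:uri_path}(?:%{URIPARAM:uri_params})?" }
  }
}
```

Splitting uri_path further into /path and api would take an additional custom pattern or a mutate/split filter on the result.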
Here is my JSON:
{
  "id": 100,
  "name": "xxx",
  "hobbies": ["cricket", "footbal", "singing & dancing"]
}
I need to filter the "singing & dancing" string from the others. I executed the below query:
http://localhost:9200/employeed/data/_search?q={"query":{"query_string":{"query" : "hobbies:Singing & dancing"}}}
I am getting the exception below:
"type": "illegal_argument_exception",
"reason": "request [employee/data/_search] contains unrecognized parameter: [ Singing\"}}}]"
Any help?
You're attempting to send a DSL query in a URI. The two don't mix, see here: https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-search.html
The search API allows you to execute a search query and get back search hits that match the query. The query can either be provided using a simple query string as a parameter, or using a request body.
You can either use the DSL and send it in your request body, or change your URL to just include the hobbies query_string you want to use. Like this:
http://localhost:9200/employeed/data/_search?q=hobbies%3A%5C%22Singing%20%26%20dancing%5C%22
The query_string portion is also URL-encoded. I've also added quotes around your search value; otherwise it will OR the terms together. The unencoded version:
hobbies:\"Singing & dancing\"
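For the request-body alternative, a sketch (POSTed to http://localhost:9200/employeed/data/_search):

```json
{
  "query": {
    "query_string": {
      "query": "hobbies:\"Singing & dancing\""
    }
  }
}
```

In the body form, nothing needs URL encoding; the quotes around the phrase still keep the two words from being ORed together.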
So I can successfully make a request like:
localhost:9200/filebeat-*/_count?filter_path=-_shards
{
  "query": {
    "match_phrase": {
      "message": "o hohoho"
    }
  }
}
How can I move filter_path=-_shards into the request body to make it work?
According to the documentation, this is still not possible in Elasticsearch 6.2:
All REST APIs accept a filter_path parameter that can be used to
reduce the response returned by Elasticsearch
It's impossible to include it in the request body; that's just not supported (to be honest, I'm not sure it will ever be supported).
However, for some scenarios, you could limit the response returned by Elasticsearch by using source filtering (unfortunately, it's only applicable to the returned fields of documents).
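A sketch of source filtering in the request body, reusing the match_phrase query above (the listed field names are illustrative):

```json
{
  "_source": ["message", "@timestamp"],
  "query": {
    "match_phrase": {
      "message": "o hohoho"
    }
  }
}
```

Unlike filter_path, this only trims the fields inside each returned document; the surrounding metadata such as _shards still appears in the response.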
I am very new to Filebeat and Elasticsearch. I am doing a hobby project and I want to parse my data files. Each data file contains information in the format below:
<name>
<question>
<ans1>
<ans2>
<ans3>
..etc
I want to read this data and store it in ES like:
{
  id: <separate_id_for_each_file>,
  name: <name>,
  question: <question>,
  ans1: <ans1>,
  ..etc
}
How can I do this with filebeat?
As of now, you can't do this with Filebeat alone.
You will need to send your log to Logstash, transform it using a plugin like grok, and then send it to Elasticsearch. If you wish to add an id to the log, you can use something like the uuid plugin before grok.
Filebeat aims only to be the harvester which reads your logs and sends them forward.
So your flow would be something like: filebeat > LOGSTASH[uuid,grok] > ElasticSearch
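A minimal pipeline sketch for that flow (the port, hosts, and grok pattern are placeholders you'd adapt to your file format):

```conf
input {
  # receive events shipped by Filebeat
  beats { port => 5044 }
}
filter {
  # add a unique id per event
  uuid { target => "id" }
  # replace with a pattern matching your <name>/<question>/<ans> lines
  grok { match => { "message" => "%{GREEDYDATA:raw}" } }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```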
If you need examples of grok patterns, these can be useful:
Collection of grok patterns:
https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
Grok pattern tester:
http://grokconstructor.appspot.com/do/match