logstash extract and move nested fields into new parent field - elasticsearch

If my log prints the latitude and longitude of a given point, how can I capture this information so that it is processed as geospatial data in Elasticsearch?
Below I show an example of a document in Elasticsearch corresponding to a log line:
{
"_index": "memo-logstash-2018.05",
"_type": "doc",
"_id": "DDCARGMBfvaBflicTW4-",
"_version": 1,
"_score": null,
"_source": {
"type": "elktest",
"message": "LON: 12.5, LAT: 42",
"#timestamp": "2018-05-09T10:44:09.046Z",
"host": "f6f9fd66cd6c",
"path": "/usr/share/logstash/logs/docker-elk-master.log",
"#version": "1"
},
"fields": {
"#timestamp": [
"2018-05-09T10:44:09.046Z"
]
},
"highlight": {
"type": [
"#kibana-highlighted-field#elktest#/kibana-highlighted-field#"
]
},
"sort": [
1525862649046
]
}

You can first separate LON and LAT into their own fields as follows,
grok {
match => {"message" => "LON: %{NUMBER:LON}, LAT: %{NUMBER:LAT}"}
}
Once they are separated, you can use the mutate filter to create a parent field around them, like this:
filter {
mutate {
rename => { "LON" => "[location][LON]" }
rename => { "LAT" => "[location][LAT]" }
}
}
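For Elasticsearch to actually treat the result as geospatial data, the coordinates also need to end up as a geo_point, which expects lowercase lat/lon keys and numeric values. A minimal sketch of the combined filter, assuming the same message format as above (field names are illustrative):
filter {
  grok {
    match => { "message" => "LON: %{NUMBER:lon}, LAT: %{NUMBER:lat}" }
  }
  mutate {
    # nest the coordinates under a "location" parent field
    rename => {
      "lon" => "[location][lon]"
      "lat" => "[location][lat]"
    }
    # grok captures are strings; converting keeps the values numeric
    convert => {
      "[location][lon]" => "float"
      "[location][lat]" => "float"
    }
  }
}
On the Elasticsearch side, the index (or an index template matching memo-logstash-*) also needs the field mapped as "location": { "type": "geo_point" } before indexing; without that mapping the field is stored as an ordinary object and will not behave as geospatial data.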
Let me know if this helps.

Related

How to skip particular fields from being inserted in Elasticsearch? I don't want fields like "message", "event", "log"

I am not explicitly inserting fields like "message", "event", or "log" in the record. These fields are somehow autogenerated while inserting records from a CSV file using Logstash, and I would like them not to end up in the index.
The record in the index looks as follows:
"_index": "jmeter2",
"_id": "dsfdsfdsf",
"_score": 1,
"_source": {
"Samples": "1083",
"Received KB/sec": "178.9",
"99th pct": "1350",
"log": {
"file": {
"path": "/Users/abc/Downloads/opt/jenkins/workspace/agg_report2.csv"
}
},
"host": {
"name": "dfdsfdsffs"
},
"#timestamp": "2022-11-22T07:15:29.052181Z",
"95th pct": "659",
"Min": "112",
"Max": "3829",
"#version": "1",
"Throughput": "7.2",
"Label": "ACTIVITY_DETAIL",
"90th pct": "338",
"Build_number": "abcd1111",
"Error %": "0.00%",
"Median": "207",
"message": "ACTIVITY_DETAIL,1083,270,207,338,659,1350,112,3829,0.00%,7.2,178.9,251.61",
"event": {
"original": "ACTIVITY_DETAIL,1083,270,207,338,659,1350,112,3829,0.00%,7.2,178.9,251.61"
},
"Average Response Time": "270",
"Stddev": "251.61"
}
}
You can add a remove_field option to your csv filter:
filter {
csv {
remove_field => [ "message", "event", "log" ]
}
}
https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-remove_field
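As a rough sketch of how that fits into a fuller filter, assuming a comma separator and placeholder column names standing in for whatever your csv filter already uses:
filter {
  csv {
    separator => ","                                              # keep your existing settings
    columns => ["Label", "Samples", "Average Response Time"]     # placeholder column list
    # drop the auto-generated fields before the event reaches Elasticsearch
    remove_field => [ "message", "event", "log" ]
  }
}
Note that remove_field is one of the common options supported by every filter plugin, so the same line could also live in a mutate filter instead.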

Which field matched the search query?

I want to find out which field matched the search query.
This can be any query; I am not writing a specific query.
For example, I search for the phrase "dilo abinin" (or any word) and find the document below:
{
"name":"dilo abinin",
"surname: "sürücü"
}
I want to get back the name of the matching field, i.e. "name".
You can use highlighting to see which field matched your query.
Index API
{
"name":"dilo abinin",
"surname": "sürücü"
}
Search Query:
{
"query": {
"query_string": {
"query": "dilo abinin"
}
},
"highlight": {
"fields": {
"*": {}
}
}
}
Search Result:
"hits": [
{
"_index": "65325154",
"_type": "_doc",
"_id": "1",
"_score": 0.5753642,
"_source": {
"name": "dilo abinin",
"surname": "sürücü"
},
"highlight": {
"name": [ // note this
"<em>dilo</em> <em>abinin</em>"
],
"name.keyword": [
"<em>dilo abinin</em>"
]
}
}
]

search first element of a multivalue text field in elasticsearch

I want to search the first element of an array in Elasticsearch documents, but I can't figure out how to do it.
As a test, I created a new index with fielddata=true, but I still didn't get the response I wanted.
Mapping
"name" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
Values
name : ["John", "Doe"]
My request
{
"query": {
"bool" : {
"must" : {
"script" : {
"script" : {
"source": "doc['name'][0]=params.param1",
"params" : {
"param1" : "john"
}
}
}
}
}
}
}
Incoming Response
"reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [name] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
You can use the following script_fields in a search request to return a scripted field:
{
"script_fields": {
"firstElement": {
"script": {
"lang": "painless",
"inline": "params._source.name[0]"
}
}
}
}
Search Result:
"hits": [
{
"_index": "stof_64391432",
"_type": "_doc",
"_id": "1",
"_score": 1.0,
"fields": {
"firstElement": [
"John" <-- note this
]
}
}
]
You can use a Painless script to create a script field to return a customized value for each document in the results of a query.
You need to use the equality operator '==' to compare two values in the script query; the resulting boolean is true if the two values are equal and false otherwise.
Adding a working example with index data, mapping, search query, and search result
Index Mapping:
{
"mappings":{
"properties":{
"name":{
"type":"text",
"fielddata":true
}
}
}
}
Index data:
{
"name": [
"John",
"Doe"
]
}
Search Query:
{
"script_fields": {
"my_field": {
"script": {
"lang": "painless",
"source": "params['_source']['name'][0] == params.params1",
"params": {
"params1": "John"
}
}
}
}
}
Search Result:
"hits": [
{
"_index": "test",
"_type": "_doc",
"_id": "1",
"_score": 1.0,
"fields": {
"my_field": [
true <-- note this
]
}
}
]
Arrays of objects do not work as you would expect: you cannot query each object independently of the other objects in the array. If you need to be able to do this, you should use the nested data type instead of the object data type.
You can use the script shown in my other answer if you just want to compare the value of the first element of the array to some other value. But based on your comments, it looks like your use case is quite different.
If you want to search the first element of the array, you need to convert your data into nested form. With arrays of objects, at search time you can't refer to "the first element" or "the last element".
Adding a working example with index data, mapping, search query, and search result
Index Mapping:
{
"mappings": {
"properties": {
"name": {
"type": "nested"
}
}
}
}
Index Data:
{
"booking_id": 2,
"name": [
{
"first": "John Doe",
"second": "abc"
}
]
}
{
"booking_id": 1,
"name": [
{
"first": "Adam Simith",
"second": "John Doe"
}
]
}
{
"booking_id": 3,
"name": [
{
"first": "John Doe",
"second": "Adam Simith"
}
]
}
Search Query:
{
"query": {
"nested": {
"path": "name",
"query": {
"bool": {
"must": [
{
"match_phrase": {
"name.first": "John Doe"
}
}
]
}
}
}
}
}
Search Result:
"hits": [
{
"_index": "test",
"_type": "_doc",
"_id": "2",
"_score": 0.9400072,
"_source": {
"booking_id": 2,
"name": [
{
"first": "John Doe",
"second": "abc"
}
]
}
},
{
"_index": "test",
"_type": "_doc",
"_id": "3",
"_score": 0.9400072,
"_source": {
"booking_id": 3,
"name": [
{
"first": "John Doe",
"second": "Adam Simith"
}
]
}
}
]

How to turn an array of object to array of string while reindexing in elasticsearch?

Let's say the source index has a document like this:
{
"name":"John Doe",
"sport":[
{
"name":"surf",
"since":"2 years"
},
{
"name":"mountainbike",
"since":"4 years"
}
]
}
How can I discard the "since" information so that, once reindexed, the object contains only the sport names? Like this:
{
"name":"John Doe",
"sport":["surf","mountainbike"]
}
Note that it would be fine if the resulting field kept the same name, but it's not mandatory.
I don't know which version of Elasticsearch you're using, but here is a solution based on pipelines, introduced with ingest nodes in ES v5.0.
1) A script processor is used to extract the values from each subobject and set them in another field (here, sports)
2) The previous sport field is removed with a remove processor
You can use the Simulate Pipeline API to test it:
POST _ingest/pipeline/_simulate
{
"pipeline": {
"description": "random description",
"processors": [
{
"script": {
"lang": "painless",
"source": "ctx.sports =[]; for (def item : ctx.sport) { ctx.sports.add(item.name) }"
}
},
{
"remove": {
"field": "sport"
}
}
]
},
"docs": [
{
"_index": "index",
"_type": "doc",
"_id": "id",
"_source": {
"name": "John Doe",
"sport": [
{
"name": "surf",
"since": "2 years"
},
{
"name": "mountainbike",
"since": "4 years"
}
]
}
}
]
}
which outputs the following result:
{
"docs": [
{
"doc": {
"_index": "index",
"_type": "doc",
"_id": "id",
"_source": {
"name": "John Doe",
"sports": [
"surf",
"mountainbike"
]
},
"_ingest": {
"timestamp": "2018-07-12T14:07:25.495Z"
}
}
}
]
}
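If the simulated output looks right, one way to apply it during the actual reindex is to register the pipeline and reference it from the _reindex destination. A sketch, where the pipeline id sports_only and the index names people/people_v2 are only placeholders:
PUT _ingest/pipeline/sports_only
{
  "description": "flatten sport objects into an array of names",
  "processors": [
    {
      "script": {
        "lang": "painless",
        "source": "ctx.sports = []; for (def item : ctx.sport) { ctx.sports.add(item.name) }"
      }
    },
    {
      "remove": {
        "field": "sport"
      }
    }
  ]
}

POST _reindex
{
  "source": { "index": "people" },
  "dest": {
    "index": "people_v2",
    "pipeline": "sports_only"
  }
}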
There may be a better solution, as I haven't used pipelines a lot; alternatively, you could do this with Logstash filters before submitting the documents to your Elasticsearch cluster.
For more information about pipelines, take a look at the reference documentation for ingest nodes.

How to extract feature from the Elasticsearch _source to index

I have used Logstash, Elasticsearch, and Kibana to collect logs.
The log file is JSON and looks like this:
{"_id":{"$oid":"5540afc2cec7c68fc1248d78"},"agentId":"0000000BAB39A520","handler":"SUSIControl","sensorId":"/GPIO/GPIO00/Level","ts":{"$date":"2015-04-29T09:00:00.846Z"},"vHour":1}
{"_id":{"$oid":"5540afc2cec7c68fc1248d79"},"agentId":"0000000BAB39A520","handler":"SUSIControl","sensorId":"/GPIO/GPIO00/Dir","ts":{"$date":"2015-04-29T09:00:00.846Z"},"vHour":0}
and the code I have used in logstash:
input {
file {
type => "log"
path => ["/home/data/1/1.json"]
start_position => "beginning"
}
}
filter {
json{
source => "message"
}
}
output {
elasticsearch { embedded => true }
stdout { codec => rubydebug }
}
Then the output in Elasticsearch is:
{
"_index": "logstash-2015.06.29",
"_type": "log",
"_id": "AU5AG7KahwyA2bfnpJO0",
"_version": 1,
"_score": 1,
"_source": {
"message": "{"_id":{"$oid":"5540afc2cec7c68fc1248d7c"},"agentId":"0000000BAB39A520","handler":"SUSIControl","sensorId":"/GPIO/GPIO05/Dir","ts":{"$date":"2015-04-29T09:00:00.846Z"},"vHour":1}",
"#version": "1",
"#timestamp": "2015-06-29T16:17:03.040Z",
"type": "log",
"host": "song-Lenovo-IdeaPad",
"path": "/home/song/soft/data/1/Average.json",
"_id": {
"$oid": "5540afc2cec7c68fc1248d7c"
},
"agentId": "0000000BAB39A520",
"handler": "SUSIControl",
"sensorId": "/GPIO/GPIO05/Dir",
"ts": {
"$date": "2015-04-29T09:00:00.846Z"
},
"vHour": 1
}
}
But the information from the JSON file is all in _source and not indexed as analyzable fields, so I can't use Kibana to analyze it.
Kibana shows "Analysis is not available for object fields", and the fields in _source are object fields.
How can I solve this problem?
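One possible approach (a sketch only, not a verified fix for this exact setup) is to flatten the nested $oid/$date wrapper objects into plain fields in Logstash, so Kibana sees simple string and date values instead of objects; the target field names mongo_id and event_ts below are purely illustrative:
filter {
  json {
    source => "message"
  }
  mutate {
    # lift the nested values out of their wrapper objects (illustrative field names)
    rename => {
      "[_id][$oid]" => "mongo_id"
      "[ts][$date]" => "event_ts"
    }
    # drop the raw message and the now-empty wrapper objects
    remove_field => [ "message", "_id", "ts" ]
  }
  date {
    # use the event's own timestamp instead of the ingestion time
    match => ["event_ts", "ISO8601"]
  }
}
Within a single mutate, rename runs before the common remove_field option, so the nested values are lifted out before their parent objects are dropped.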
