How to set up location as a geo_point in Elasticsearch?

I've been running into an issue where I get failed to find geo_point field [location].
Here is my flow.
Import CSV:
input {
  file {
    path => "test.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    # zip,lat,lon
    columns => ["zip", "lat", "lon"]
  }
  mutate {
    convert => { "zip" => "integer" }
    convert => { "lon" => "float" }
    convert => { "lat" => "float" }
  }
  mutate {
    rename => {
      "lon" => "[location][lon]"
      "lat" => "[location][lat]"
    }
  }
  mutate { convert => { "[location]" => "float" } }
}
output {
  elasticsearch {
    hosts => "cluster:80"
    index => "data"
  }
  stdout {}
}
Test records
GET data
"hits": [
  {
    "_index": "data",
    "_type": "logs",
    "_id": "AVvQcOfXUojnX",
    "_score": 1,
    "_source": {
      "zip": 164283216,
      "location": {
        "lon": 71.34,
        "lat": 40.12
      }
    }
  },
  ...
If I try to run a geo_distance query, I get failed to find geo_point field [location].
Then I try to run:
PUT data
{
  "mappings": {
    "location": {
      "properties": {
        "pin": {
          "properties": {
            "location": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
but I get index [data/3uxAJ4ISKy_NyVDNC] already exists.
How do I convert location into a geo_point so I can run the query on it?
Edit:
I tried putting a template in place before indexing anything, but I still get the same errors:
PUT _template/template
{
  "template": "base_map_template",
  "order": 1,
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "node_points": {
      "properties": {
        "location": {
          "type": "geo_point"
        }
      }
    }
  }
}

You need to name your template data instead of base_map_template, since that is how your index is named. The type name also needs to be logs instead of node_points. Note that index templates only apply when an index is created, so you'll need to delete the existing data index and re-run your Logstash pipeline after installing the template:
PUT _template/template
{
  "template": "data",        <--- change this
  "order": 1,
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "logs": {                <--- and this
      "properties": {
        "location": {
          "type": "geo_point"
        }
      }
    }
  }
}
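Once the index has been recreated with this template in place, a geo_distance query along these lines should work (a sketch; the 50km distance and the origin coordinates are illustrative values, not from your data):
GET data/_search
{
  "query": {
    "bool": {
      "filter": {
        "geo_distance": {
          "distance": "50km",
          "location": {
            "lat": 40.12,
            "lon": 71.34
          }
        }
      }
    }
  }
}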

Related

ElasticSearch query nested path filter OR

I have the following index:
PUT /ab11
{
  "mappings": {
    "properties": {
      "product_id": {
        "type": "keyword"
      },
      "data": {
        "type": "nested",
        "properties": {
          "p_id": {
            "type": "keyword"
          }
        }
      }
    }
  }
}
PUT /ab11/_doc/1
{
  "product_id": "123",
  "data": [
    { "p_id": "a" },
    { "p_id": "b" },
    { "p_id": "c" }
  ]
}
I want to run a query like the following SQL does (NOTE: I want a filter, not a query, because I don't care about scoring):
select * from ab11 where data.p_id = "a" or data.p_id = "b"
You can do it like this, because the terms query has OR semantics by default:
{
  "query": {
    "nested": {
      "path": "data",
      "query": {
        "terms": {
          "data.p_id": [
            "a",
            "b"
          ]
        }
      }
    }
  }
}
Basically, select all documents which have either "a" or "b" in their data.p_id nested docs.
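Since you mentioned wanting filter rather than query semantics, you can also wrap the nested query in a bool filter clause so it runs in filter context, skipping score calculation (a sketch of the same query):
{
  "query": {
    "bool": {
      "filter": {
        "nested": {
          "path": "data",
          "query": {
            "terms": {
              "data.p_id": ["a", "b"]
            }
          }
        }
      }
    }
  }
}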

search array of strings by partially match in elasticsearch

I have a field like this:
names: ["Red:123", "Blue:45", "Green:56"]
Its mapping is:
"names": {
  "type": "keyword"
},
How can I search like this:
{
  "query": {
    "match": {
      "names": "red"
    }
  }
}
to get all the documents where "red" appears in an element of the names array?
Right now it only works with:
{
  "query": {
    "match": {
      "names": "red:123"
    }
  }
}
You can add multi-fields, or simply change the type to text, to achieve the required result.
Index mapping using multi-fields:
{
  "mappings": {
    "properties": {
      "names": {
        "type": "text",
        "fields": {
          "raw": {
            "type": "keyword"
          }
        }
      }
    }
  }
}
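With the multi-field mapping above, the analyzed names field supports partial matches while names.raw keeps the original exact-match behavior (hypothetical queries for illustration):
{
  "query": {
    "match": { "names": "red" }
  }
}
and, for exact values:
{
  "query": {
    "term": { "names.raw": "Red:123" }
  }
}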
Here is a working example with index data, mapping, search query, and search result.
Index Mapping:
{
  "mappings": {
    "properties": {
      "names": {
        "type": "text"
      }
    }
  }
}
Index Data:
{
  "names": [
    "Red:123",
    "Blue:45",
    "Green:56"
  ]
}
Search Query:
{
  "query": {
    "match": {
      "names": "red"
    }
  }
}
Search Result:
"hits": [
  {
    "_index": "64665127",
    "_type": "_doc",
    "_id": "1",
    "_score": 0.2876821,
    "_source": {
      "names": [
        "Red:123",
        "Blue:45",
        "Green:56"
      ]
    }
  }
]

How to enable nested mapping?

I'm using ES 6 and am not able to set my mapping correctly.
I have this doc:
{
  "_index": "entries_1",
  "_type": "elasticsearch-record",
  "_id": "3684",
  "_score": 5.355921,
  "_source": {
    "title": "My Title",
    "result": {
      "autor": [
        "fernando-fernandes"
      ]
    }
  }
}
And my mapping:
{
  "craft-entries_1": {
    "mappings": {
      "elasticsearch-record": {
        "properties": {
          "result": {
            "type": "nested",
            "enabled": false
          }
        }
      }
    }
  }
}
And I can't query result.autor with this:
{
  "query": {
    "bool": {
      "must": [
        { "term": { "result.autor": "fernando-fernandes" } }
      ]
    }
  }
}
I've tried to PUT this, but it seems to have no effect on the mapping at all; even after querying again, my mapping still appears as enabled: false. Maybe I should map it as object?
{
  "properties": {
    "result.autor": {
      "type": "nested",
      "enabled": true
    }
  }
}
What am I missing?
Your source document is not designed properly according to your mapping, it should be like this:
{
  "title": "My Title",
  "result": [
    {
      "autor": "fernando-fernandes"
    }
  ]
}
Since result is nested, it should be modeled as an array with elements inside.
So, delete your index and recreate it with the following mapping (you need to remove enabled: false, as that is only for object types):
PUT craft-entries_1
{
  "mappings": {
    "elasticsearch-record": {
      "properties": {
        "result": {
          "type": "nested"
        }
      }
    }
  }
}
Finally, index your documents as I showed above, and then your query will work.
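Note that nested fields are normally queried through a nested query; if the plain term query still returns nothing after reindexing, a wrapped version like this (a sketch) should match:
{
  "query": {
    "nested": {
      "path": "result",
      "query": {
        "term": { "result.autor": "fernando-fernandes" }
      }
    }
  }
}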

Logstash csv import - mutate add_field if not empty

I'm using logstash to import data from csv files into our elasticsearch.
During the import I want to create a new field that has values from two other fields. Here's a snippet of my import:
input {
  file {
    path => "/data/xyz/*.csv"
    start_position => "beginning"
    ignore_older => 0
    sincedb_path => "/dev/null"
  }
}
filter {
  if [path] =~ "csv1" {
    csv {
      separator => ";"
      columns => [
        "name1",
        "name2",
        "name3",
        "ID"
      ]
    }
    mutate {
      add_field => {
        "searchfield" => "%{name1} %{name2} %{name3}"
      }
    }
  }
}
output {
  if [path] =~ "csv1" {
    elasticsearch {
      hosts => "localhost"
      index => "my_index"
      document_id => "%{ID}"
    }
  }
}
This works as desired, but on rows where, for example, name3 is empty, Logstash writes the literal %{name3} into the new field. Is there a way to only add the value if it's not empty?
I think there's no way other than checking whether name3 is present and, based on that, building your search field.
if [name3] {
  mutate {
    id => "with-name3"
    add_field => { "searchfield" => "%{name1} %{name2} %{name3}" }
  }
} else {
  mutate {
    id => "without-name3"
    add_field => { "searchfield" => "%{name1} %{name2}" }
  }
}
Alternatively, if I understand your issue correctly, you want to ship this data to Elasticsearch and have a single searchable field. To avoid duplicating data in your source, you can build a search field using the copy_to directive. Your mappings would look as follows:
{
  "mappings": {
    "doc": {
      "properties": {
        "name1": {
          "type": "text",
          "copy_to": "searchfield"
        },
        "name2": {
          "type": "text",
          "copy_to": "searchfield"
        },
        "name3": {
          "type": "text",
          "copy_to": "searchfield"
        },
        "searchfield": {
          "type": "text"
        }
      }
    }
  }
}
and then you can run your queries against that field without having duplicates in the source.
Update: basically, your logstash.conf would look as follows:
input {
  file {
    path => "/data/xyz/*.csv"
    start_position => "beginning"
    ignore_older => 0
    sincedb_path => "/dev/null"
  }
}
filter {
  if [path] =~ "csv1" {
    csv {
      separator => ";"
      columns => ["name1", "name2", "name3", "ID"]
    }
  }
}
output {
  if [path] =~ "csv1" {
    elasticsearch {
      hosts => "localhost"
      index => "my_index"
      document_id => "%{ID}"
    }
  }
}
Then create the Elasticsearch index using the following:
PUT /my_index/
{
  "mappings": {
    "doc": {
      "properties": {
        "name1": {
          "type": "text",
          "copy_to": "searchfield"
        },
        "name2": {
          "type": "text",
          "copy_to": "searchfield"
        },
        "name3": {
          "type": "text",
          "copy_to": "searchfield"
        },
        "searchfield": {
          "type": "text"
        }
      }
    }
  }
}
And then you can run a search as follows:
GET /my_index/_search
{
  "query": {
    "match": {
      "searchfield": {
        "query": "your text"
      }
    }
  }
}

How to exclude inherited object properties from mappings

I'm trying to set up a mapping for an object that looks like this:
class TestObject
{
    public long TestID { get; set; }

    [ElasticProperty(Type = FieldType.Object)]
    public Dictionary<long, List<DateTime>> Items { get; set; }
}
I use the following mapping code (where Client is IElasticClient):
this.Client.Map<TestObject>(m => m.MapFromAttributes());
I get the following mapping result:
{
  "mappings": {
    "testobject": {
      "properties": {
        "items": {
          "properties": {
            "comparer": {
              "type": "object"
            },
            "count": {
              "type": "integer"
            },
            "item": {
              "type": "date",
              "format": "dateOptionalTime"
            },
            "keys": {
              "properties": {
                "count": {
                  "type": "integer"
                }
              }
            },
            "values": {
              "properties": {
                "count": {
                  "type": "integer"
                }
              }
            }
          }
        },
        "testID": {
          "type": "long"
        }
      }
    }
  }
}
This becomes a problem when I want to do a search like this:
{
  "query_string": {
    "query": "[2015-06-03T00:00:00.000 TO 2015-06-05T23:59:59.999]",
    "fields": [
      "items.*"
    ]
  }
}
This causes exceptions, which I guess occur because the fields in the items object are not all of the same type. What is the proper mapping to support searches of this type?
I was able to fix this by using the following mapping:
this.Client.Map<TestObject>(m => m.MapFromAttributes())
    .Properties(p => p
        .Object<Dictionary<long, List<DateTime>>>(o => o.Name("items")));
