Logstash filter with custom geopoint - ruby

I'm trying the following:
I have a custom Logstash filter, and within this filter I have latitude and longitude values. I now want to create a new field (serv_location) from the lat and lon values, so I can build a world map from these geopoints in Kibana. My problem is that Logstash indexes the new field as a plain number field and not as the geo_point field that the map needs.
Currently my code to add the field looks like this:
event['serv_location'] = [geo_lat.to_f, geo_lng.to_f]
What else do I need to do to create a geo_point field?
Edit:
Here is the mapping I did:
curl -XPUT 'http://localhost:5601/logstash-2015.04.16/_mapping/location' -d '
{
  "map_location" : {
    "properties" : {
      "location" : { "type" : "geo_point", "store" : true }
    }
  }
}'
Thanks.

Please write a custom mapping for this field and define "type" : "geo_point". Hopefully this will solve your problem. Two things look off in the mapping you posted: it is sent to port 5601, which is Kibana, whereas mappings must be PUT to Elasticsearch itself (port 9200 by default), and the type name in the URL (location) does not match the one in the body (map_location).
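A minimal sketch of such a mapping, assuming Elasticsearch on its default port 9200 and that your events go into a type called logs in the daily index (adjust index and type names to your setup):
curl -XPUT 'http://localhost:9200/logstash-2015.04.16/_mapping/logs' -d '
{
  "logs" : {
    "properties" : {
      "serv_location" : { "type" : "geo_point" }
    }
  }
}'
Also note that when a geo_point is supplied as an array, Elasticsearch expects [lon, lat] (GeoJSON order), so the Ruby line would be event['serv_location'] = [geo_lng.to_f, geo_lat.to_f]. And since Logstash creates a new index every day, consider putting the mapping into an index template (see the template answers below) so every new logstash-* index picks it up.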

Related

Use number field as date in Kibana date_histogram aggregation

I'm trying to use the Visualize feature in Kibana to plot a monthly date_histogram graph that counts the number of messages in my system. The message type has a sent_at field that is stored as a number (time since epoch).
Although I can do that just fine with an Elasticsearch query
POST /_all/message/_search?size=0
{
  "aggs" : {
    "monthly_message" : {
      "date_histogram" : {
        "field" : "sent_at",
        "interval" : "month"
      }
    }
  }
}
I ran into a problem in Kibana saying: No Compatible Fields: The "myindex" index pattern does not contain any of the following field types: date
Is there a way to get Kibana to use a number field as a date?
Not to my knowledge. Kibana uses the index mapping to find date fields; if the mapping contains no date field, Kibana won't infer one from the other number fields.
What you can do is add another field called sent_at_date to your mapping, then use the update-by-query API to copy the sent_at field into that new field, and finally recreate your index pattern in Kibana.
It goes basically like this:
# 1. add a new field to your mapping
PUT myindex/_mapping/message
{
  "properties": {
    "sent_at_date": {
      "type": "date"
    }
  }
}
# 2. update all your documents
POST myindex/_update_by_query
{
  "script": {
    "source": "ctx._source.sent_at_date = ctx._source.sent_at"
  }
}
And finally recreate your index pattern in Kibana. You should then see a new field called sent_at_date of type date that you can use for your date_histogram.
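One caveat, assuming your sent_at values might be epoch seconds rather than milliseconds (the question doesn't say): a date field parses numbers as epoch milliseconds by default, so in that case give the new field an explicit format:
PUT myindex/_mapping/message
{
  "properties": {
    "sent_at_date": {
      "type": "date",
      "format": "epoch_second"
    }
  }
}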

Changing type of property in index type's mapping

I have an index mapping for type 'T1' as below:
"T1" : {
  "properties" : {
    "prop1" : {
      "type" : "text"
    }
  }
}
And now I want to change the type of prop1 from text to keyword. I don't want to delete the index. I have also read people suggesting to create another property with the new type and replace the old one, but then I would have to update the old documents, which I am not interested in. I tried to use the PUT API as below, but it never works.
PUT /indexName/T1/_mapping -d
{
  "T1" : {
    "properties" : {
      "prop1" : {
        "type" : "keyword"
      }
    }
  }
}
Is there any way to achieve this?
A mapping cannot be modified once created, hence the PUT API you used will not work. You will have to create a new index with the updated mapping and reindex all the data into the new index.
To prevent downtime you can always use alias:
https://www.elastic.co/blog/changing-mapping-with-zero-downtime
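The core of that zero-downtime trick: have your application read and write through an alias rather than the concrete index name; after reindexing into the new index, you repoint the alias atomically. A sketch, with made-up index names my_index_v1 and my_index_v2:
POST /_aliases
{
  "actions" : [
    { "remove" : { "index" : "my_index_v1", "alias" : "my_index" } },
    { "add" : { "index" : "my_index_v2", "alias" : "my_index" } }
  ]
}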
A mapping cannot be updated once it is persisted. The only option is to create a new index with the correct mappings and reindex your data using the reindex API provided by ES.
You can read about the reindex API here:
https://www.elastic.co/guide/en/elasticsearch/reference/5.5/docs-reindex.html
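The reindex call itself is short. Assuming you have already created a new index (here called indexName_v2, a made-up name) with prop1 mapped as keyword:
POST /_reindex
{
  "source" : { "index" : "indexName" },
  "dest" : { "index" : "indexName_v2" }
}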

Change Mapping for Field for ALL OF LOGSTASH Created indexes

I would like to change the type of the field location to geo_point. I'm using ES with Logstash, and as y'all know, indices are generated with the name logstash-yyyy-mm-dd.
I first created a logstash index and named it logstash-2016-03-29, like so:
curl -XPUT 'http://localhost:9200/logstash-2016-03-29'
Then I changed the mapping, supposedly for all the indices named logstash-*, using the following:
curl -XPOST "http://localhost:9200/logstash-*/_mapping/logs" -d '{
  "properties" : {
    "location" : { "type" : "geo_point" }
  }
}'
And when I ran the Logstash configuration file, all the location fields in the index logstash-2016-03-29 were indeed of type geo_point.
However, today the auto-generated index logstash-2016-03-30 has the field location of type string instead of geo_point. I thought the type would be applied to ANY index whose name starts with logstash-. Apparently, I was wrong. How can I fix this so that any future index created by Logstash that has the location field gets it mapped as geo_point instead of string?
Thanks.
You should define it using an index template:
curl -XPUT localhost:9200/_template/template_2 -d '
{
  "template" : "logstash-*",
  "mappings" : {
    "logs" : {
      "properties" : {
        "location" : { "type" : "geo_point" }
      }
    }
  }
}'
(Note the wildcard: "logstash-" alone would only match an index literally named logstash-.)
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
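Keep in mind that templates are applied only at index-creation time, so indices that already exist keep their current mapping. You can verify the template was stored with:
curl -XGET 'localhost:9200/_template/template_2?pretty'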

How to create a common mapping template for indices?

For the app I created, the indices are generated once a week, and the type and nature of the data do not vary. That implies I need the same mapping type for these indices. Is it possible in Elasticsearch to apply the same mapping to all the indices as they are created? That would save me the overhead of defining the mapping each time an index is created.
Definitely, you can use what is called an index template. Since your mapping type is stable, that's the perfect condition for using index templates.
It's as easy as creating an index. See below: whenever you index a document into an index whose name matches my_*, ES will select that template and create the index for you using the given mappings, settings and aliases:
curl -XPUT localhost:9200/_template/template_1 -d '{
  "template" : "my_*",
  "settings" : {
    "number_of_shards" : 1
  },
  "aliases" : {
    "my_alias" : {}
  },
  "mappings" : {
    "my_type" : {
      "properties" : {
        "my_field" : { "type" : "string" }
      }
    }
  }
}'
It's basically the technique used by Logstash when it needs to index new logs for each new day in a new daily index.
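For example (the index name my_2016_week14 is made up), the first document indexed into a matching index creates that index with the template's shard count, alias and mapping already in place:
curl -XPUT localhost:9200/my_2016_week14/my_type/1 -d '{
  "my_field" : "hello world"
}'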
You can employ an index template to address your problem. The official documentation can be found here.
A worked example of how to apply it can be found in this blog.

Changing elasticsearch mapping

Take the simplest case of indexing the following document in Elasticsearch:
{
  "name": "Mark",
  "age": 28
}
With automatic mapping, the mapping for this index would now look like:
"mappings" : {
  "doc" : {
    "properties" : {
      "age" : { "type" : "long" },
      "name" : { "type" : "string" }
    }
  }
}
But say I then wanted to allow this document to be indexed as well:
{
  "name": "Bill",
  "age": "seven"
}
If I try this, the mapping does not update and Elasticsearch throws an error, since there is a conflict with the type of the age property.
Is there any way to do this so both docs could be automatically indexed and consequently queryable?
Mappings are defined per type, so what you could do is have two types in your index:
numeric
alphabetical
and split the documents according to the value in the age field. If you run a query, you can query both types.
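A minimal sketch of that idea, assuming a 1.x-era Elasticsearch (later versions reject conflicting field types within one index, and 6.0 removes multiple types entirely); the index name people is made up:
curl -XPUT localhost:9200/people -d '{
  "mappings" : {
    "numeric" : {
      "properties" : { "age" : { "type" : "long" } }
    },
    "alphabetical" : {
      "properties" : { "age" : { "type" : "string" } }
    }
  }
}'
Documents with a numeric age would be indexed into the numeric type and the rest into alphabetical; a search that specifies no type hits both.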
You can add new fields to a mapping, but you cannot change the type of an existing field. To do that you need to drop the index, create a new mapping, and index the data again.
For more info, refer to the reference documentation.
You can't change an existing mapping; you can only add new fields to it.
Otherwise you have to delete the old mapping and create a new mapping for that particular index.
