Change geoip.location mapping from number to geo_point - elasticsearch

I am using a Logstash filter to add GeoIP location data to my Filebeat IIS logs:
filter {
  geoip {
    source => "clienthost"
  }
}
But the data type in Elasticsearch is:
geoip.location.lon = NUMBER
geoip.location.lat = NUMBER
whereas in order to map points, I need:
geoip.location = GEO_POINT
Is there a way to change the mapping?
I tried posting a changed mapping template:
sudo curl -XPUT "http://localhost:9200/_template/filebeat" -d@/etc/filebeat/geoip-mapping-new.json
with a new definition but it's not making a difference:
{
  "mappings": {
    "geoip": {
      "properties": {
        "location": {
          "type": "geo_point"
        }
      }
    }
  },
  "template": "filebeat-*"
}
Edit: I've tried this with both ES/Kibana/Logstash 5.6.3 and 5.5.0

This is not a solution, but I deleted all the data and reinstalled ES, Kibana, Logstash and Filebeat 5.5, and now ES recognizes location as a geo_point. I guess that even though I had changed the mapping, there was still data that had been mapped incorrectly, and Kibana was assuming the incorrect data type. A reindex of the complete data would probably have fixed the problem.
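For reference, a rough sketch of what that reindex could look like with the _reindex API (available in the 5.x versions mentioned above); the index names below are hypothetical, and the destination index must already pick up the corrected geo_point mapping (e.g. via the fixed template) before you reindex:
POST /_reindex
{
  "source": { "index": "filebeat-2017.10.01" },
  "dest": { "index": "filebeat-2017.10.01-fixed" }
}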

Related

Elasticsearch geo_point mapping type overwritten on first save

I realise mapping types are being removed in 7.x but I am working with a solution that uses 6.x.
I have an index I am creating which has a location property. When creating the index I add the following mapping property:
mappings: {
  _doc: {
    properties: {
      location: {
        type: 'geo_point'
      }
    }
  }
}
There are other properties that will be in the index, but I'm happy for those to be mapped dynamically (I presume that's fine, as it has been done this way elsewhere in the application without problems).
The index is created OK, but when I index my first entity and run a query using the location field, I get the following error: failed to find geo_point field [location]
Looking at the mappings now defined in Elasticsearch, I can see that my location field has become an object with two float values instead of a geo_point:
{"job-posts":{"aliases":{},"mappings":{"_doc":{"properties":{"location":{"properties":{"lat":{"type":"float"},"lon":{"type":"float"}}},"settings":{"index":{"creation_date":"1591636220162","number_of_shards":"5","number_of_replicas":"1","uuid":"qwAybNlFQ4i3q7IecdZFvA","version":{"created":"6040099"},"provided_name":"job-posts"}}}}
Any ideas as to what I'm doing wrong and why my mapping is being overwritten?
Updated
Right after I create the index the mapping looks like this:
{"job-posts":{"mappings":{"_doc":{"properties":{"location":{"type":"geo_point"}}}}}}
Looks like you forgot to include properties:
PUT myindex?include_type_name=true
{
  "mappings": {
    "_doc": {
      "properties": {
        "location": {
          "type": "geo_point"
        }
      }
    }
  }
}
If that doesn't help, what's the index mapping right after you create the index but before you sync the first doc?
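If the mapping does look correct at creation time, a quick sanity check is to index one document by hand and re-fetch the mapping; if location is still a geo_point afterwards, the float mapping is being created by some other write path in the application. The document ID and coordinates below are made up:
PUT job-posts/_doc/1
{
  "location": {
    "lat": 41.12,
    "lon": -71.34
  }
}

GET job-posts/_mapping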

How to visualize tile-map Kibana elasticsearch

I'm not sure why my data points aren't visualized in the tile map. I'm dynamically adding the data points through the elasticsearch python client (https://elasticsearch-py.readthedocs.org/en/master/). The visualization keeps returning no results.
Furthermore, here is the initial mapping of the geo_point field:
{
  "mappings": {
    "geo": {
      "properties": {
        "location": {
          "type": "geo_point",
          "geohash": true,
          "geohash_prefix": true
        }
      }
    }
  }
}
EDIT:
If your mapping is not set up correctly, Kibana doesn't let you select the geohash aggregation in the config panel on the left. This rather seems to be a problem with the indexed data.
What does the timestamp in your mapping look like? Is your data recent enough that the selected time range, say the last 15 minutes, would actually return results? Check the time picker at the top right corner.
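To rule out the time filter, you can index a single test point with a timestamp inside your selected range and see whether it appears on the tile map (index name, timestamp and coordinates below are made up):
PUT geo-test/geo/1
{
  "timestamp": "2015-12-01T12:00:00Z",
  "location": {
    "lat": 52.37,
    "lon": 4.89
  }
}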

How to set existing elastic search mapping from index: no to index: analyzed

I am new to Elasticsearch and I want to update an existing mapping under my index. My existing mapping looks like:
"load":{
"mappings": {
"load": {
"properties":{
"customerReferenceNumbers": {
"type": "string",
"index": "no"
}
}
}
}
}
I would like to update this field to be analyzed, so that my 'customerReferenceNumbers' field becomes available for search.
I am trying to run the following query in the Sense plugin to do so:
PUT /load/load/_mapping
{
  "load": {
    "properties": {
      "customerReferenceNumbers": {
        "type": "string",
        "index": "analyzed"
      }
    }
  }
}
but I am getting the following error with this command:
MergeMappingException[Merge failed with failures {[mapper customerReferenceNumbers] has different index values]
Though there is data associated with this mapping, I am unable to understand why Elasticsearch is not allowing me to update the mapping from non-indexed to indexed.
Thanks in advance!!
Elasticsearch doesn't allow this kind of change.
Even if it were possible, you would have to reindex your data for the new mapping to take effect, so it is faster to create a new index with the new mapping and reindex your data into it.
If you can't afford any downtime, take a look at the alias feature which is designed for these use cases.
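A minimal sketch of that alias-based approach, assuming your application queries through an alias rather than a concrete index name (all names below are hypothetical): create the new index with the corrected mapping, reindex into it, then atomically switch the alias.
PUT /load_v2
{
  "mappings": {
    "load": {
      "properties": {
        "customerReferenceNumbers": {
          "type": "string",
          "index": "analyzed"
        }
      }
    }
  }
}

POST /_aliases
{
  "actions": [
    { "remove": { "index": "load_v1", "alias": "load_current" } },
    { "add": { "index": "load_v2", "alias": "load_current" } }
  ]
}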
This is by design. You cannot change the mapping of an existing field in this way. Read more about this at https://www.elastic.co/blog/changing-mapping-with-zero-downtime and https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html.

how to add geo_point type data to elasticsearch from logstash?

I would like to add some custom geo search functions to my program (not GeoIP, which translates an IP address into coordinates). How do I filter custom lat and lng data into Elasticsearch's geo_point format so that I can visualize it on a Kibana tile map?
So, as you may have found out, there is a (somewhat clunky) solution.
Basically, you need to set the mapping of the geo_point field before you can log data that way (I also used the ES Python module directly instead of logging via Logstash, just to be sure).
So how do you set the correct mapping?
Make sure you use a fresh instance of Elasticsearch (or at least that the mapping for the index and the type you will use is not set yet).
Then run from Sense (or use the appropriate curl command):
PUT <index_name>
{
  "mappings": {
    "<type_name>": {
      "properties": {
        "timestamp": {
          "type": "date"
        },
        "message": {
          "type": "string"
        },
        "location": {
          "type": "geo_point"
        }
        <etc.>
      }
    }
  }
}
Now you're golden; just make sure that your geo_points are in a format that ES accepts.
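If you do want to feed the data through Logstash after all, here is a minimal sketch of a filter that reshapes custom coordinate fields into a geo_point-compatible object; the input field names lat and lng are assumptions about your events:
filter {
  # convert the incoming strings to floats so ES accepts them as coordinates
  mutate {
    convert => { "lat" => "float" }
    convert => { "lng" => "float" }
  }
  # nest them under "location", which is mapped as geo_point above
  mutate {
    rename => {
      "lat" => "[location][lat]"
      "lng" => "[location][lon]"
    }
  }
}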
More on mapping geo_points here:
ElasticSearch how to setup geo_point
and here:
https://discuss.elastic.co/t/geo-point-logging-from-python-to-elasticsearch/37336

How to use mapping in elasticsearch?

After processing logs with Logstash, all my fields have the same type (string), so I want to use mapping in Elasticsearch to change the type of some fields, like ip and port. But I don't know how to do it; I'm a complete beginner with Elasticsearch.
Any help?
The first thing to do would be to install the Marvel plugin in Elasticsearch. It allows you to work with the Elasticsearch REST API very easily - to index documents, modify mappings, etc.
Go to the Elasticsearch folder and run:
bin/plugin -i elasticsearch/marvel/latest
Then go to http://localhost:9200/_plugin/marvel/sense/index.html to access Marvel Sense from which you can send commands. Marvel itself provides you with a dashboard about Elasticsearch indices, performance stats, etc.: http://localhost:9200/_plugin/marvel/
In Sense, you can run:
GET /_cat/indices
to learn what indices exist in your Elasticsearch instance.
Let's say there is an index called logstash.
You can check its mapping by running:
GET /logstash/_mapping
Elasticsearch will return a JSON document that describes the mapping of the index. It could be something like:
{
  "logstash": {
    "mappings": {
      "doc": {
        "properties": {
          "Foo": {
            "properties": {
              "x": {
                "type": "string"
              },
              "y": {
                "type": "string"
              }
            }
          }
        }
      }
    }
  }
}
...in this case doc is the document type (collection) in which you index documents. In Sense, you could index a document as follows:
PUT logstash/doc/1
{
  "Foo": {
    "x": "500",
    "y": "200"
  }
}
... that's a command to index the JSON object under the id 1.
Once a document field such as Foo.x has the type string, it cannot be changed to a number. You have to set the mapping first and then reindex.
First delete the index:
DELETE logstash
Then create the index and set the mapping as follows:
PUT logstash

PUT logstash/doc/_mapping
{
  "doc": {
    "properties": {
      "Foo": {
        "properties": {
          "x": {
            "type": "long"
          },
          "y": {
            "type": "long"
          }
        }
      }
    }
  }
}
Now, even if you index a doc with the properties as JSON strings, Elasticsearch will convert them to numbers:
PUT logstash/doc/1
{
  "Foo": {
    "x": "500",
    "y": "200"
  }
}
Search for the new doc:
GET logstash/_search
Notice that the returned document, in the _source field, looks exactly the way you sent it to Elasticsearch; that's on purpose - Elasticsearch always preserves the original doc this way. The properties are indexed as numbers, though. You can run a range query to confirm:
GET logstash/_search
{
  "query": {
    "range": {
      "Foo.x": {
        "gte": 500
      }
    }
  }
}
With respect to Logstash, you might want to set a mapping template for the index pattern logstash-*, since Logstash creates new indices automatically: http://www.elastic.co/guide/en/elasticsearch/reference/1.5/indices-templates.html
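A minimal sketch of such a template, reusing the mapping from above (the template name is arbitrary, and the syntax follows the 1.x template API linked above):
PUT /_template/logstash_foo
{
  "template": "logstash-*",
  "mappings": {
    "doc": {
      "properties": {
        "Foo": {
          "properties": {
            "x": { "type": "long" },
            "y": { "type": "long" }
          }
        }
      }
    }
  }
}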
