Elasticsearch/Kibana: Unindexed field cannot be searched

I'm having trouble querying/filtering data in Kibana on a geo_point field that is indexed.
Here is a relevant section of the mapping template:
"dstGeoLocation": {
"type": "geo_point"
},
"srcGeoLocation": {
"type": "geo_point"
},
Ingestion works fine, since the data ends up in ES and I am able to view it in Kibana (0,0 is the default value that has been assigned).
However, in Kibana, I still get a message that this is an unindexed field and hence is not searchable.
How do I remedy this situation?
I have already tried to:
Remove and reload the index mappings
Remove and recreate the Kibana index pattern (there is no manual refresh in v7.13)
Version of ES and Kibana: 7.13.12

Hi, I just fixed the error you are showing by clicking the small refresh button at the top right in Stack Management > Kibana > Index Patterns > (select/create some pattern).
So give it a try.
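For what it's worth, Kibana determines whether a field is searchable from Elasticsearch's field capabilities API, so you can verify the field outside Kibana first. A minimal check (a sketch; my-index is a hypothetical name, substitute your own index):

GET my-index/_field_caps?fields=dstGeoLocation,srcGeoLocation

If the response reports "searchable": true for the geo_point fields, the mapping is fine and the problem is a stale field list on the Kibana index-pattern side.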

Related

Overwrite/Update Existing Elasticsearch Index Mapping (geo_point) using Kibana

I am trying to update the mapping for a geo_point field in my elasticsearch index but am running into issues. I am using the dev tool console in Kibana.
The data for the geo_point is in a double array format. I am using Spark with the elasticsearch-hadoop-5.3.1.jar, and the data is coming into Elasticsearch/Kibana but remains in a number format, while I need it to be a geo_point.
It seems that I am unable to update the index mapping once it is defined. I've tried using the method below:
PUT my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "my_location": {
          "type": "geo_point"
        }
      }
    }
  }
}
but this results in an "index already exists" exception.
Thanks for any suggestions.
The command you used just tries to create a new index with the mappings mentioned. For more information, read the footnotes in the first example here.
As per the Elasticsearch documentation, updating the mapping of an existing field is not possible.
Updating Field Mappings
In general, the mapping for existing fields cannot be updated. There
are some exceptions to this rule. For instance:
new properties can be added to Object datatype fields.
new multi-fields can be added to existing fields.
the ignore_above parameter can be updated.
As geo_point doesn't fall into any of the cases mentioned above, you cannot modify the mapping of that field.
You might need to reindex the data, as sketched below.
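For example, a minimal sketch of that reindex (assuming ES 5.x, where the _reindex API is available; the name my_index_v2 is hypothetical): create a new index with the correct mapping, then copy the documents over.

PUT my_index_v2
{
  "mappings": {
    "my_type": {
      "properties": {
        "my_location": {
          "type": "geo_point"
        }
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "my_index" },
  "dest":   { "index": "my_index_v2" }
}

Afterwards you can point your application (or an alias) at my_index_v2.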

Grafana cannot aggregate on String fields as it does not recognize keyword field in Elasticsearch

I have an Elasticsearch (5.1.2) data source and am visualizing the data in Kibana and Grafana (4.1.1). For string values in my dataset I am using the keyword feature as described at https://www.elastic.co/guide/en/elasticsearch/reference/5.2/fielddata.html. An example of the mapping for fieldname "CATEGORY":
"CATEGORY": {
"type": "text",
"norms": false,
"fields": {
"keyword": {
"type": "keyword"
}
}
}
In Kibana this works fine, as I can select "fieldname.keyword" when creating visualizations. However, in Grafana the keyword field does not seem to be recognized: I can only select "fieldname" when creating graphs, which displays the message "fielddata is disabled on text fields by default".
Can anyone give any insight as to why the keyword field is not being recognized in Grafana? Setting fielddata=true is an option too, but I would really prefer to get it working using keyword, due to the memory overhead associated with fielddata=true. Thanks!
I found the answer to my question here: http://www.mos-eisley.dk/display/it/Elasticsearch+Dashbord+in+Grafana. You can ignore the parts about setting fielddata=true and instead just query fieldname.keyword when creating the template.
Just a quick note: something that took me too long to realise is that when grouping by term, "fieldname.keyword" will not be available for selection in the dropdown; you simply have to type it in.
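To illustrate why the keyword sub-field works where the text field fails, here is a terms-aggregation sketch (the index name my-index is hypothetical); this is essentially the aggregation a Grafana terms group-by issues under the hood:

POST my-index/_search
{
  "size": 0,
  "aggs": {
    "by_category": {
      "terms": { "field": "CATEGORY.keyword" }
    }
  }
}

Running the same aggregation against CATEGORY (the text field) would trigger the "fielddata is disabled on text fields by default" error quoted above.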

Kibana keeps some fields unindexed

So I have an index in Elasticsearch, and I want to search and visualize it with Kibana. But several fields are not indexed by Kibana and show this bubble:
This field is not indexed thus unavailable for visualization and search.
This is a snippet of one of the fields that is not indexed by Kibana:
"_event_name" : {
"type" : "string"
},
I tried to enter Kibana's index settings and click "Reload field list", but it doesn't help.
Does anyone know what the problem could be?
Thanks in advance
The fields might not be indexed, as mentioned here.
Apparently, Kibana doesn't index fields that start with an underscore.
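If you control the ingestion pipeline, one workaround is to rename such fields before they are indexed. A minimal Logstash sketch (assuming you ingest via Logstash; the target name event_name is hypothetical):

filter {
  mutate {
    # rename the underscore-prefixed field so Kibana treats it as a normal field
    rename => { "_event_name" => "event_name" }
  }
}

New documents will then carry event_name instead of _event_name; existing documents would need a reindex to pick up the change.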
How are you loading the data into Elasticsearch? Logstash? A Beat? curl? Please describe that, and if you can include your config file, that would be good.
You can look at your mapping in your browser with something like this:
http://localhost:9200/logstash-2016.07.20/_mapping?pretty
(change the host and index name)

Will updating "_mappings" reflect any changes in indexed data in Elasticsearch

I didn't find any change in my search results even after updating some fields in my index mapping, so I want to know: will updating "_mappings" re-index existing data in Elasticsearch, or will only data inserted after the update be affected by those index parameters (settings and mappings)?
Example:
Initially I created my index fields as follows:
"fname":{
"type":"string",
"boost":5
}
"lname":{
"type":"string",
"boost":1
}
Then I inserted some data, and it worked fine.
After that, I updated my index mapping as follows:
"fname":{
"type":"string",
"boost":1
}
"lname":{
"type":"string",
"boost":5
}
Still, after updating the boost values in the index, I'm getting the same results. Why?
1: After each and every update of the index (settings and mappings), will Elasticsearch re-index the data again?
2: Do we have differently indexed data in the same item type?
Please clarify.
While you can add fields to the mappings of an index, any other change to already existing fields will either only operate on new documents or fail.
As mentioned in the comments to the question, there is an interesting article about zero-downtime index switching and there is a whole section about index management in the definitive guide.
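A common zero-downtime pattern for such changes (a sketch; the index names my_index_v1/my_index_v2 and the alias my_index are hypothetical) is to reindex into a new index with the desired mappings and then switch an alias atomically:

POST /_aliases
{
  "actions": [
    { "remove": { "index": "my_index_v1", "alias": "my_index" } },
    { "add":    { "index": "my_index_v2", "alias": "my_index" } }
  ]
}

Clients that query through the alias my_index see the new mappings immediately, with no downtime.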

Kibana doesn't show any results in "Discover" tab

I set up Elasticsearch and Kibana for indexing our application (error) logs. The issue is that Kibana doesn't display any data in the "Discover" tab.
Current situation
Elasticsearch is up and running, responds to API
executing a query directly on Elasticsearch, like http://elasticserver.com:9200/applogs/_search?q=*, returns lots of results (see below for what a single record looks like)
Kibana is up and running, even finds the applogs index exposed by Elasticsearch
Kibana also shows the correct properties and data type of the applogs documents
"Discover" tab doesn't show any results...even when setting the time period to a couple of years...
Any ideas??
Here's how Kibana sees the applogs index:
The Elasticsearch query result object looks like this:
{
  "_index": "applogs",
  "_type": "1",
  "_id": "AUxv8uxX6xaLDVAP5Zud",
  "_score": 1,
  "_source": {
    "appUid": "esb.Idman_v4.getPerson",
    "level": "trace",
    "message": "WS stopwatch is at 111ms.",
    "detail": "",
    "url": "",
    "user": "bla bla bla",
    "additionalInfo": "some more info",
    "timestamp": "2015-03-31T15:08:49"
  }
},
...and here is what I see in the Discover tab:
For people who have a problem like this: change the time frame in the top right corner. By default it shows data only for the last 15 minutes.
I wanted to put this as a comment, but unfortunately I am not able to, given my insufficient reputation. So, as #Ngeunpo suggested, this is how you add a time field to an index while creating it. If you did not do that while creating your index, I suggest you delete that index and recreate it. The index name logstash-* in the gif is analogous to your index applogs. In this case, the field #timestamp is added as the time field. Let me know if this works.
EDIT: Image courtesy: This wonderful ELK setup guide
Kibana does not understand the timestamp field if its format is incorrect. The timestamp, which you selected by clicking on "Time-field name" when configuring an index pattern, needs to look like:
"timestamp":"2015-08-05 07:40:20.123"
then you should update your index mapping like this:
curl -XPUT 'http://localhost:9200/applogs/1/_mapping' -d '
{
  "1": {
    "properties": {
      "timestamp": {
        "type": "date",
        "format": "yyyy-MM-dd HH:mm:ss.SSS",
        "store": true
      }
    }
  }
}'
See this question and answer
UPDATE
If you are using ES 2.X, you can set the "format" to "epoch_millis" like this:
curl -XPUT 'http://localhost:9200/applogs/1/_mapping' -d '
{
  "1": {
    "properties": {
      "timestamp": {
        "type": "date",
        "format": "epoch_millis",
        "store": true,
        "doc_values": true
      }
    }
  }
}'
Try this: untick the "Index contains time-based events" checkbox, then provide your index name, and check whether "Discover" contains data or not.
I had the same issue and this worked for me:
Delete the index from the Settings tab.
Restart Kibana.
Then re-add the index in Settings.
Time-series issues can certainly also be a cause, but if no fields at all show up in the Discover tab, then you might have the same issue as the original reporter and as I had.
I had probably the same issue: I saw data in the dashboard, but 0 results in Discover. Going to Management > Index Patterns and clicking the "Refresh field list" button (a button with only a refresh icon) solved it for me.
I had the same issue, and #tAn-'s comment helped me resolve it. Changing the date field to #timestamp did the trick. Thanks!
The next step should be to find out what was wrong with my custom date field.
I had the same problem, but now it's working fine.
The problem was with the #timestamp. I had uploaded the file to Elasticsearch using Logstash, which automatically generates a #timestamp field. Kibana compares the time range with this #timestamp, that is, with when the actual event occurred. Even if I deselect the "Index contains time-based events" option on the add-new-index-pattern page, Kibana will automatically consider the #timestamp field. So adjusting the timeframe in Kibana based on the #timestamp field worked for me.
You can also check by adding an index pattern without a timestamp and deselecting the "Index contains time-based events" option. See what happens: now there won't be any time-frame select option on the Kibana Discover page, and you will most probably get results in the Discover page.
These are all my observations; I'm not sure this solution fits your case, but you may try it.
I am using ES 1.5x, Logstash 1.5.1 and Kibana 4.1.0.
I also experienced the same error. Mostly this happens because of the time format. Basically, make sure you have a valid time frame for your data (top-right filter). Anyway, in my case I used the epoch (seconds) format for the timestamp, but it didn't work, so I changed it to epoch_millis instead and it worked like a charm.
In sum, make sure that Kibana can understand your date-time format. It requires epoch_millis by default, not just epoch.
In my situation, everything was working previously, and then I couldn't see the latest data starting February 1st (actually, I could if I looked back a month). It turned out that the mapping format for my custom time field was incorrect. My mapping format was YYYY-MM-DD'T'HH:mm:ss.SSSZ. The problem was that DD is interpreted as day of the year, while I wanted day of the month, which is dd. Changing the mapping and reindexing fixed the problem.
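For illustration, a corrected mapping sketch (the index, type, and field names here are hypothetical; note dd, day of month, instead of DD, day of year). Since an existing field's format cannot be changed in place, the corrected mapping goes on a new index before reindexing:

PUT my_index_fixed
{
  "mappings": {
    "my_type": {
      "properties": {
        "my_timefield": {
          "type": "date",
          "format": "yyyy-MM-dd'T'HH:mm:ss.SSSZ"
        }
      }
    }
  }
}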
In my case, I set the time from the server log, and that time differed from UTC (the log's time was in the future compared to UTC time). So when I searched logs with a filter of days/months/years ago, there were no logs, because they were timestamped in the future. When I used the Today filter, or included future time, the logs showed up. After changing the time zone, it was fixed.
I had the same issue. So, as shown in one of the solutions above, I went to Settings, deleted the previous index, and made a new one with #timestamp.
But that didn't solve the issue. So I looked into it and saw that, after a deployment, nothing was coming into Kibana.
So I went onto the server and saw that the indexes were corrupted. I just stopped Logstash and Elasticsearch on the instance/server and restarted the services.
And voila, the services restarted successfully and Kibana was back.
Why did it happen?
Someone might have stopped the server abruptly, which caused the indexes to get corrupted.
