Timestamp not appearing in Kibana - elasticsearch

I'm pretty new to Kibana and just set up an instance to look at some ElasticSearch data.
I have one index in Elasticsearch, which has a few fields including _timestamp. When I go to the 'Discover' tab and look at my documents, each has the _timestamp field, but with a yellow warning next to the field saying "No cached mapping for this field". As a result, I can't seem to sort/filter by time.
When I try and create a new index pattern and click on "Index contains time-based events", the 'Time-field name' dropdown doesn't contain anything.
Is there something else I need to do to get Kibana to recognise the _timestamp field?
I'm using Kibana 4.0.

You'll need to take these quick steps first:
Go to Settings → Advanced.
Edit the metaFields and add "_timestamp". Hit save.
Now go back to Settings → Indices and _timestamp will be available in the drop-down list for "Time-field name".

In newer versions you are required to specify the date field before you send your data.
Your date field must be in a standard format, such as milliseconds since the epoch (a long number) or, as suggested by MrE, in ISO 8601.
See more info here: https://www.elastic.co/guide/en/elasticsearch/reference/current/date.html
Again, before you send your data to the index, you must specify the mapping for this field. In Python:
import requests

# Note: the key under "mappings" is the document type, not the index name.
mapping = '{"mappings": {"your_type": {"properties": {"your_timestamp_field": {"type": "date"}}}}}'
requests.put('http://yourserver/your_index', data=mapping)
# ... then index your documents as usual
send_data()

My ES version is 2.2.0.
You have to use the right schema; I followed the guide.
E.g.:
{
  "memory": "integer",
  "geo.coordinates": "geo_point",
  "#timestamp": "date"
}
If you have the #timestamp field mapped as "date", you will see it in the 'Time-field name' dropdown.
PS: if your schema doesn't have a "date" field, do not check "Index contains time-based events".

The accepted answer is obsolete as of Elasticsearch 2.0.
You should use a simple date field in your data and set it explicitly, using either a timestamp or a date string in ISO 8601 format.
https://en.wikipedia.org/wiki/ISO_8601
You also need to set the mapping to date BEFORE you start sending data, apparently.
curl -XPUT 'http://localhost:9200/myindex' -d '{
  "mappings": {
    "my_type": {
      "properties": {
        "date": {
          "type": "date"
        }
      }
    }
  }
}'

Go to Settings → Indices, select your index, and click the yellow "refresh" icon. That will get rid of the warning, and perhaps make the field available in your visualization.

Related

Elasticsearch - check which all fields are indexed in an index?

I ran curl -XGET 'http://localhost:9200/<index_name>/_mappings' and it returned the fields in the index.
This is one of the fields:
{"dob" : { "type": "string", "analyzer" : "my_custom_analyzer"}}
Given the above response, does this mean the dob field is indexed by default? Or does index: true have to be set explicitly for this field to be indexed?
It seems you're using a very old version of Elasticsearch, likely 2.x or earlier.
However, based on the mapping you've shared, string fields are indexed by default. In your case, dob is analyzed by a custom analyzer called my_custom_analyzer, and the resulting tokens are indexed automatically.
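That default can be checked programmatically. A minimal sketch (pure Python, no cluster needed) that applies the rule to a field's mapping: a field is indexed unless it explicitly sets "index" to "no" (2.x) or false (5.x+):

```python
def is_indexed(field_mapping):
    # Elasticsearch indexes fields by default; only an explicit
    # "index": "no" (2.x) or "index": false (5.x+) opts a field out.
    # ("not_analyzed" still means indexed, just not analyzed.)
    return field_mapping.get("index") not in ("no", False)

# The dob field from the question: no "index" key, so the default applies
dob = {"type": "string", "analyzer": "my_custom_analyzer"}
print(is_indexed(dob))                                 # True
print(is_indexed({"type": "string", "index": "no"}))   # False
```

You would feed this the per-field dicts from the _mappings response shown above.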

How to treat certain field values as null in `Elasticsearch`

I'm parsing log files which for simplicity's sake let's say will have the following format :
{"message": "hello world", "size": 100, "forward-to": "127.0.0.1"}
I'm indexing these lines into an Elasticsearch index, where I've defined a custom mapping such that message, size, and forward-to are of type text, integer, and ip respectively. However, some log lines will look like this :
{"message": "hello world", "size": "-", "forward-to": ""}
This leads to parsing errors when Elasticsearch tries to index these documents. For technical reasons, it's very much nontrivial for me to pre-process these documents and change "-" and "" to null. Is there any way to define which values my mapping should treat as null? Is there perhaps an analyzer I can write, working on any field type whatsoever, that I can add to all entries in my mapping?
Basically, I'm looking for somewhat of the opposite of the null_value option: instead of telling Elasticsearch what to turn a null_value into, I'd like to tell it what it should turn into a null_value. Also acceptable would be a way to tell Elasticsearch to simply ignore fields that look a certain way but still parse the other fields in the document.
So this one's easy, apparently. Add the following to your mapping settings:
{
  "settings": {
    "index": {
      "mapping": {
        "ignore_malformed": "true"
      }
    }
  }
}
This will still index the document (contrary to what I had understood from the documentation...), but the malformed field will be ignored during aggregations (so if you have three entries in an integer field whose values are "1", 3, and "hello world", an averaging aggregation will yield 2).
Keep in mind that because of the way the option was implemented (and I would say this is a bug), this still fails for an object entered where a concrete value is expected, and vice versa. If you'd like to get around that, you can set the field's enabled value to false, like this:
{
  "mappings": {
    "my_mapping_name": {
      "properties": {
        "my_unpredictable_field": {
          "enabled": false
        }
      }
    }
  }
}
This comes at a price, though, since it means the field won't be indexed. The values entered are still stored, so you can still access them by finding the document through another field. This usually shouldn't be an issue, as you likely won't be filtering documents based on the value of such an unpredictable field, but that depends on your specific use case. See here for the official discussion of this issue.
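Putting the two workarounds together, the full index-creation body might look like this (a sketch; the field names are the ones from this question, and my_mapping_name is a placeholder type):

```python
import json

# Index-creation body combining both workarounds:
# - index-level ignore_malformed, so "-" in the integer field no longer
#   rejects the whole document
# - enabled: false on the truly unpredictable field, so object-vs-value
#   mismatches don't fail either (at the cost of not indexing it)
body = {
    "settings": {
        "index": {"mapping": {"ignore_malformed": "true"}}
    },
    "mappings": {
        "my_mapping_name": {
            "properties": {
                "message": {"type": "text"},
                "size": {"type": "integer"},
                "my_unpredictable_field": {"enabled": False},
            }
        }
    },
}

print(json.dumps(body, indent=2))
```

You would PUT this body to the index URL (with curl or the requests library) before indexing the first document, since neither setting rescues documents that were already rejected.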

remove field and add new field to the mapping in elastic search index

I made a mistake at the time of first index creation for one field: instead of the "integer" data type, I mistakenly assigned "string" to the field "rating", even though the data stored in the field was integers only. When I try to calculate the average rating aggregation, it throws an error because of the string data type.
Is there a way to change the data type of the field without reindexing?
If that's not possible, how can I remove the rating field and add a new rating field with the "integer" data type?
Please help me resolve this issue.
Update
Deleted the type in the index using the command below:
curl -XDELETE 'http://localhost:9200/feedbacks_16/responses'
Then I recreated the type with the same name, changed the data type of my rating field, and reindexed the entire data set. Everything went fine through reindexing, but the avg query is still not working.
Below is the error I'm getting :
{ "error": "SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[2NsGsPYLR2eP9p2zYSnKGQ][feedbacks-16][0]: ClassCastException[org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexNumericFieldData]}{[2NsGsPYLR2eP9p2zYSnKGQ][feedbacks_16][1]: ClassCastException[org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexNumericFieldData]}{[pcwXn3X-TceUO0ub29ZFgA][feedbacks_16][2]: RemoteTransportException[[tsles02][inet[/localhost:9300]][indices:data/read/search[phase/query]]]; nested: ClassCastException; }{[pcwXn3X-TceUO0ub29ZFgA][feedbacks_16][3]: RemoteTransportException[[tsles02][inet[/localhost:9300]][indices:data/read/search[phase/query]]]; nested: ClassCastException; }{[2NsGsPYLR2eP9p2zYSnKGQ][feedbacks_16][4]: ClassCastException[org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexNumericFieldData]}]", "status": 500 }
A mapping cannot be updated, aside from a few exceptions:
you can add new properties
you can promote a simple field to a multi-field
you can disable doc_values (but not enable them)
you can update the ignore_above parameter
So if you wish to transform your rating field from string to integer without recreating a new index, the only solution you have is to create a sub-field (e.g. called rating.int) of type integer.
Note that you'll still have to reindex your data in order to populate that new sub-field, though. However, if you do so, you'd be much better off simply re-creating a clean index from scratch and re-populating it.
1) You can change the data type of a field without reindexing, but it won't affect your existing data: since documents are stored in immutable segments, the rating field will remain a string in them. Newly added documents will get the integer type, but again, that won't solve your problem.
2) You could delete all documents from your current index and then change the mapping with the PUT mapping API, like this:
$ curl -X PUT 'http://localhost:9200/your_index/your_type/_mapping?ignore_conflicts=true' -d '{
  "your_type": {
    "properties": {
      "rating": {
        "type": "integer"
      }
    }
  }
}'
and then reindex.
A better approach, though, would be to create a new index with the above mapping and reindex into it with zero downtime.
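When re-populating the new index, the only per-document change needed is casting rating back to an integer. A minimal sketch of that transform (the helper name is made up; the field name comes from the question):

```python
def fix_rating(doc):
    # Cast the mistyped string rating to an integer so the document
    # matches the corrected "integer" mapping in the new index.
    fixed = dict(doc)
    fixed["rating"] = int(fixed["rating"])
    return fixed

old_doc = {"comment": "great product", "rating": "4"}
print(fix_rating(old_doc))  # {'comment': 'great product', 'rating': 4}
```

Each document pulled from the old index (for example via a scroll search) would go through fix_rating before being bulk-indexed into the new index.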

ElasticSearch sorting doesn't work

I am trying to sort by a "timestamp" field (which is not the default "_timestamp").
The "timestamp" field stores a milliseconds-since-epoch unix time in:
"_source": {
...
"timestamp": "1381256450000"
...
}
On the screenshot (not included here) you can see this value under 'sort' (right side). The last record should definitely be at the top of the results, but it isn't.
From the Elasticsearch website:
By default, the search request will fail if there is no mapping associated with a field. The ignore_unmapped option allows to ignore fields that have no mapping and not sort by them.
You have to map the field if you want to use it to sort the results.
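Once the field is mapped as a date, the sort clause itself is straightforward. A sketch of the search body (field name from the question; ignore_unmapped as described in the quoted docs):

```python
import json

# Search body sorting on the date-mapped "timestamp" field, newest first.
# ignore_unmapped keeps the request from failing on shards that lack
# the mapping, as the quoted documentation describes.
search_body = {
    "query": {"match_all": {}},
    "sort": [
        {"timestamp": {"order": "desc", "ignore_unmapped": True}}
    ],
}

print(json.dumps(search_body))
```

This body would be POSTed to the index's _search endpoint.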

ElasticSearch index unix timestamp

I have to index documents containing a 'time' field whose value is an integer representing the number of seconds since epoch (aka unix timestamp).
I've been reading ES docs and have found this:
http://www.elasticsearch.org/guide/reference/mapping/date-format.html
But it seems that if I want to submit unix timestamps and want them stored in a 'date' field (integer field is not useful for me) I have only two options:
Implement my own date format
Convert to a supported format at the sender
Is there any other option I missed?
Thanks!
If you supply a mapping that tells ES the field is a date, it can use epoch millis as an input. If you want ES to auto-detect you'll have to provide ISO8601 or other discoverable format.
Update: I should also note that you can influence what strings ES will recognize as dates in your mapping. http://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-date-format.html
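If you convert at the sender instead, both supported forms are one line each from a unix timestamp (a standard-library sketch; the sample value is arbitrary):

```python
from datetime import datetime, timezone

unix_seconds = 1381256450  # arbitrary sample value

# Option 1: epoch millis, accepted once the field is mapped as "date"
epoch_millis = unix_seconds * 1000

# Option 2: an ISO 8601 string, which Elasticsearch can auto-detect
iso8601 = datetime.fromtimestamp(unix_seconds, tz=timezone.utc).isoformat()

print(epoch_millis)  # 1381256450000
print(iso8601)       # 2013-10-08T18:20:50+00:00
```

Either value can then be sent in the document as-is.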
If, as I expect, you want to use Kibana and visualize according to the time of a log entry, you will need at least one field to be a date field.
Please note that you have to set the field as date type BEFORE you input any data into the /index/type. Otherwise it will be stored as long and unchangeable.
Simple example that can be pasted into the marvel/sense plugin:
# Make sure the index isn't there
DELETE /logger
# Create the index
PUT /logger
# Add the mapping of properties to the document type `mem`
PUT /logger/_mapping/mem
{
  "mem": {
    "properties": {
      "timestamp": {
        "type": "date"
      },
      "free": {
        "type": "long"
      }
    }
  }
}
# Inspect the newly created mapping
GET /logger/_mapping/mem
Run each of these commands in sequence.
Generate free mem logs
Here is a simple script that echoes to your terminal and logs to your local Elasticsearch:
while (( 1==1 )); do
  memfree=`free -b|tail -n 1|tr -s ' ' ' '|cut -d ' ' -f4`
  echo $memfree
  curl -XPOST "localhost:9200/logger/mem" -d "{ \"timestamp\": `date +%s%3N`, \"free\": $memfree }"
  sleep 1
done
Inspect data in elastic search
Paste this in your marvel/sense
GET /logger/mem/_search
Now you can move to Kibana and do some graphs. Kibana will autodetect your date field.
