Why are my parser types not picked up in my elastic index? - elasticsearch

I'm using fluent-bit to forward logs to an elastic db. All my fields are being indexed in elastic under the default string type but I want some fields indexed as numbers.
I've attempted to set the types in my fluent-bit config by adding a types entry to both the docker parser and the json parser (not sure which one is being used here, these are container logs from a k8s cluster):
[PARSER]
    Name        json
    Format      json
    Time_Key    time
    Time_Format %d/%b/%Y:%H:%M:%S %z
    Types       my_float_field:float my_integer_field:integer

[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L
    Time_Keep   On
    Types       my_float_field:float my_integer_field:integer
But these fields continue to appear as string types in fresh Elasticsearch indexes under the ids log_processed.my_float_field and log_processed.my_integer_field. I'm sure I'm doing something obviously wrong, but I can't see what.
Any pointers would be greatly appreciated.

Use Elasticsearch index templates.
AFAIK the JSON parser plugin doesn't support the Types parameter. It keeps the original JSON data types, so if my_float_field and my_integer_field contain quoted values, the JSON parser will interpret them as strings as well. See this example from the docs:
A simple configuration that can be found in the default parsers
configuration file, is the entry to parse Docker log files (when the
tail input plugin is used):
[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S %z
The following log entry is a valid content for the parser defined
above:
{"key1": 12345, "key2": "abc", "time": "2006-07-28T13:22:04Z"}
After processing, its internal representation will be:
[1154103724, {"key1"=>12345, "key2"=>"abc"}]
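The distinction matters because JSON already carries type information: a quoted value is a string regardless of its content, and the parser passes that through unchanged. A quick local illustration using Python's json module (the field names are just the placeholders from the question):

```python
import json

# A quoted value stays a string after parsing; an unquoted one becomes a number.
record = json.loads('{"my_float_field": "3.14", "my_integer_field": 42}')

print(type(record["my_float_field"]).__name__)    # str -> indexed as text/keyword
print(type(record["my_integer_field"]).__name__)  # int -> indexed as a number
```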
If you are using Logstash format in Elasticsearch output plugin, you can define an Elasticsearch index template containing the desired type mapping. The template will be applied to each newly created index (not to existing indices). The format changed in Elasticsearch 7, so be sure to check the correct documentation version. For the 7.7 version:
PUT _template/template_1
{
  "index_patterns": ["fluent-bit-*"],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "properties": {
      "my_float_field": {
        "type": "float"
      },
      "my_integer_field": {
        "type": "integer"
      }
    }
  }
}
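Note that index_patterns is a wildcard match against index names at creation time, which is why the template only affects freshly created indices. A rough sketch of the matching rule (the index names here are made up):

```python
from fnmatch import fnmatch

# The pattern from the template above.
pattern = "fluent-bit-*"

# Only indices whose names match the pattern when they are created get the mapping.
for index in ["fluent-bit-2020.05.01", "logstash-2020.05.01"]:
    print(index, fnmatch(index, pattern))
```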

Related

Unexpected result using Elasticsearch when dash character is involved

I'm querying Elasticsearch 2.3 using django-haystack, and the query that is executed seems to be the following:
'imaging_telescopes:(*\\"FS\\-60\\"*)'
An object in my Elasticsearch data has the following value for its property imaging_telescopes: "Takahashi FSQ-106N".
This object matches the query, and to me this result is unexpected; I wouldn't want it to match.
My assumption is that it matches because it contains the letters FS, but in my frontend I'm just searching for "FS-60".
How can I modify the query so that it's stricter in looking for objects whose property imaging_telescopes exactly contains some text?
Thanks!
EDIT: this is the mapping of the field:
"imaging_telescopes": {
  "type": "string",
  "analyzer": "snowball"
}

Filebeat date field mapped as type keyword

Filebeat is reading logs from a file, where logs are in the following format:
{"logTimestamp":"2019-11-29T16:39:43.027Z","#version":"1","message":"Hello world","logger_name":"se.lolotron.App","thread_name":"thread-1","level":"INFO","level_value":40000,"application":"my-app"}
So there is a field logTimestamp logged in ISO 8601 time format.
The problem is that this field is mapped as a keyword in the Elasticsearch Filebeat index:
"logTimestamp": {
  "type": "keyword",
  "ignore_above": 1024
},
On the other hand if I index a similar document in the same Elasticsearch instance but different index, e.g.
POST /new_index/_doc/
{
  "message": "hello world",
  "logTimestamp": "2019-11-29T16:39:43.027Z"
}
The mapping is
"logTimestamp": {
  "type": "date"
},
According to docs here and here by default Elastic should detect a date if formatted with strict_date_optional_time. And strict_date_optional_time is described as
A generic ISO datetime parser where the date is mandatory and the time
is optional.
Which I presume is ISO 8601 and think I proved that with indexing a new doc to new_index in the example above.
Why is logTimestamp saved as keyword in the case of Filebeat? Any ideas?
I'm using Filebeat 7.2.1, Elasticsearch 7.2.1.
Also the default fields.yml is used.
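For what it's worth, the logTimestamp value above does parse as plain ISO 8601, which is what strict_date_optional_time accepts. A quick local sanity check with Python's stdlib (not part of the Filebeat setup, just verifying the format):

```python
from datetime import datetime

# "Z" is accepted by %z since Python 3.7, so this is straightforward ISO 8601 parsing.
ts = datetime.strptime("2019-11-29T16:39:43.027Z", "%Y-%m-%dT%H:%M:%S.%f%z")
print(ts.year)  # 2019
```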
I just found out that date_detection is disabled for filebeat indices by default (Filebeat version 7.2.1).
This can be seen here
var (
    // Defaults used in the template
    defaultDateDetection = false
    ...
Does not look like it can be overridden.
The workaround for this is to use the experimental feature append_fields (experimental at least at the time of writing this post; see here for more) and add the following to the filebeat.yml config:
setup.template.overwrite: true
setup.template.append_fields:
  - name: logTimestamp
    type: date
This will make sure that the mapping for logTimestamp is date.

Logstash inserting dates as strings instead of dateOptionalTime

I have an Elasticsearch index with the following mapping:
"pickup_datetime": {
  "type": "date",
  "format": "dateOptionalTime"
}
Here is an example of a date contained in the file that is being read in
"pickup_datetime": "2013-01-07 06:08:51"
I am using Logstash to read and insert data into ES with the following lines to attempt to convert the date string into the date type.
date {
  match => [ "pickup_datetime", "yyyy-MM-dd HH:mm:ss" ]
  target => "pickup_datetime"
}
But the match never seems to occur.
What am I doing wrong?
It turns out the date filter was before the csv filter, where the columns get named, hence the date filter was not finding the pickup_datetime column since it had not yet been named.
It might be a good idea for the documentation to clearly state that filters are applied sequentially, to save others from similar problems in the future.
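For illustration, a working filter order looks roughly like this (the column names are hypothetical; the point is only that csv runs before date):

filter {
  csv {
    # Names the columns first, creating the pickup_datetime field...
    columns => [ "pickup_datetime", "dropoff_datetime" ]
  }
  date {
    # ...so the date filter can now find and convert it.
    match => [ "pickup_datetime", "yyyy-MM-dd HH:mm:ss" ]
    target => "pickup_datetime"
  }
}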

Timestamp not appearing in Kibana

I'm pretty new to Kibana and just set up an instance to look at some ElasticSearch data.
I have one index in Elastic Search, which has a few fields including _timestamp. When I go to the 'Discover' tab and look at my documents, each have the _timestamp field but with a yellow warning next to the field saying "No cached mapping for this field". As a result, I can't seem to sort/filter by time.
When I try and create a new index pattern and click on "Index contains time-based events", the 'Time-field name' dropdown doesn't contain anything.
Is there something else I need to do to get Kibana to recognise the _timestamp field?
I'm using Kibana 4.0.
You'll need to take these quick steps first:
Go to Settings → Advanced.
Edit the metaFields and add "_timestamp". Hit save.
Now go back to Settings → Indices and _timestamp will be available in the drop-down list for "Time-field name".
In newer versions you are required to specify the date field before you send your data.
Your date field must be in a standard format such as milliseconds since epoch (a long number) or, as suggested by MrE, ISO 8601.
See more info here: https://www.elastic.co/guide/en/elasticsearch/reference/current/date.html
Again, before you send your data to the index, you must specify the mapping for this field. In python:
import requests

# In ES 2.x the key under "mappings" is the document type, not the index name
mapping = '{"mappings": {"your_type": {"properties": {"your_timestamp_field": {"type": "date"}}}}}'
requests.put('http://yourserver/your_index', data=mapping)
...
send_data()
My es version is 2.2.0.
You have to use the right schema.
I followed the guide. E.g.:
{
  "memory": "integer",
  "geo.coordinates": "geo_point",
  "#timestamp": "date"
}
If you have the #timestamp field mapped as a date, you will see it in the "Time-field name" dropdown.
PS: if your schema doesn't have a "date" field, do not check "Index contains time-based events".
The accepted answer is obsolete as of Kibana 2.0.
You should use a simple date field in your data and set it explicitly using either a timestamp or a date string in ISO 8601 format.
https://en.wikipedia.org/wiki/ISO_8601
You also need to set the mapping to date BEFORE you start sending data, apparently.
curl -XPUT 'http://localhost:9200/myindex' -d '{
  "mappings": {
    "my_type": {
      "properties": {
        "date": {
          "type": "date"
        }
      }
    }
  }
}'
Go to Settings->Indices, select your index, and click the yellow "refresh" icon. That will get rid of the warning, and perhaps make the field available in your visualization.

ElasticSearch index unix timestamp

I have to index documents containing a 'time' field whose value is an integer representing the number of seconds since epoch (aka unix timestamp).
I've been reading ES docs and have found this:
http://www.elasticsearch.org/guide/reference/mapping/date-format.html
But it seems that if I want to submit unix timestamps and want them stored in a 'date' field (integer field is not useful for me) I have only two options:
Implement my own date format
Convert to a supported format at the sender
Is there any other option I missed?
Thanks!
If you supply a mapping that tells ES the field is a date, it can use epoch millis as an input. If you want ES to auto-detect you'll have to provide ISO8601 or other discoverable format.
Update: I should also note that you can influence what strings ES will recognize as dates in your mapping. http://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-date-format.html
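Option 2 (converting at the sender) is usually the least effort: a date-mapped field accepts epoch milliseconds out of the box, so the sender only has to multiply. A minimal sketch in Python (the field name is illustrative):

```python
# A unix timestamp in seconds, as described in the question.
unix_seconds = 1357538931

# Elasticsearch date fields accept epoch milliseconds by default,
# so converting at the sender is a single multiplication.
epoch_millis = unix_seconds * 1000

doc = {"time": epoch_millis}
print(doc["time"])  # 1357538931000
```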
In case you want to use Kibana, which I expect, and visualize according to the time of a log/entry you will need at least one field to be a date field.
Please note that you have to set the field as date type BEFORE you input any data into the /index/type. Otherwise it will be stored as long and unchangeable.
Simple example that can be pasted into the marvel/sense plugin:
# Make sure the index isn't there
DELETE /logger
# Create the index
PUT /logger
# Add the mapping of properties to the document type `mem`
PUT /logger/_mapping/mem
{
  "mem": {
    "properties": {
      "timestamp": {
        "type": "date"
      },
      "free": {
        "type": "long"
      }
    }
  }
}
# Inspect the newly created mapping
GET /logger/_mapping/mem
Run each of these commands in sequence.
Generate free mem logs
Here is a simple script that echoes free memory to your terminal and logs it to your local elasticsearch:
while true; do
  memfree=`free -b | tail -n 1 | tr -s ' ' | cut -d ' ' -f4`
  echo $memfree
  curl -XPOST "localhost:9200/logger/mem" -d "{ \"timestamp\": `date +%s%3N`, \"free\": $memfree }"
  sleep 1
done
Inspect data in elastic search
Paste this in your marvel/sense
GET /logger/mem/_search
Now you can move to Kibana and do some graphs. Kibana will autodetect your date field.
