ElasticSearch MetricBeat mapping issue

I have installed Metricbeat on my Windows system and started it. In the configuration file metricbeat.yml I have set the Elasticsearch output as follows:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.193.164.145:9200"]
  template.name: "metricbeat"
  template.path: "metricbeat.template.json"
  template.overwrite: false
Now when I start Metricbeat, I repeatedly get this message in the logs:
Can not index event (status=400): "MapperParsingException[mapping [default]]; nested: MapperParsingException[No handler for type [keyword] declared on field [hostname]]; "
What is the issue here?
Is it due to compatibility? My Elasticsearch version is 1.4.x and my Metricbeat version is 5.5.x.
Please do let me know.

Elasticsearch 1.4 doesn't seem to be supported anymore.
https://discuss.elastic.co/t/metricbeat-compatibility-with-elasticsearch/99213
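The error itself points the same way: the keyword field type was introduced in Elasticsearch 5.0, so a 1.4.x cluster has no mapping handler for it when the Metricbeat 5.5.x template is loaded. As a rough illustration (not the actual Metricbeat template), the same "not analyzed string" intent is expressed differently in the two versions:
  "hostname": { "type": "keyword" }                            (Elasticsearch 5.x)
  "hostname": { "type": "string", "index": "not_analyzed" }    (Elasticsearch 1.x)
So the practical fix is upgrading the cluster, not editing the template.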

I don't think there is any support matrix right now that covers the Elasticsearch 1.x series with 5.x Metricbeat, but you can cross-check the compatibility matrix here:
product compatibility matrix
You can also check the document below for reference; I'm not sure whether it helps with your problem:
elastic product end of life dates

How to cast a field in Elasticsearch pipelines / Painless script

I have an application which logs level as an integer. I am using Filebeat to send the logs to ES. I have set level as a string in the ES index, which works for most of the applications. But when Filebeat receives an integer, indexing of course fails with:
"type":"illegal_argument_exception","reason":"field [level] of type [java.lang.Integer] cannot be cast to [java.lang.String]"
In my document: "level":30
I added a Script step to my ingestion pipeline, but I can't manage to make it work: either I get a compilation error, or the script somehow fails and nothing at all gets indexed.
Some very basic script I tried:
if (doc['level'].value == 30) {
doc['level'].value = 'info';
}
Any idea on how to handle this in ES pipelines?
Regards
The best way is to transform the data before sending it to ES.
You can use a processor in Filebeat to transform your data:
https://www.elastic.co/guide/en/beats/filebeat/current/defining-processors.html
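A minimal sketch of that idea, assuming a Beats version recent enough to ship the convert processor (7.0+); the field name comes from the question, everything else is illustrative:
  processors:
    - convert:
        fields:
          # "level":30 becomes "level":"30"
          - {from: "level", type: "string"}
        ignore_missing: true
        fail_on_error: false
If you would rather keep the fix in the ingest pipeline, note that ingest scripts address the document as ctx, not doc[...] (doc is for search-time scripts), so the script above would be written as if (ctx.level == 30) { ctx.level = 'info'; }.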

Fluent Bit: Logstash_Prefix_Key is not working as expected with 'es' output plugin

I am trying to look up a key from a record and use it as the Logstash prefix in Fluent Bit, but that's not happening: Logstash_Prefix is not being replaced by the value of Logstash_Prefix_Key, even though the specified key exists in the log enriched by the kubernetes filter.
The expected behaviour of the kubernetes filter is to enrich the logs read by the input plugin with Kubernetes data such as the pod name, pod id, namespace name, etc.; the filtered logs are then pushed to the output via the es output plugin. I set Logstash_Prefix_Key to the key kubernetes.pod_name and set Logstash_Prefix to pod_name. Even though I can see the kubernetes.pod_name key in Kibana, the logs are being stored under the prefix pod_name (which means Logstash_Prefix_Key is not found in the log records, so the es plugin falls back to Logstash_Prefix).
Code sample
input-kubernetes.conf: |
[INPUT]
Name tail
Tag kube.*
Path /var/log/containers/*.log
Parser docker
DB /var/log/flb_kube.db
Mem_Buf_Limit 2GB
Skip_Long_Lines On
Refresh_Interval 10
filter-kubernetes.conf: |
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc.cluster.local:443
Merge_Log Off
K8S-Logging.Parser On
output-elasticsearch.conf: |
[OUTPUT]
Name es
Match kube.*
Host ${FLUENT_ELASTICSEARCH_HOST}
Port ${FLUENT_ELASTICSEARCH_PORT}
HTTP_User ${FLUENT_ELASTICSEARCH_USER}
HTTP_Passwd ${FLUENT_ELASTICSEARCH_PASSWORD}
Logstash_Format On
Logstash_Prefix pod_name
Logstash_Prefix_Key kubernetes.pod_name
Retry_Limit False
Since I am new to the EFK stack, could someone help me with this?
Dynamic Elasticsearch indexes are not supported in Fluent Bit at the moment. Here's a related issue: https://github.com/fluent/fluent-bit/issues/421. You can only specify hardcoded string prefixes.
The workaround is to use a fluentd log collector instead, which supports dynamic indexes: https://docs.fluentd.org/output/elasticsearch#index_name-optional. There's a community chart for it: https://github.com/helm/charts/tree/master/stable/fluentd
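As a rough sketch of that fluentd workaround (assuming fluent-plugin-elasticsearch; placeholders that reference record keys must also be listed as buffer chunk keys, so treat this as a starting point rather than a drop-in config):
  <match kube.**>
    @type elasticsearch
    host elasticsearch
    port 9200
    # record-accessor placeholder, resolvable because it is a chunk key below
    index_name ${$.kubernetes.pod_name}
    <buffer tag, $.kubernetes.pod_name>
      @type memory
    </buffer>
  </match>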
UPD: it's now supported! https://github.com/fluent/fluent-bit/issues/421#issuecomment-766912018
Should be in Fluent Bit v1.7 release!
I was trying to do the same recently, and what Max Lobur said above is true: Fluent Bit has no support for this prior to the not-yet-released version 1.7. However, I was still able to achieve this with the current version using the nest filter. See https://docs.fluentbit.io/manual/pipeline/outputs/elasticsearch, which says under Logstash_Prefix_Key:
When included: the value in the record that belongs to the key will be looked up and over-write the Logstash_Prefix for index generation. If the key/value is not found in the record then the Logstash_Prefix option will act as a fallback. Nested keys are not supported (if desired, you can use the nest filter plugin to remove nesting)
The last sentence says nested keys are not supported; however, you can still use them if you apply the nest filter to lift them up a level.
In your case pod_name is nested under kubernetes, so to be able to use it you have to lift it out of that level (see the nest example in the Fluent Bit docs).
Here's how to make it work in your case:
filter-kubernetes.conf: |
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc.cluster.local:443
Merge_Log Off
K8S-Logging.Parser On
[FILTER]
Name nest
Match *
Operation lift
Nested_under kubernetes
Add_prefix kubernetes_
output-elasticsearch.conf: |
[OUTPUT]
Name es
Match kube.*
Host ${FLUENT_ELASTICSEARCH_HOST}
Port ${FLUENT_ELASTICSEARCH_PORT}
HTTP_User ${FLUENT_ELASTICSEARCH_USER}
HTTP_Passwd ${FLUENT_ELASTICSEARCH_PASSWORD}
Logstash_Format On
Logstash_Prefix pod_name
Logstash_Prefix_Key kubernetes_pod_name
Retry_Limit False
What we are doing here is lifting everything inside the kubernetes object up a level and prefixing it with kubernetes_, so your pod_name becomes kubernetes_pod_name. You then pass kubernetes_pod_name to Logstash_Prefix_Key. The value of kubernetes_pod_name is then used for index generation, and the es plugin only falls back to Logstash_Prefix if no key/value pair exists for kubernetes_pod_name.
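As a concrete illustration (pod name invented): with Logstash_Format On, Fluent Bit appends the date to the resolved prefix, so a record carrying kubernetes_pod_name=nginx-abc would land in an index like nginx-abc-2021.01.25, while a record missing that key would fall back to pod_name-2021.01.25.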
You can use:
Logstash_Prefix_Key kubernetes['pod_name']
This is working on my machine using the Docker image fluent/fluent-bit:1.7.

Apache NiFi: PutElasticSearchHttp is not working, with blank error

I currently have Elasticsearch version 6.2.2 and Apache NiFi version 1.5.0 running on the same machine. I'm trying to follow the NiFi example at https://community.hortonworks.com/articles/52856/stream-data-into-hive-like-a-king-using-nifi.html, except instead of storing to Hive, I want to store to Elasticsearch.
Initially I tried using the PutElasticsearch5 processor but I was getting the following error on Elasticsearch:
Received message from unsupported version: [5.0.0] minimal compatible version is: [5.6.0]
When I tried Googling this error message, it seemed like the consensus was to use the PutElasticsearchHttp processor. (Screenshots of the flow and of the PutElasticsearchHttp processor configuration are omitted here.)
When the flowfile gets to the PutElasticsearchHttp processor, the following error shows up:
PutElasticSearchHttp failed to insert StandardFlowFileRecord into Elasticsearch due to , transferring to failure.
It seems like the reason is blank/null. There also wasn't anything in the Elasticsearch log.
After the ConvertAvroToJSON, the data is a JSON array with all of the entries on a single line. Here's a sample value:
{"City": "Athens",
"Edition": 1896,
"Sport": "Aquatics",
"sub_sport": "Swimming",
"Athlete": "HAJOS, Alfred",
"country": "HUN",
"Gender": "Men",
"Event": "100m freestyle",
"Event_gender": "M",
"Medal": "Gold"}
Any ideas on how to debug/solve this problem? Do I need to create anything in Elasticsearch first? Is my configuration correct?
I was able to figure it out. After the ConvertAvroToJSON, the flow file was a single line containing a JSON array of records. Since I wanted to store the individual records, I needed a SplitJson processor between ConvertAvroToJSON and PutElasticsearchHttp. (Screenshots of the updated flow and of the SplitJson configuration are omitted here.)
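For reference (the original screenshot is unavailable, so this is a typical configuration rather than the asker's exact one): SplitJson splits a flow file on a JsonPath expression, and for a top-level JSON array something like this is common:
  SplitJson
    JsonPath Expression: $.*    (emit one flow file per array element)
Verify the expression against the actual payload shape.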
The index name cannot contain the / character. Try with a valid index name: e.g. sports.
I had a similar flow, wherein changing the type to _doc did the trick, after including SplitJson.

Elastic Search and Spark

I am trying to set up Spark and Elasticsearch using the elasticsearch-spark library with the sbt artifact "org.elasticsearch" %% "elasticsearch-spark" % "2.3.2". When I try to configure Elasticsearch with this code:
val sparkConf = new SparkConf().setAppName("test").setMaster("local[2]")
.set("es.index.auto.create", "true")
.set("es.resource", "test")
.set("es.nodes", "test.com:9200")
I keep getting the error illegal character on all of the Elasticsearch set statements above. Does anyone know the issue?
You must have copied the code from a website or blog; it contains unreadable (non-ASCII) characters, such as smart quotes, that are actually giving you trouble.
Simple solution: delete the content, retype it manually one line at a time, and run it. Let me know if you face any problems again; I will help you out.
You might want to set http.publish_host in your elasticsearch.yml to HOST_NAME. The es-hadoop connector sniffs the nodes from the _nodes/transport API, so it checks what the published HTTP address is.
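A rough sketch of that setting (the hostname is a placeholder, not taken from the question):
  # elasticsearch.yml
  network.host: 0.0.0.0
  # the address the node advertises to HTTP clients such as es-hadoop
  http.publish_host: test.com
After restarting the node, the _nodes/transport API mentioned above shows which publish addresses are actually advertised.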

debugging elasticsearch

I'm using Tire and Elasticsearch. The service has started on port 9200, but it was returning two errors:
"org.elasticsearch.search.SearchParseException: [countries][0]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"query":{"query_string":{"query":"name:"}}}]]"
and
"Caused by: org.apache.lucene.queryParser.ParseException: Cannot parse 'name:': Encountered "<EOF>" at line 1, column 5."
So, I reinstalled elasticsearch and the service container. Service starts fine.
Now, when I search using Tire, I get no results when results should appear, and I don't receive any error messages.
Does anybody have any idea how I might find out what is wrong, let alone fix it?
First of all, you don't need to reindex anything, in the usual cases. It depends how you installed and configured Elasticsearch, but when you install and upgrade e.g. with Homebrew, the data is persisted safely.
Second, there's no need to reinstall anything. The error you're seeing means just what it says on the tin: a SearchParseException, i.e. your query is invalid:
{"query":{"query_string":{"query":"name:"}}}
Notice that you didn't pass any query string for the name qualifier. You have to pass something, e.g.:
{"query":{"query_string":{"query":"name:foo"}}}
or, in Ruby terms:
Tire.index('test') { query { string "name:hey" } }
See this update to the Railscasts episode on Tire for an example how to catch errors due to incorrect Lucene queries.
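If the failure is silent on the Ruby side, a sketch along these lines (assuming Tire's Tire::Search::SearchRequestFailed exception; adjust to your Tire version) surfaces bad Lucene queries instead of returning nothing:
  require 'tire'

  def search_countries(term)
    # Guard against the empty "name:" query from the error above
    return [] if term.to_s.strip.empty?

    Tire.search('countries') { query { string "name:#{term}" } }.results
  rescue Tire::Search::SearchRequestFailed => e
    # Invalid Lucene syntax ends up here instead of failing silently
    warn "search failed: #{e.message}"
    []
  end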
