I am working on a Filebeat project for indexing logs in JSON format.
I see in the configuration that there is the option json.message_key: message
I don't really understand what it is for; if I remove it, I see no change.
Can someone explain it to me?
Logs are in this format:
{"appName" : "blala", "version" : "1.0.0", "level":"INFO", "message": "log message"}
message is the default key for the raw content of a line.
So if you remove it from the config, Filebeat will still use message, and downstream grok patterns will still be applied to it.
If you change it to "not-a-message", you should see a difference, but you should not do that, as every automation depends on it.
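For context, a minimal 6.x-era prospector sketch; the path and the keys_under_root/add_error_key choices are illustrative, not from the question. Per the Filebeat docs, json.message_key names the JSON field that line filtering and multiline settings are applied to, which is why removing it shows no visible change unless those features are in use:

filebeat.prospectors:
  - type: log
    paths:
      - /var/log/app/*.json      # illustrative path
    json.keys_under_root: true   # lift appName, version, level, message to the top level
    json.add_error_key: true     # add an error key when JSON decoding fails
    json.message_key: message    # the field that line filtering and multiline act on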
I'm trying to extract two fields from my unstructured logs with Logstash. My log messages look like this:
[2/9/2022 7:32:16 PM] logmessage
I have this Grok:
grok {
  match => { "message" => "\[(?<app_log_date>\d{1,2}/\d{1,2}/\d{4} (1[0-2]|0?[1-9]):[0-5][0-9]:[1-9][0-9] (AM|PM))\] %{GREEDYDATA:app_message}" }
}
When I put this in the Grok debugger, it works perfectly fine, but when I put it in my logstash.conf, it produces malformed messages in my Elasticsearch output and a _grokparsefailure. Any idea what I'm doing wrong here? Do I need to escape the brackets?
I just checked my logs in the morning and it looks like they are getting parsed correctly! Not sure if it was my test logs that I was forcing via VS Code or what, but it is working as intended now.
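A hedged observation, not from the original thread: the seconds portion of the pattern above, [1-9][0-9], never matches seconds 00 through 09, so lines logged in the first ten seconds of any minute would throw _grokparsefailure while all others parse fine, which looks exactly like intermittent failure. A sketch with the conventional [0-5][0-9] seconds range:

grok {
  # same pattern, with the seconds range corrected from [1-9][0-9] to [0-5][0-9]
  match => { "message" => "\[(?<app_log_date>\d{1,2}/\d{1,2}/\d{4} (1[0-2]|0?[1-9]):[0-5][0-9]:[0-5][0-9] (AM|PM))\] %{GREEDYDATA:app_message}" }
}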
While setting up metric reporting for Apache Kafka to Elasticsearch with jmxtrans, we have written a configuration file that queries about 50 metrics.
The queries are as follows:
{
  "obj" : "kafka.server:type=BrokerTopicMetrics,name=TotalFetchRequestsPerSec",
  "outputWriters" : [ {
    "@class" : "com.googlecode.jmxtrans.model.output.elastic.ElasticWriter",
    "connectionUrl" : "http://elasticHost:9200"
  } ]
}
Since there are so many of them all writing to the same destination, is there a way in the config file to shorten this?
Any help is highly appreciated.
You can try to be more precise in your MBean path:
kafka.server:name=TotalFetchRequestsPerSec,topic=MyCoolTopic,type=BrokerTopicMetrics
Take a look at this one as a great example: jmxtrans supports resultAlias as well.
Here you can find a list of Kafka MBeans which could come in handy for you.
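On the shortening question itself, a hedged sketch: recent jmxtrans releases accept outputWriters at the server level, applied to every query under that server, so the writer only has to be declared once (verify support in your jmxtrans version). The host, port, second metric, and resultAlias values below are placeholders:

{
  "servers" : [ {
    "host" : "kafkaHost",
    "port" : "9999",
    "outputWriters" : [ {
      "@class" : "com.googlecode.jmxtrans.model.output.elastic.ElasticWriter",
      "connectionUrl" : "http://elasticHost:9200"
    } ],
    "queries" : [ {
      "obj" : "kafka.server:type=BrokerTopicMetrics,name=TotalFetchRequestsPerSec",
      "resultAlias" : "totalFetchRequestsPerSec"
    }, {
      "obj" : "kafka.server:type=BrokerTopicMetrics,name=TotalProduceRequestsPerSec",
      "resultAlias" : "totalProduceRequestsPerSec"
    } ]
  } ]
}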
We're ingesting data into Elasticsearch through Filebeat and have hit a configuration problem.
I'm trying to specify a date format for a particular field (the standard @timestamp field holds the indexing time, and we need the actual event time). So far I have been unable to do so. I tried fields.yml, a separate JSON template file, and specifying it inline in filebeat.yml. That last option is just a guess; I haven't found any example of this particular configuration combo.
What am I missing here? I was sure this should work:
filebeat.yml
#rest of the file
template:
  # Template name. By default the template name is filebeat.
  #name: "filebeat"
  # Path to template file
  path: "custom-template.json"
and in custom-template.json
{
  "mappings": {
    "doc": {
      "properties": {
        "eventTime": {
          "type": "date",
          "format": "YYYY-MM-dd HH:mm:ss.SSSS"
        }
      }
    }
  }
}
but it didn't.
We're using Filebeat 6.2.4 and Elasticsearch 6.x.
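A hedged side note, not confirmed in the thread: in Filebeat 6.x the template options live under setup.template.*, so a top-level template: block like the one above may simply be ignored. A sketch using the 6.x JSON-template keys, worth verifying against the 6.2 reference:

setup.template.json.enabled: true
setup.template.json.path: "custom-template.json"
setup.template.json.name: "custom-template"
setup.template.overwrite: true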
I couldn't get the Filebeat configuration to work, so in the end I changed the time field format in our service, and it worked instantly.
I found the official Filebeat documentation to be lacking complete examples. Maybe that's just my problem.
EDIT: actually, it turns out you can specify a list of allowed formats in your mapping.
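To illustrate the EDIT: Elasticsearch date mappings accept several formats separated by ||, and the year should be lowercase yyyy (year of era) rather than YYYY (week-based year), a classic source of subtly wrong dates. A sketch based on the template above; the two extra formats are illustrative choices, not from the thread:

{
  "mappings": {
    "doc": {
      "properties": {
        "eventTime": {
          "type": "date",
          "format": "yyyy-MM-dd HH:mm:ss.SSSS||strict_date_optional_time||epoch_millis"
        }
      }
    }
  }
}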
I currently have Elasticsearch version 6.2.2 and Apache NiFi version 1.5.0 running on the same machine. I'm trying to follow the NiFi example located at https://community.hortonworks.com/articles/52856/stream-data-into-hive-like-a-king-using-nifi.html, except instead of storing to Hive, I want to store to Elasticsearch.
Initially I tried using the PutElasticsearch5 processor but I was getting the following error on Elasticsearch:
Received message from unsupported version: [5.0.0] minimal compatible version is: [5.6.0]
When I tried Googling this error message, it seemed like the consensus was to use the PutElasticsearchHttp processor instead. My NiFi flow looks like this:
And the configuration for the PutElasticsearchHttp processor:
When the flowfile gets to the PutElasticsearchHttp processor, the following error shows up:
PutElasticSearchHttp failed to insert StandardFlowFileRecord into Elasticsearch due to , transferring to failure.
It seems like the reason is blank/null. There also wasn't anything in the Elasticsearch log.
After the ConvertAvroToJSON, the data is a JSON array with all of the entries on a single line. Here's a sample value:
{"City": "Athens",
"Edition": 1896,
"Sport": "Aquatics",
"sub_sport": "Swimming",
"Athlete": "HAJOS, Alfred",
"country": "HUN",
"Gender": "Men",
"Event": "100m freestyle",
"Event_gender": "M",
"Medal": "Gold"}
Any ideas on how to debug/solve this problem? Do I need to create anything in Elasticsearch first? Is my configuration correct?
I was able to figure it out. After the ConvertAvroToJSON, the flow file was a single line that contained a JSON array of JSON objects. Since I wanted to store the individual objects as separate documents, I needed a SplitJson processor. Now my NiFi flow looks like this:
The configuration of the SplitJson looks like this:
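The screenshot is not reproduced here; as a hedged sketch, the property that matters on SplitJson is the JsonPath expression pointing at the top-level array (commonly $ or $.* for a root-level array; verify against your NiFi version):

SplitJson
  JsonPath Expression : $    # the flow file root is the JSON array to split into records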
The index name cannot contain the / character. Try with a valid index name: e.g. sports.
I had a similar flow, where changing the type to _doc did the trick after including SplitJson.
Kibana 4 gives the error "Discover: An error occurred with your request. Reset your inputs and try again" about 80% of the time when I try to sort by a numeric field. It works fine when sorted by any other field. Has anyone run into this issue?
I had this when I added a sequence number in Logstash to the index (because several logs could be added in the same millisecond, causing the sort not to show the ordering correctly).
If you open up the Firefox debugger and view the console, it will show you more information related to the error. In my case:
java.lang.Long cannot be cast to org.apache.lucene.util.BytesRef
I added
{ "unmapped_type": "number" }
into the advanced settings under sort:options. It returns sorted data correctly but appears to throw a yellow warning.
Yes. I had that issue, and opening the browser's JavaScript console helped me see that a non-JSON document was the problem. Apparently you can store non-JSON in Elasticsearch (at least I can with 1.6.2), and that creates problems with Kibana.
So: open the browser's console and look for "error parsing body" or something similar. You should also get the faulty string; use that to identify the culprit document.