I have connected Grafana to an Elasticsearch data source. The goal is to create metric dashboards out of Elasticsearch logs. I have two Lucene queries in Grafana. The first query retrieves all logs in the specified log path:
log.file.path:\/data\/gitlab\/logs\/gitlab-rails\/release_job_logs.json AND fields.environment:production
The second query retrieves logs in the specified path that have a ‘success’ value in the json.job_status field:
log.file.path:\/data\/gitlab\/logs\/gitlab-rails\/release_job_logs.json AND fields.environment:production AND json.job_status:success
I would like to create a dashboard that shows the percentage of logs with a ‘success’ value in the json.job_status field.
So essentially, if the first query gives me a count of 100 and the second query gives a count of 90, then the dashboard should display 90%, meaning that 90% of all logs have json.job_status:success.
The image below shows what I have now from the two queries above. How do I get a percentage dashboard?
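For reference, the same percentage can be computed in a single Elasticsearch request with a filters aggregation plus a bucket_script pipeline aggregation. The sketch below is meant for Kibana Dev Tools; the index pattern gitlab-logs-* is a placeholder, and json.job_status may need to be json.job_status.keyword depending on your mapping. In Grafana itself, the usual approach is to divide the result of the second query by the first using a transformation or expression.

GET gitlab-logs-*/_search
{
  "size": 0,
  "query": {
    "query_string": {
      "query": "log.file.path:\"/data/gitlab/logs/gitlab-rails/release_job_logs.json\" AND fields.environment:production"
    }
  },
  "aggs": {
    "all_logs": {
      "filters": { "filters": { "all": { "match_all": {} } } },
      "aggs": {
        "success": { "filter": { "term": { "json.job_status": "success" } } },
        "success_percentage": {
          "bucket_script": {
            "buckets_path": { "success": "success>_count", "total": "_count" },
            "script": "params.total > 0 ? 100.0 * params.success / params.total : 0"
          }
        }
      }
    }
  }
}

The success_percentage value is returned inside the single "all" bucket; with the counts from the example above it would be 90.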
Grafana version: 8.2.0, server started with Docker.
I want to build a chart with x-axis: point value, bucketed in intervals of 500, and y-axis: unique count of device id.
In Kibana I can implement it like this.
Using the same method in Grafana, I got this; the aggregation result is an error.
Raw data format:
So how can I implement the Kibana visualization in Grafana?
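In Elasticsearch terms, that Kibana chart is a histogram aggregation with an interval of 500 plus a cardinality sub-aggregation; in Grafana's Elasticsearch query editor this corresponds to a Histogram group-by with a Unique Count metric. Below is a sketch of the raw query, where myindex, point_value and device_id.keyword are placeholders for your actual index and field names:

GET myindex/_search
{
  "size": 0,
  "aggs": {
    "point_value_buckets": {
      "histogram": { "field": "point_value", "interval": 500 },
      "aggs": {
        "unique_devices": { "cardinality": { "field": "device_id.keyword" } }
      }
    }
  }
}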
I have a field loaded into Elasticsearch that contains information like:
message: Requesting 30 containers
message: Requesting 40 containers
.
.
.
message: Requesting 50 containers
I want to get the total number of containers used in the job (30+40+50=120 in this case).
Is it more efficient to extract these values into a field in Logstash and then use aggregation queries in Elasticsearch, or, given the message format above, can everything be done in Elasticsearch alone?
Also, if I write an aggregation query in Kibana's Dev Tools, is it possible to store the result so it can be used for visualization?
The better solution is to extract the number in Logstash and then use it in aggregations.
No, you cannot use a string field in a sum aggregation; not everything is possible directly in Elasticsearch.
You do not need to write aggregation queries in Dev Tools if you are using Kibana; in Kibana you can build aggregations without writing queries.
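As a sketch, assuming Logstash has already parsed the number out of the message into a numeric field (for example with a grok pattern such as Requesting %{NUMBER:containers_requested:int} containers), the total is then a plain sum aggregation; myindex and containers_requested are placeholder names:

GET myindex/_search
{
  "size": 0,
  "aggs": {
    "total_containers": { "sum": { "field": "containers_requested" } }
  }
}

The same sum can be built in a Kibana visualization as a Sum metric on that field, without writing the query by hand.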
I tried to enable slow logs on the Elasticsearch server using the link below:
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-slowlog.html
I verified my index settings using the URL:
http://localhost:9200/_all/_settings
The result is below:
{"myindex":{"settings":{"index":{"search":{"slowlog":{"threshold":{"fetch":{"warn":"1ms","trace":"1ms","debug":"1ms","info":"1ms"},"query":{"warn":"1ms","trace":"1ms","debug":"1ms","info":"1ms"}}}},"number_of_shards":"3","provided_name":"occindex","creation_date":"1508319257925","number_of_replicas":"2","uuid":"dVAWgk62Sgivzr2B_OuCzA","version":{"created":"5040399"}}}}}
As per the documentation, I expect the logs to be populated when the threshold is breached.
I have set 1ms as the threshold in order to log all queries that hit Elasticsearch.
However, under the logs folder, the log files elasticsearch_index_search_slowlog.log and elasticsearch.log do not show the queries that are hitting Elasticsearch.
Let me know if my configuration is correct.
The log worked after I inserted one record.
If you fire a query when there are no records in the index, the log is not updated.
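For reference, the thresholds shown in the settings output above can be applied dynamically with an update-settings request like the one below (using the index name from that output). The slow log is written per shard, and an entry only appears once a query actually exceeds the threshold on a shard, which is consistent with nothing being logged until a record was inserted:

PUT /myindex/_settings
{
  "index.search.slowlog.threshold.query.warn": "1ms",
  "index.search.slowlog.threshold.query.info": "1ms",
  "index.search.slowlog.threshold.query.debug": "1ms",
  "index.search.slowlog.threshold.query.trace": "1ms",
  "index.search.slowlog.threshold.fetch.warn": "1ms",
  "index.search.slowlog.threshold.fetch.info": "1ms",
  "index.search.slowlog.threshold.fetch.debug": "1ms",
  "index.search.slowlog.threshold.fetch.trace": "1ms"
}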
I have data log entries stored in Elasticsearch, each with its own timestamp. I now have a dashboard that can aggregate by day / week using a Date Histogram aggregation.
Now I want to get the data in chunks (data logs are written several times per transaction, spanning up to several minutes) by analyzing the "cluster" of logs according to their timestamps, to identify whether they belong to the same "transaction". Would it be possible for Elasticsearch to automatically detect the meaningful buckets and aggregate the data accordingly?
Another approach I'm trying is to group the data by transaction ID; however, there is a warning that to do this I need to enable fielddata, which will use a significant amount of memory. Any suggestions?
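On the second approach: the fielddata warning appears when you aggregate on a text field, and it can usually be avoided by aggregating on the field's keyword sub-field instead of enabling fielddata. A sketch, assuming the transaction ID is mapped with a transaction_id.keyword sub-field and the documents carry a @timestamp field; the min and max show the time span covered by each transaction's logs:

GET myindex/_search
{
  "size": 0,
  "aggs": {
    "by_transaction": {
      "terms": { "field": "transaction_id.keyword", "size": 100 },
      "aggs": {
        "first_log": { "min": { "field": "@timestamp" } },
        "last_log": { "max": { "field": "@timestamp" } }
      }
    }
  }
}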
I am new to Logstash. I have set up Logstash to populate Elasticsearch and have Kibana read from it. The problem I am facing is that after
number of records = results per page x page limit
the UI stops getting new results. Is there a way to set up Kibana so that it discards the old results instead of the latest once the limit is reached?
To have Kibana read the latest results, reload the query.
To have more pages available (or more results per page), edit the panel.
Make sure the table is reverse sorted by @timestamp.