How to do a query in Grafana with Elasticsearch as the data source?

I am a beginner with Grafana and Elasticsearch. I added an Elasticsearch data source in Grafana. Now I would like to run a query to show my data.
An example of one document in Elasticsearch:
{
  "_index": "shinken-2016.04.08",
  "_type": "shinken-logs",
  "_id": "AVP0GFeTmLuZ9eaw1Bjp",
  "_score": 1.0,
  "_source": {
    "comment": "",
    "plugin_output": "",
    "attempt": 0,
    "message": "[1460089115] SERVICE NOTIFICATION: shinken;hostname_test.com;MySQL - TCP;CRITICAL;notify-service-by-email;connect to address 10.11.12.13 and port 1234: No route to host",
    "logclass": 3,
    "options": "",
    "state_type": "CRITICAL",
    "state": 2,
    "host_name": "hostname_test.com",
    "@timestamp": "2016-04-08T04:18:35Z",
    "time": 1460089115,
    "service_description": "MySQL - TCP",
    "logobject": 2,
    "type": "SERVICE NOTIFICATION",
    "contact_name": "shinken",
    "command_name": "notify-service-by-email"
  }
}
My goal is to show in Grafana the state of one service (here "MySQL - TCP") for each day (here 2016-04-08).
My question is: how do I write such a query in Grafana with Elasticsearch as the data source?
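In Grafana's Elasticsearch query editor you normally type a Lucene query and pick aggregations rather than writing raw JSON: here the query would be something like service_description:"MySQL - TCP", with a Date Histogram group-by on @timestamp at a 1d interval. Under the hood that corresponds roughly to the sketch below (the host, index pattern, and the choice of an average over the state field are my assumptions, not something Grafana dictates):

# average of "state" per day for one service
curl -XGET 'http://localhost:9200/shinken-*/_search?pretty' -H 'Content-Type: application/json' -d '
{
  "size": 0,
  "query": {
    "match_phrase": { "service_description": "MySQL - TCP" }
  },
  "aggs": {
    "per_day": {
      "date_histogram": { "field": "@timestamp", "interval": "1d" },
      "aggs": {
        "state": { "avg": { "field": "state" } }
      }
    }
  }
}'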

Related

Visualizing lifecycle of events in Kibana

My events in Elasticsearch look something like this (simplified version):
{
  "_index": "greatest_index-2023.01",
  "_type": "_doc",
  "_id": "5BQ8yIUBtpR1CBn8kFyo",
  "_version": 1,
  "_score": 0,
  "_source": {
    "@version": "1",
    "@timestamp": "2023-01-18T09:35:50.251Z",
    "id": "4e80c00dd8e003c8",
    "action": "action1"
  },
  "fields": {
    "@timestamp": [
      "2023-01-18T09:35:50.251Z"
    ]
  }
}
Basically, the "id" field is common to multiple events. Each id goes through a few "action" values over time (action1, action2, action3), and only once for each action value.
I'm trying to create a visualization in Kibana that would display the actions each id went through.
If it were a table, it could look something like this:

id                   | actions
---------------------|--------------------------
5BQ8yIUBtpR1CBn8kFyo | action1, action2
pISQ9VDSJVlkqklv9VQ9 | action1
cohqBHSQC85AHB67AB2h | action1, action2, action3
I tried to use Transforms in the Elasticsearch section of Kibana (v 7.5.0), but it doesn't seem to be the right way.
How would you recommend doing that?
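One way to get that table without Transforms is a terms aggregation on id with a nested terms aggregation on action; in Kibana this is a Data Table visualization with two "split rows" bucket aggregations. A raw-query sketch of the same idea, assuming id and action are keyword-mapped (or have .keyword subfields, which the question's mapping doesn't show):

# one bucket per id, listing the distinct actions seen for that id
curl -XGET 'http://localhost:9200/greatest_index-*/_search?pretty' -H 'Content-Type: application/json' -d '
{
  "size": 0,
  "aggs": {
    "by_id": {
      "terms": { "field": "id.keyword", "size": 1000 },
      "aggs": {
        "actions": {
          "terms": { "field": "action.keyword", "order": { "_key": "asc" } }
        }
      }
    }
  }
}'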

Auto Increment a field value every time a doc is updated in elasticsearch

This is the payload:
{
  "videourl": "*****",
  "name": "ABCqq",
  "description": "AAAnb",
  "tags": "#AAAzx",
  "uploadedtime": "2020-02-24T05:48:37.527Z",
  "uploadedby": "Dr AAAgh",
  "thumbnail": "http://",
  "duration": "5:32",
  "postedby": "AAAdf",
  "doctorimage": "AAA12",
  "doctorname": "nnn"
}
The result has this form:
{"_index": "rwe",
"_type": "_doc",
"_id": "8wEed3ABcYN_H8khP4hB",
"_score": 1,
"_source": {
"videourl": "*****",
"name": "ABCqq",
"description": "AAAnb",
"tags": "#AAAzx",
"uploadedtime": "2020-02-24T05:48:37.527Z",
"uploadedby": "Dr AAAgh",
"thumbnail": "http://",
"duration": "5:32",
"postedby": "AAAdf",
"doctorimage": "AAA12",
"doctorname": "nnn"
}
}
This is a document where I want to increment a counter field every time the doc gets updated. I have to add a new field named counter_value.
Expected result:
{"_index": "rwe",
"_type": "_doc",
"_id": "8wEed3ABcYN_H8khP4hB",
"_score": 1,
"_source": {
"videourl": "*****",
"name": "ABCqq",
"description": "AAAnb",
"tags": "#AAAzx",
"uploadedtime": "2020-02-24T05:48:37.527Z",
"uploadedby": "Dr AAAgh",
"thumbnail": "http://",
"duration": "5:32",
"postedby": "AAAdf",
"doctorimage": "AAA12",
"doctorname": "nnn",
"counter_value": 1
}
}
You can just increment the counter via scripting, see here and here. However, Elasticsearch already maintains a version field for every document. Depending on your use case, it might be enough to add the version parameter to your query, as described here:
curl -XGET 'http://localhost:9200/rwe/_search?version=true'
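For the explicit counter_value field, the scripted route mentioned above would look roughly like this; a minimal sketch assuming a 7.x-style Update API (older releases use POST /rwe/_doc/<id>/_update) and using the document id from the question:

# creates counter_value on the first update, increments it afterwards
curl -XPOST 'http://localhost:9200/rwe/_update/8wEed3ABcYN_H8khP4hB' -H 'Content-Type: application/json' -d '
{
  "script": {
    "lang": "painless",
    "source": "if (ctx._source.counter_value == null) { ctx._source.counter_value = 1 } else { ctx._source.counter_value += 1 }"
  }
}'

Keep in mind that a plain re-index of the whole document would overwrite counter_value unless the client sends it back.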

Nginx module for filebeats doesn't parse access logs

I am using the nginx module for Filebeat to send log data to Elasticsearch. Here is my Filebeat configuration:
output:
  logstash:
    enabled: true
    hosts:
      - logstash:5044
    timeout: 15

filebeat.modules:
  - module: nginx
    access:
      enabled: true
      var.paths: ["/var/log/nginx/access.log"]
    error:
      enabled: true
      var.paths: ["/var/log/nginx/error.log"]
The problem is that logs are not parsed. This is what I see in Kibana:
{ "_index": "filebeat-2017.07.18", "_type": "log", "_id": "AV1VLXEbhj7uWd8Fgz6M", "_version": 1, "_score": null, "_source": {
"#timestamp": "2017-07-18T10:10:24.791Z",
"offset": 65136,
"#version": "1",
"beat": {
"hostname": "06d09033fb23",
"name": "06d09033fb23",
"version": "5.5.0"
},
"input_type": "log",
"host": "06d09033fb23",
"source": "/var/log/nginx/access.log",
"message": "10.15.129.226 - - [18/Jul/2017:12:10:21 +0200] \"POST /orders-service/orders/v1/sessions/update/FUEL_DISPENSER?api_key=vgxt5u24uqyyyd9gmxzpu9n7 HTTP/1.1\" 200 5 \"-\" \"Mashery Proxy\"",
"type": "log",
"tags": [
"beats_input_codec_plain_applied"
] }, "fields": {
"#timestamp": [
1500372624791
] }, "sort": [
1500372624791 ] }
I am missing parsed fields, as specified in the documentation: https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-nginx.html
Why are log lines not parsed?
When you run filebeat -v -modules=nginx -setup, it essentially creates four things:
- the mapping template
- the Kibana dashboards
- the machine learning jobs
- the parsing filters in the ingest node
Here are the filters for parsing:
- nginx access log
- nginx error log
The filters are stored in the ingest node. You can access them at:
http://YourElasticHost:9200/_ingest/pipeline
So if you want your logs parsed, you need to send them via the ingest node.
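To check that the nginx filters actually made it into the ingest node, you can list the pipelines; the grep pattern is only my assumption about the naming scheme (Filebeat 5.x registers ids that look like filebeat-5.5.0-nginx-access-default):

# list ingest pipelines and pick out the nginx ones
curl -s 'http://YourElasticHost:9200/_ingest/pipeline' | grep -o '"filebeat-[^"]*nginx[^"]*"'

Since your events currently take the Logstash hop, either point Filebeat's output.elasticsearch straight at the cluster, or keep Logstash and set the pipeline option on its elasticsearch output so events still pass through the ingest pipeline.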

logstash json filter source

I cannot get the message field to be decoded from my JSON log line when it is received via Filebeat.
Here is the line in my logs:
{"levelname": "WARNING", "asctime": "2016-07-01 18:06:37", "message": "One or more gateways are offline", "name": "ep.management.commands.import", "funcName": "check_gateway_online", "lineno": 103, "process": 44551, "processName": "MainProcess", "thread": 140735198597120, "threadName": "MainThread", "server": "default"}
Here is the Logstash config. I tried with and without the codec. The only difference is that the message is escaped when I use the codec.
input {
  beats {
    port => 5044
    codec => "json"
  }
}
filter {
  json {
    source => "message"
  }
}
Here is the JSON as it arrives in Elasticsearch:
{
  "_index": "filebeat-2016.07.01",
  "_type": "json",
  "_id": "AVWnpK519vJkh3Ry-Q9B",
  "_score": null,
  "_source": {
    "@timestamp": "2016-07-01T18:07:13.522Z",
    "beat": {
      "hostname": "59b378d40b2e",
      "name": "59b378d40b2e"
    },
    "count": 1,
    "fields": null,
    "input_type": "log",
    "message": "{\"levelname\": \"WARNING\", \"asctime\": \"2016-07-01 18:07:12\", \"message\": \"One or more gateways are offline on server default\", \"name\": \"ep.controllers.secure_client\", \"funcName\": \"check_gateways_online\", \"lineno\": 80, \"process\": 44675, \"processName\": \"MainProcess\", \"thread\": 140735198597120, \"threadName\": \"MainThread\"}",
    "offset": 251189,
    "source": "/mnt/ep_logs/ep_.json",
    "type": "json"
  },
  "fields": {
    "@timestamp": [
      1467396433522
    ]
  },
  "sort": [
    1467396433522
  ]
}
What I would like is for the contents of the message field to be decoded into top-level fields.
Many thanks
When that happens, it's usually because your Filebeat instance is configured to send documents directly to ES.
In your Filebeat configuration file, make sure to comment out the elasticsearch output so that events go through Logstash and its json filter instead.
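Once events flow through Logstash again, you can verify the decoding by searching on one of the inner fields; the index pattern and field name here are taken from the documents above:

# a hit with a top-level levelname field means the message was decoded
curl -XGET 'http://localhost:9200/filebeat-*/_search?q=levelname:WARNING&size=1&pretty'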

How to query logstash indexes with .net elasticsearch client?

I'm trying to use NEST to search through Elasticsearch indexes that were created with Logstash (basically logstash-*).
I have set up NEST with the following code:
Node = new Uri("http://localhost:9200");
Settings = new ConnectionSettings(Node);
Settings.DefaultIndex("logstash-*");
Client = new ElasticClient(Settings);
This is how I try to get results:
var result = Client.Search<Logstash>(s => s
.Query(p => p.Term("Message", "*")));
and I get 0 hits:
http://screencast.com/t/d2FB9I4imE
Here is an example of an entry I would like to find:
{
  "_index": "logstash-2016.06.20",
  "_type": "logs",
  "_id": "AVVtswJxpdkh1tFPP9S5",
  "_score": null,
  "_source": {
    "timestamp": "2016-06-20 14:04:55.6650",
    "logger": "xyz",
    "level": "debug",
    "message": "Processed command service method SearchService.SearchBy in 65 ms",
    "exception": "",
    "url": "",
    "ip": "",
    "username": "",
    "user_id": "",
    "role": "",
    "authentication_provider": "",
    "application_id": "",
    "application_name": "",
    "application": "ZBD",
    "@version": "1",
    "@timestamp": "2016-06-20T12:04:55.666Z",
    "host": "0:0:0:0:0:0:0:1"
  },
  "fields": {
    "@timestamp": [
      1466424295666
    ]
  },
  "sort": [
    1466424295666
  ]
}
I'm using Elasticsearch 5.0.0-alpha3, and the NEST client is alpha2 at the moment.
This is because of
...
"_type": "logs",
...
When you run a query like yours, it hits the logstash type, not the logs type, because NEST infers the type name from the generic parameter. You have two options to solve this problem.
The first option is to tell NEST to map the Logstash type to the logs type whenever it makes a request to Elasticsearch, by setting this mapping in the client's settings:
var settings = new ConnectionSettings()
.MapDefaultTypeNames(m => m.Add(typeof(Logstash), "logs"));
var client = new ElasticClient(settings);
The second option is to override the default behaviour by setting the type explicitly in the request parameters:
var result = Client.Search<Logstash>(s => s
.Type("logs")
.Query(p => p.Term("message", "*")));
Also notice you should use message, not Message, in the term descriptor, since there is no Message field in the index. And as far as I know, wildcards are not supported in term queries, so you may want to use a query_string query instead.
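For reference, the query_string variant corresponds to this raw request; a sketch only, since a bare * effectively matches everything and you will likely want a more selective query:

# query_string search over the message field
curl -XGET 'http://localhost:9200/logstash-*/logs/_search?pretty' -H 'Content-Type: application/json' -d '
{
  "query": {
    "query_string": {
      "default_field": "message",
      "query": "*"
    }
  }
}'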
Hope it helps.
