ELK Heartbeat dashboard add ID - elasticsearch

How can I add an ID field to the Kibana / Uptime dashboard?

I have version 7.3 installed. The Uptime dashboard shows "NAME" as one of the column headers; you can specify what is displayed there in the monitor configuration:
- type: http
  name: 'QA.Service - THIS HERE'
  enabled: true
  schedule: '@every 5m'
  urls: ["http://checkstatus/blah/blah"]
  check.response:
    status: 200
    json:
      - description: Json Response
        condition:
          equals:
            Status: Ok
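To get an explicit ID as well, recent 7.x Heartbeat monitors also accept an `id` setting alongside `name` (a sketch; `qa-service-check` is a made-up identifier, and the option's availability should be checked against the docs for your exact version):

```yaml
- type: http
  id: qa-service-check          # stable monitor identifier surfaced by Uptime
  name: 'QA.Service - THIS HERE'
  enabled: true
  schedule: '@every 5m'
  urls: ["http://checkstatus/blah/blah"]
```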


Elastic HeartBeat HTTP

I am working on a monitoring solution and facing some issues. I need to make a request to a URL and parse the response. I have tried the following, but cannot find the message in Grafana:
- type: http
  urls: ["https://securpharm-status.de/status"]
  schedule: '@every 600s'
  proxy_url: http://user@1.1.1.1
  tags: ["name"]
  name: "status"
  check.request:
    method: GET
  check.response:
    json:
      - description: check status
        condition:
          equals:
            status: ok
I have also tried many other options, but still only the response code is returned.
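Note that `check.response.json` only validates the response; it does not index the body. To ship the message itself with the event, the HTTP monitor has a separate include-body option (a sketch; verify `response.include_body` exists in your Heartbeat version before relying on it):

```yaml
- type: http
  urls: ["https://securpharm-status.de/status"]
  schedule: '@every 600s'
  # ship the response body with each check event instead of only validating it
  response.include_body: always
  response.include_body_max_bytes: 2048
```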

How is the actual Request URL generated from a custom connector api definition

Since there isn't currently a native Databricks Power Apps connector, I set up a custom connector that calls the Jobs run-now API using an AD-backed service principal bearer token. I then shared the custom connector and the connection with users so they can run it. While this works fine for everyone on my team, the actual end users are getting a 401 error response. Since everyone on my team is an admin in Azure and Databricks, we likely have permissions that the end users don't, but we're having a hard time pinpointing where.
During Power Apps monitoring sessions, we noticed that the request URL differs between users getting successful 200 responses and those getting 401. The odd thing is, neither matches the host specified in the custom connector. Can anyone tell me how this URL is generated?
Here is the custom connector code we're using:
swagger: '2.0'
info: {title: Databricks Jobs, description: Call Databricks Jobs API,
  version: '1.0'}
host: <databricksInstance>.azuredatabricks.net
basePath: /api/2.1/jobs/
schemes: [https]
consumes: []
produces: []
paths:
  /run-now:
    post:
      responses:
        default:
          description: default
          schema:
            type: object
            properties:
              run_id: {type: integer, format: int32, description: run_id}
              number_in_job: {type: integer, format: int32, description: number_in_job}
      summary: Start Databricks Notebook as a Job
      description: Start Databricks Notebook as a Job
      operationId: run-now
      x-ms-visibility: important
      parameters:
        - {name: Content-Type, in: header, required: true, type: string, default: application/json,
          x-ms-visibility: internal}
        - {name: Accept, in: header, required: true, type: string, default: application/json,
          x-ms-visibility: internal}
        - name: body
          in: body
          required: true
          schema:
            type: object
            properties:
              job_id: {type: integer, format: int64, description: job_id, title: ''}
            required: [job_id]
definitions: {}
parameters: {}
responses: {}
securityDefinitions:
  API Key: {type: apiKey, in: header, name: Authorization}
security:
  - API Key: []
tags: []
Here is the actual request as seen in the Power Apps monitor that returns a successful 200 code (the URL is the same for everyone getting a success):
And here is the actual request sent that returns the 401 error:
As you can see, the request URL is the same for both except for that last highlighted string. Can anyone tell me how that is generated from the host specified in the API definition? Any idea what it may signify?
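One hedged observation, based on how Power Platform generally proxies custom connectors rather than on anything specific to this connector:

```yaml
# Custom connector calls do not hit the swagger `host` directly; they are
# routed through an API Management gateway, roughly (hosts vary by region/tenant):
#
#   https://<region>.azure-apim.net/apim/<connectorName>/<connectionId>/run-now
#
# If that reading is right, the trailing string that differs between the
# 200 and 401 requests would be the connection ID, i.e. the failing users
# may be using a different connection (perhaps one they created themselves)
# rather than the shared one.
```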

Google Cloud Ops Agent does not save logs when time_key is set

Applies to configuration: Logging processors
This setup works:
/etc/google-cloud-ops-agent/config.yaml
logging:
  receivers:
    app:
      type: files
      include_paths: [/www/logs/app-*.log]
  processors:
    monolog:
      type: parse_regex
      field: message
      regex: "^\[(?<time>[^\]]+)\]\s+(?<environment>\w+)\.(?<severity>\w+):\s+(?<msg>.*?)(?<context>{.*})?\s*$"
  service:
    pipelines:
      default_pipeline:
        receivers: [app]
        processors: [monolog]
I am trying to configure time_key, but with it set the logs do not show up in the log viewer. I checked via the agent's API whether the logs are being processed: they are read and sent, but they never appear in the log viewer.
logging:
  receivers:
    app:
      type: files
      include_paths: [/www/logs/app-*.log]
  processors:
    monolog:
      type: parse_regex
      field: message
      regex: "^\[(?<time>[^\]]+)\]\s+(?<environment>\w+)\.(?<severity>\w+):\s+(?<msg>.*?)(?<context>{.*})?\s*$"
      time_key: time
      time_format: "%Y-%m-%d %H:%M:%S"
  service:
    pipelines:
      default_pipeline:
        receivers: [app]
        processors: [monolog]
Log structure:
[2021-10-06 12:12:08] production.EMERGENCY: Testing {"abc":"xyz"}
Parsed (first code example):
{
  jsonPayload: {
    context: "{"abc":"xyz"}"
    environment: "production"
    msg: "Testing "
    severity: "EMERGENCY"
    time: "2021-10-06 12:12:08"
  }
}
API call to check logs processed:
curl -s localhost:2020/api/v1/metrics | jq
Based on strptime(3), I also tried "%F %H:%M:%S".
What am I doing wrong?
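One thing worth ruling out (an assumption, not a confirmed diagnosis): the parsed timestamp carries no timezone, so it is presumably interpreted as UTC, and Cloud Logging rejects entries whose timestamps land too far in the future. If the server logs local time ahead of UTC, that could silently drop everything. A sketch that makes the zone explicit, assuming the application can be changed to emit it:

```yaml
logging:
  processors:
    monolog:
      type: parse_regex
      field: message
      # assumes log lines now look like: [2021-10-06 12:12:08 +0200] production.EMERGENCY: ...
      regex: "^\[(?<time>[^\]]+)\]\s+(?<environment>\w+)\.(?<severity>\w+):\s+(?<msg>.*?)(?<context>{.*})?\s*$"
      time_key: time
      time_format: "%Y-%m-%d %H:%M:%S %z"
```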

FileBeat: Only host field shown as JSON, not as string

I am working with Filebeat, pushing data from our application and system logs to an ES domain on AWS. It's working fine, except that the host field is shown as a JSON object instead of plain text. I checked the fields.yml file, but found no reference to host being output as JSON.
filebeat.yml :
filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/*.log
  fields:
    type: develop.gateway.nginx.log
    environment: develop.gateway
    service: nginx
    document_type: filebeat.develop.gateway
  registry: /var/lib/filebeat/registry
- type: log
  paths:
    - /var/www/html/api-gateway/deploy/var/log/*.log
  fields:
    type: develop.gateway.application.log
    environment: develop.gateway.application
    service: gateway
    document_type: filebeat.gateway.develop
  registry: /var/lib/filebeat/registry
- type: log
  paths:
    - /var/log/php*.log
  fields:
    type: develop.1c.php-fpm.log
    environment: develop.1c
    service: php-fpm
    document_type: filebeat.php-fpm.1c
  registry: /var/lib/filebeat/registry
output.elasticsearch:
  hosts: ["OUR_DOMAIN"]
  protocol: "https"
You can pick out the hostname from the payload as I do here:
output.kafka:
  ...
  codec.format:
    string: '{"version":"1.1","timestamp":"%{[@timestamp]}","short_message":"%{[message]}","file":"%{[log.file.path]}","host":"%{[host.hostname]}"}'
  ...
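If you are writing to Elasticsearch rather than Kafka, another option is to flatten the field before output using the stock Beats processors (a sketch; `hostname` is an arbitrary target field name, and `copy_fields` is only available in recent 7.x releases):

```yaml
processors:
  - copy_fields:
      fields:
        - from: host.hostname
          to: hostname          # plain string copy of the hostname
      fail_on_error: false
      ignore_missing: true
  - drop_fields:
      fields: ["host"]          # drop the structured host object
```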

Transfer HTTP endpoint metrics using the http module - Metricbeat

In order to ship my Spring Boot app's metrics endpoint (/metrics), I used the http module in Metricbeat. I am following the official docs on the elastic.co website to install and configure Metricbeat, but unfortunately the metrics are transferred incorrectly, although the connection is established.
I changed the fields.yml file to create a specific template for the http module only:
fields.yml
- key: http
  title: "HTTP"
  description: >
    HTTP module
  release: beta
  settings: ["ssl"]
  fields:
    - name: http
      type: group
      description: >
      fields:
        - name: request
          type: group
          description: >
            HTTP request information
          fields:
            - name: header
              type: object
              description: >
                The HTTP headers sent
            - name: method
              type: keyword
              description: >
                The HTTP method used
            - name: body
              type: keyword
              description: >
                The HTTP payload sent
        - name: response
          type: group
          description: >
            HTTP response information
          fields:
            - name: header
              type: object
              description: >
                The HTTP headers received
            - name: code
              type: keyword
              description: >
                The HTTP status code
              example: 404
            - name: phrase
              type: keyword
              example: Not found
              description: >
                The HTTP status phrase
            - name: body
              type: keyword
              description: >
                The HTTP payload received
    - name: json
      type: group
      description: >
        json metricset
      release: beta
      fields:
    - name: server
      type: group
      description: >
        server
      release: experimental
      fields:
metricbeat.yml
metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
metricbeat.modules:
#------------------------------- HTTP Module -------------------------------
- module: http
  metricsets: ["json"]
  period: 10s
  hosts: ["http://localhost:8080/metrics"]
  namespace: "test_metrics"
  method: "GET"
  enabled: true
setup.template.overwrite: true
output.elasticsearch:
  hosts: ["localhost:9200"]
My app metrics display:(http://localhost:8080/metrics)
{
"mem": 199405,
"mem.free": 74297,
"processors": 4,
"instance.uptime": 45240231,
"uptime": 45254636,
"systemload.average": -1,
"heap.committed": 154624,
"heap.init": 131072,
"heap.used": 80326,
"heap": 1842688,
"nonheap.committed": 45888,
"nonheap.init": 2496,
"nonheap.used": 44781,
"nonheap": 0,
"threads.peak": 31,
"threads.daemon": 25,
"threads.totalStarted": 35,
"threads": 27,
"classes": 6659,
"classes.loaded": 6659,
"classes.unloaded": 0,
"gc.ps_scavenge.count": 24,
"gc.ps_scavenge.time": 999,
"gc.ps_marksweep.count": 1,
"gc.ps_marksweep.time": 71,
"httpsessions.max": -1,
"httpsessions.active": 0,
"gauge.response.metrics": 20,
"gauge.response.unmapped": 6005,
"gauge.response.login": 1,
"gauge.response.star-star.favicon.ico": 1878,
"counter.status.200.star-star.favicon.ico": 1,
"counter.status.200.metrics": 30,
"counter.status.302.unmapped": 3,
"counter.status.200.login": 2
}
Previously I used httpbeat and everything was great: the field names in the Elasticsearch index were compatible. Since I moved to the http module, everything has changed. I'm using this to get the predefined dashboards in Kibana.
Any help please?
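For what it's worth, the field-name change is expected with this setup: as far as I understand the http module's json metricset, it nests the entire response under `http.<namespace>` rather than mapping keys to top-level fields the way httpbeat did. A sketch of the approximate shape of an indexed event, assuming the config above (note also that dotted keys such as "mem.free" may cause mapping conflicts on their own):

```yaml
# approximate shape of one indexed event (sketch, not actual output)
http:
  test_metrics:
    mem: 199405
    processors: 4
    threads: 27
```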
