How to export Spring Boot metrics as a single JSON - spring-boot

Spring Boot Actuator's /metrics endpoint returns a list of metric names:
{
  "names": [
    "jvm.gc.pause",
    "tomcat.global.received",
    "jvm.memory.used",
    ...
  ]
}
I want to configure the metrics aggregation via Elasticsearch Metricbeat. I use Metricbeat autodiscovery and the http module:
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      host: ${HOSTNAME}
      templates:
        - condition.equals:
            kubernetes.labels.app: employee-rewards
          config:
            - module: http
              metricsets: json
              period: 10s
              hosts: ["${data.host}:8080"]
              namespace: "app_metrics"
              path: /actuator/metrics/jvm.gc.pause
              method: "GET"
            - module: http
              metricsets: json
              period: 10s
              hosts: ["${data.host}:8080"]
              namespace: "app_metrics"
              path: /actuator/metrics/jvm.memory.used
              method: "GET"
The problem with this approach is that whenever the microservices team adds a new metric, I need to reconfigure Metricbeat to include the added metric.
Instead, is there any way to get all the metric values in a single JSON like the following?
{
  "#timestamp": "2018-08-14T20:16:02.339Z",
  "tomcat_global_received_count": 0,
  "tomcat_global_request_max_value": 0.3569999933242798,
  "tomcat_sessions_active_current_value": 0,
  "tomcat_sessions_active_max_value": 0,
  "tomcat_sessions_expired_count": 0,
  "tomcat_sessions_alive_max_value": 0,
  "jvm_memory_committed_value": 854982656,
  "jvm_gc_pause_max": 0,
  "jvm_gc_pause_count": 4,
  "jvm_memory_used_value": 429412576,
  "tomcat_global_request_count": 106,
  "host": {
    "name": "webapp003.my.network.example.com"
  },
  "jvm_threads_peak_value": 64,
  "jvm_memory_max_value": 4518313984
}
I know that Actuator supports a push mechanism to send the metrics directly to Elasticsearch, but I cannot use this because I want additional details about the Kubernetes environment as part of the metrics, which Metricbeat supports.
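One direction worth sketching (this swaps the collection technique and is not from the original question): if the application also exposes the Micrometer Prometheus registry, Metricbeat's prometheus module can scrape every registered metric in a single request, so newly added metrics are picked up without touching the Metricbeat config, and the Kubernetes metadata from autodiscovery is still attached. A minimal sketch, assuming micrometer-registry-prometheus is on the classpath and /actuator/prometheus is exposed:

metricbeat.autodiscover:
  providers:
    - type: kubernetes
      host: ${HOSTNAME}
      templates:
        - condition.equals:
            kubernetes.labels.app: employee-rewards
          config:
            # one scrape returns every registered metric, so new metrics
            # appear without further Metricbeat changes
            - module: prometheus
              metricsets: ["collector"]
              period: 10s
              hosts: ["${data.host}:8080"]
              metrics_path: /actuator/prometheus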

Related

Spring Cloud Gateway Circuit Breaker: Default and Specific

I have a Spring Boot application with Spring Cloud Gateway, using Resilience4j for the circuit breaker:
implementation 'org.springframework.cloud:spring-cloud-starter-gateway'
implementation 'org.springframework.cloud:spring-cloud-starter-circuitbreaker-reactor-resilience4j'
Currently I have the following routes:
routes:
  - id: swapi-planets
    uri: https://swapi.devs/api/planets/
    predicates:
      - Path=/api/planets/**
    filters:
      - name: CircuitBreaker
        args:
          fallbackUri: https://swapi.dev/api/species/
  - id: swapi-species
    uri: https://swapi.dev/api/species/
    predicates:
      - Path=/api/species/**
default-filters:
  - name: CircuitBreaker
    args:
      name: myCircuitBreaker
When I make a call to the swapi-planets route I receive the following response from myCircuitBreaker:
{
"timestamp": "2022-07-21T12:48:59.402+00:00",
"path": "/api/planets",
"status": 504,
"error": "Gateway Timeout",
"requestId": "8a8cdca2-1"
}
Is there a way to ignore the default circuit breaker filter when the route already has a specific circuit breaker? In this case swapi-planets has a fallbackUri.
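For reference, a minimal sketch of a route-specific CircuitBreaker filter that carries both its own name and a fallbackUri (the breaker name and fallback path here are illustrative, and fallbackUri is normally a forward: URI to a handler inside the gateway rather than an external URL):

spring:
  cloud:
    gateway:
      routes:
        - id: swapi-planets
          uri: https://swapi.dev/api/planets/
          predicates:
            - Path=/api/planets/**
          filters:
            - name: CircuitBreaker
              args:
                name: planetsCircuitBreaker        # illustrative per-route breaker
                fallbackUri: forward:/fallback/planets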

How to pass api endpoint parameter to lambda function in serverless

I created a lambda function behind an API gateway, and I am trying to implement a healthcheck for it as well, but right now I need to do this in 2 steps:
1. run serverless deploy so it spits out the endpoint for the API gateway.
2. manually insert the endpoint into the healthcheck.environment.api_endpoint param, and run serverless deploy a second time.
functions:
  app:
    handler: wsgi_handler.handler
    events:
      - http:
          method: ANY
          path: /
      - http:
          method: ANY
          path: '{proxy+}'
  healthcheck:
    environment:
      api_endpoint: '<how to reference the api endpoint?>'
    handler: healthcheck.handler
    events:
      - schedule: rate(1 minute)
custom:
  wsgi:
    app: app.app
    pythonBin: python3
    packRequirements: false
  pythonRequirements:
    dockerizePip: non-linux
Is there a way to get the reference to the API gateway at creation time, so it can be passed as an environment variable to the healthcheck app? The alternative I can think of is to create a separate serverless.yaml just for the healthcheck purpose.
I noticed I can reconstruct the endpoint in the lambda, and grab the id like so:
healthcheck:
  environment:
    api_endpoint: !Ref ApiGatewayRestApi
    region: ${env:region}
  handler: healthcheck.handler
  events:
    - schedule: rate(1 minute)
and then reconstruct:
import os

def handler(event, context):
    api_id = os.environ['api_endpoint']
    region = os.environ['region']
    endpoint = f"https://{api_id}.execute-api.{region}.amazonaws.com/dev"
With a bit of CloudFormation you can inject it directly! That way you don't need to compute it every time in your lambda handler.
api_endpoint: {
  'Fn::Join': [
    '',
    [
      { Ref: 'WebsocketsApi' },
      '.execute-api.',
      { Ref: 'AWS::Region' },
      '.',
      { Ref: 'AWS::URLSuffix' },
      "/${opt:stage, self:provider.stage, 'dev'}",
    ],
  ],
},
(this is an example for a websocket API)
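For the REST API in the question, a hypothetical variant of the same join, assuming the default ApiGatewayRestApi logical ID that the Serverless Framework generates (the same reference used with !Ref above):

api_endpoint: {
  'Fn::Join': [
    '',
    [
      { Ref: 'ApiGatewayRestApi' },
      '.execute-api.',
      { Ref: 'AWS::Region' },
      '.',
      { Ref: 'AWS::URLSuffix' },
      "/${opt:stage, self:provider.stage, 'dev'}",
    ],
  ],
},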
Is your API created through the same or another CloudFormation stack? If so, you can reference it directly (same stack) or through a CloudFormation variable export.
https://carova.io/snippets/serverless-aws-cloudformation-output-stack-variables
https://carova.io/snippets/serverless-aws-reference-other-cloudformation-stack-variables
If you created it outside of CloudFormation, i.e. in the AWS console, then you'll need to add the IDs into the template, most likely by creating different environment variables based on the stage.
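A minimal sketch of the cross-stack variant, assuming two Serverless services and that the API-owning stack is deployed as my-api-service-dev (stack name and output key are illustrative):

# serverless.yml of the service that owns the API
resources:
  Outputs:
    ApiEndpoint:
      Description: Invoke URL of the REST API
      Value:
        Fn::Join:
          - ''
          - - Ref: ApiGatewayRestApi
            - '.execute-api.'
            - Ref: AWS::Region
            - '.amazonaws.com/dev'

# serverless.yml of the healthcheck service
functions:
  healthcheck:
    handler: healthcheck.handler
    environment:
      # ${cf:stackName.outputKey} resolves the other stack's output at deploy time
      api_endpoint: ${cf:my-api-service-dev.ApiEndpoint}
    events:
      - schedule: rate(1 minute)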

How to monitor docker services using elastic stack

I have a Docker swarm running a number of services. I'm using the Elastic stack (Kibana, Elasticsearch, Filebeat, etc.) for monitoring.
For the business logic I'm writing logs and using Filebeat to move them to Logstash and analyze the data in Kibana.
But I'm having trouble monitoring the liveness of my Docker services. Some of them are deployed globally (like Filebeat) and some have a number of replicas. I want to be able to see in Kibana that the number of running containers equals the number the service should have. I'm trying to use Metricbeat with the docker module; the most useful metricset I've found is container, but it doesn't seem to contain enough information to display or analyze the number of instances of a service.
I'd appreciate any advice on how to achieve this.
The Metricbeat config:
metricbeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
metricbeat.modules:
  - module: docker
    enabled: true
    metricsets:
      - container
      - healthcheck
      - info
    period: 10s
    hosts: ["unix:///var/run/docker.sock"]
processors:
  - add_docker_metadata: ~
  - add_locale:
      format: offset
output.logstash:
  hosts: ["mylogstash.com"]
The container metricset log data (the relevant docker part):
...
"docker" : {
"container": {
"id": "60983ad304e13cb0245a589ce843100da82c5fv9e093aad68abb439cdc2f3044"
"status": "Up 3 weeks",
"command": "./entrypoint.sh",
"image": "registry.com/myimage",
"created": "2019-04-08T11:38:10.000Z",
"name": "mystack_myservice.wuiqep73p99hcbto2kgv6vhr2.mufs70y24k5388jxv782in18f",
"ip_addresses": [ "10.0.0.148" ]
"labels" : {
"com_dokcer_swarm_node_id": "wuiqep73p99hcbto2kgv6vhr2",
"com_docker_swarm_task_name": "stack_service.wuiqep73p99hcbto2kgv6vhr2.mufs70y24k5388jxv782in18f",
"com_docker_swarm_service_id": "kxm5dk43yzyzpemcbz23s21xo",
"com_docker_swarn_task_id": "mufs70y24k5388jxv782in18f",
"com_docker_swarm_task" : "",
"com_docker_stack_namespace": "mystack",
"com_docker_swarm_service_name": "mystack_myservice"
},
"size": {
"rw": 0,
"root_fs": 0
}
}
}
...
For future reference:
I wrote a bash script which runs at an interval and writes a JSON log for each of the swarm services. The script is wrapped in an image: docker service logger.
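A hedged sketch of the shipping side, assuming the script writes one JSON object per line under /var/log/swarm-services/ (the path and output destination are illustrative):

filebeat.inputs:
  - type: log
    paths:
      - /var/log/swarm-services/*.json
    # lift the JSON keys (service name, desired vs. running replicas, ...) to the top level of the event
    json.keys_under_root: true
    json.add_error_key: true

output.logstash:
  hosts: ["mylogstash.com"]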

Transfer HTTP endpoint metrics using http module - metricbeat

In order to ship the metrics endpoint (/metrics) of my Spring Boot app, I used the http module in Metricbeat. I followed the official doc on the elastic.co website to install and configure Metricbeat, but unfortunately the metrics were not transported correctly, even though the connection had been established.
I changed the fields.yml file to create a specific template for the http module only.
fields.yml
- key: http
  title: "HTTP"
  description: >
    HTTP module
  release: beta
  settings: ["ssl"]
  fields:
    - name: http
      type: group
      description: >
      fields:
        - name: request
          type: group
          description: >
            HTTP request information
          fields:
            - name: header
              type: object
              description: >
                The HTTP headers sent
            - name: method
              type: keyword
              description: >
                The HTTP method used
            - name: body
              type: keyword
              description: >
                The HTTP payload sent
        - name: response
          type: group
          description: >
            HTTP response information
          fields:
            - name: header
              type: object
              description: >
                The HTTP headers received
            - name: code
              type: keyword
              description: >
                The HTTP status code
              example: 404
            - name: phrase
              type: keyword
              example: Not found
              description: >
                The HTTP status phrase
            - name: body
              type: keyword
              description: >
                The HTTP payload received
        - name: json
          type: group
          description: >
            json metricset
          release: beta
          fields:
        - name: server
          type: group
          description: >
            server
          release: experimental
          fields:
metricbeat.yml
metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false

metricbeat.modules:
  #------------------------------- HTTP Module -------------------------------
  - module: http
    metricsets: ["json"]
    period: 10s
    hosts: ["http://localhost:8080/metrics"]
    namespace: "test_metrics"
    method: "GET"
    enabled: true

setup.template.overwrite: true

output.elasticsearch:
  hosts: ["localhost:9200"]
My app's metrics output (http://localhost:8080/metrics):
{
"mem": 199405,
"mem.free": 74297,
"processors": 4,
"instance.uptime": 45240231,
"uptime": 45254636,
"systemload.average": -1,
"heap.committed": 154624,
"heap.init": 131072,
"heap.used": 80326,
"heap": 1842688,
"nonheap.committed": 45888,
"nonheap.init": 2496,
"nonheap.used": 44781,
"nonheap": 0,
"threads.peak": 31,
"threads.daemon": 25,
"threads.totalStarted": 35,
"threads": 27,
"classes": 6659,
"classes.loaded": 6659,
"classes.unloaded": 0,
"gc.ps_scavenge.count": 24,
"gc.ps_scavenge.time": 999,
"gc.ps_marksweep.count": 1,
"gc.ps_marksweep.time": 71,
"httpsessions.max": -1,
"httpsessions.active": 0,
"gauge.response.metrics": 20,
"gauge.response.unmapped": 6005,
"gauge.response.login": 1,
"gauge.response.star-star.favicon.ico": 1878,
"counter.status.200.star-star.favicon.ico": 1,
"counter.status.200.metrics": 30,
"counter.status.302.unmapped": 3,
"counter.status.200.login": 2
}
Previously I used Httpbeat and everything was great; the field names in the Elasticsearch index were compatible. Since I moved to the http module everything has changed. I'm using it to get predefined dashboards in Kibana.
Any help please?

Multiple paths in http module - metricbeats

I am using the http module of Metricbeat to monitor JMX. I am using the http module instead of the jolokia module because the latter lacks wildcard support at this point. The example configuration in the documentation is as follows.
- module: http
  metricsets: ["json"]
  period: 10s
  hosts: ["localhost:80"]
  namespace: "json_namespace"
  path: "/jolokia/"
  body: '{"type" : "read", "mbean" : "kafka.consumer:type=*,client-id=*", "attribute" : "count"}'
  method: "POST"
This works fine and I am able to get data into Kibana. I see errors when I configure it as follows to call multiple paths.
- module: http
  metricsets: ["json"]
  enabled: true
  period: 10s
  hosts: ["localhost:80"]
  namespace: "metrics"
  method: POST
  paths:
    - path: "/jolokia/"
      body: '{"type" : "read", "mbean" : "kafka.consumer:type=*,client-id=*", "attribute" : "bytes-consumed-rate"}'
    - path: "/jolokia/"
      body: '{"type" : "read", "mbean" : "kafka.consumer:type=*,client-id=*", "attribute" : "commit-latency-avg"}'
This does not seem to be the right config, and I see that the http events have had failures:
2018/02/26 19:53:18.315740 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=4767600 beat.memstats.memory_alloc=4016168 beat.memstats.memory_total=47474256 libbeat.config.module.running=3 libbeat.output.read.bytes=4186 libbeat.output.write.bytes=16907 libbeat.pipeline.clients=7 libbeat.pipeline.events.active=0 libbeat.pipeline.events.published=18 libbeat.pipeline.events.total=18 libbeat.pipeline.queue.acked=18 metricbeat.http.json.events=3 metricbeat.http.json.failures=3
Documentation on how to set up the http module: Example configuration
I had to query multiple URLs of my REST API, and I achieved that by declaring multiple http module entries with different host URLs. The following is an example:
- module: http
  metricsets: ["json"]
  period: 3600s
  hosts: ["http://localhost/Projects/method1/"]
  namespace: "testmethods"
  method: "GET"
  enabled: true
- module: http
  metricsets: ["json"]
  period: 3600s
  hosts: ["http://localhost/Projects/method2/"]
  namespace: "testmethods"
  method: "GET"
  enabled: true
This let me achieve multiple paths for the same module.
Multiple paths are not supported by the http module's json metricset.
What you found in the config example is for the http module's server metricset. This metricset does not query URLs. Instead, it opens an HTTP server on the specified port and can receive input on multiple paths, which are used to separate data into different namespaces.
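For completeness, a minimal sketch of that server metricset (port and paths are illustrative); other processes then push JSON to these paths, and each path is stored under its own namespace:

- module: http
  metricsets: ["server"]
  host: "localhost"
  port: "8080"
  enabled: true
  paths:
    - path: "/foo"
      namespace: "foo"
    - path: "/bar"
      namespace: "bar"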
