Spring Actuator Metrics generate logs - spring

I'm trying to get micrometer metrics data to Splunk. Each metric endpoint gives the current value of a metric, so I would need Splunk to send a http request to my application periodically, or I can write the metric values to a log file periodically.
So how do I get my application to write the metric values to logs?

If you are on Spring Boot 2.x and Micrometer 1.1.0+, you can create a bean for a special logging registry that periodically (every 1 minute by default) writes all metrics to the log; see https://github.com/micrometer-metrics/micrometer/issues/605:
@Bean
LoggingMeterRegistry loggingMeterRegistry() {
    // io.micrometer.core.instrument.logging.LoggingMeterRegistry
    return new LoggingMeterRegistry();
}
This is by far the easiest way to log everything through the logging system.
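If you want an interval other than the default one-minute step, one option (a sketch only; assumes Micrometer 1.1.0+ as above) is to pass your own LoggingRegistryConfig:

@Bean
LoggingMeterRegistry loggingMeterRegistry() {
    LoggingRegistryConfig config = new LoggingRegistryConfig() {
        @Override
        public String get(String key) {
            return null; // accept the defaults for everything else
        }

        @Override
        public Duration step() {
            return Duration.ofSeconds(30); // log every 30 seconds instead of every minute
        }
    };
    return new LoggingMeterRegistry(config, Clock.SYSTEM);
}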
Another alternative is to create a scheduled job that runs a method on a bean with an injected MeterRegistry, iterates over all the metrics (possibly filtering out the ones you don't need), and writes a log line in whatever format you want (a sketch follows at the end of this answer).
If you think about it, this is exactly what the metrics endpoint of Spring Boot Actuator does, except that it returns the data over HTTP instead of writing it to a log.
Here is an up-to-date implementation of the relevant endpoint from the Spring Boot Actuator source.
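A rough sketch of that scheduled-job alternative (the class name, interval and log format here are illustrative; it assumes @EnableScheduling is active and a MeterRegistry bean exists):

import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.MeterRegistry;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class MetricsLogJob {

    private static final Logger log = LoggerFactory.getLogger(MetricsLogJob.class);

    private final MeterRegistry registry;

    public MetricsLogJob(MeterRegistry registry) {
        this.registry = registry;
    }

    @Scheduled(fixedRate = 60_000) // once a minute
    public void logMetrics() {
        for (Meter meter : registry.getMeters()) {
            // Filter here if you only want a subset of the metrics
            StringBuilder line = new StringBuilder(meter.getId().getName());
            meter.measure().forEach(m ->
                    line.append(' ').append(m.getStatistic()).append('=').append(m.getValue()));
            log.info(line.toString());
        }
    }
}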

Related

Reliable way to publish metrics using micrometer and Spring Cloud Functions

I am working on a metric collector which uses Micrometer and we've decided to use it as a serverless function due to the nature of the metrics it is collecting.
We are using Kotlin with Spring Cloud Functions and the AWS Adapter.
We have a very simple function using the Bean method from the docs. In Micrometer, metrics are usually sent on a schedule based on a configured step (1m, 30s, etc.).
However, because this is a serverless function, we want to send them as the Lambda is invoked. I've attempted to do this by listening for Spring's ContextClosedEvent and manually closing the Micrometer registry there, which pushes the metrics to our backend.
When doing this I expected there to be a new/different context for each Lambda invocation, but it looks like after the initial cold start the warm-start invocations share some context, or the context isn't being re-created/instantiated on each invocation.
Can you offer any insight into whether this is the case and the expected behaviour, and perhaps a more reliable way to close the Micrometer registry? The current pattern can cause metrics to be dropped, because the context isn't always torn down per invocation and so the registry close (and final publish) doesn't always happen.
Thanks!
MeterRegistry has a close method that you can call yourself. Also, depending on which registry you extend, you may find a stop method too (close should call stop).
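A minimal Java sketch of that idea, calling close() from the function itself rather than waiting for a ContextClosedEvent (the question uses Kotlin; the names below are made up, and it assumes a push-based registry such as a StepMeterRegistry implementation):

import java.util.function.Function;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

@Component
public class UppercaseFunction implements Function<String, String> {

    private final MeterRegistry registry;

    public UppercaseFunction(MeterRegistry registry) {
        this.registry = registry;
    }

    @Override
    public String apply(String input) {
        try {
            registry.counter("app.invocations").increment();
            return input.toUpperCase();
        } finally {
            // For push-based registries, close() does a final publish and then stops the
            // registry. Once closed it will not publish again, which is exactly the
            // warm-start reuse problem described in the question.
            registry.close();
        }
    }
}

Whether this is reliable still depends on whether the same registry instance is reused across warm starts, as described above.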

Spring Boot Micrometer Influxdb custom data insert

I have a Spring Boot application which consumes data from a Kafka topic. I am using Micrometer and InfluxDB for monitoring. I read in the documentation that by adding micrometer-registry-influx we automatically enable exporting data to InfluxDB. I have some questions on this:
What kind of data micrometer automatically adds to InfluxDB?
Can we add custom data to InfluxDB according to my application?
How can I publish custom or my application specific data to InfluxDB?
How can I disable adding default data to InfluxDB?
As I understand it from the documentation, the standard output set is described here:
Adding your own metrics
Meter filters (here you can exclude the standard metrics you don't need)
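For the custom-data and disable-defaults questions, a hedged sketch (the metric name and class are made up; it assumes micrometer-registry-influx is on the classpath and the InfluxDB export properties are configured):

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

@Component
public class ConsumedRecordsMetrics {

    private final Counter consumedRecords;

    public ConsumedRecordsMetrics(MeterRegistry registry) {
        // A custom, application-specific meter; it is exported to InfluxDB like any other
        this.consumedRecords = Counter.builder("app.kafka.records.consumed")
                .description("Number of records consumed from the topic")
                .register(registry);
    }

    public void recordConsumed() {
        consumedRecords.increment();
    }
}

Default metric groups can be switched off with properties of the form management.metrics.enable.<prefix>=false (for example management.metrics.enable.jvm=false), or more selectively with a MeterFilter bean.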

How to find the processing time of Kafka messages?

I have an application running Kafka consumers and I want to monitor the processing time of each message consumed from the topic. The application is a Spring Boot application and exposes Kafka consumer metrics to the Spring Actuator Prometheus endpoint using the Micrometer registry.
Can I use kafka_consumer_commit_latency_avg_seconds or kafka_consumer_commit_latency_max_seconds to monitor or alert?
Those metrics have nothing to do with record processing time. spring-kafka provides metrics for that; see here.
Monitoring Listener Performance
Starting with version 2.3, the listener container will automatically create and update Micrometer Timers for the listener, if Micrometer is detected on the class path, and a single MeterRegistry is present in the application context. The timers can be disabled by setting the ContainerProperty micrometerEnabled to false.
Two timers are maintained - one for successful calls to the listener and one for failures.
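If you also want a processing-time metric of your own on top of the spring-kafka listener timers, a sketch (topic and metric names are illustrative) using a Micrometer Timer:

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    private final Timer processingTimer;

    public OrderListener(MeterRegistry registry) {
        // Custom timer in addition to spring-kafka's automatic "spring.kafka.listener" timers
        this.processingTimer = Timer.builder("app.kafka.record.processing")
                .description("Time spent processing one consumed record")
                .register(registry);
    }

    @KafkaListener(topics = "orders")
    public void onMessage(ConsumerRecord<String, String> record) {
        processingTimer.record(() -> process(record));
    }

    private void process(ConsumerRecord<String, String> record) {
        // business logic goes here
    }
}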

Merge Spring Boot actuator and Micrometer data on one endpoint

I have a number of applications that are using the SpringBoot actuator to publish metrics to the /metrics endpoint.
I have some other applications that are also using Micrometer to publish metrics to a /prometheus endpoint.
And finally, I have a cloud provider that will only let me pull metrics from a single endpoint. They have many pre-built Grafana dashboards, but most are targeted at the Actuator variable names; some are targeted at the Micrometer variable names.
Micrometer puts out the same data, but under different names than Actuator, e.g. "jvm_memory" instead of "mem".
I would really like to find a way to merge both of these data sources so that they dump data to a single endpoint, and all of my Grafana dashboards would just work with all of the applications.
But I'm at a loss as to the best way to do this. Is there a way to tell Micrometer to use /metrics as a datasource so that any time it is polled it will include those?
Any thoughts are greatly appreciated.
The best solution probably depends on the complexity of your dashboard. You might just configure a set of gauges to report the value under a different name and then only use the Micrometer scrape endpoint. For example:
@Bean
public MeterBinder mapToOldNames() {
    return r -> {
        r.gauge("mem", Tags.empty(), r, r2 -> r2.find("jvm.memory.used").gauges()
                .stream().mapToDouble(Gauge::value).sum());
    };
}
Notice how in this case we are converting a memory gauge that is dimensioned in Micrometer (against the different aspects of heap/non-heap memory) and rolling them up into one gauge to match the old way.
For Spring Boot 1.5 you could do something like the Prometheus simpleclient_spring_boot does.
You collect the PublicMetrics from the actuator metrics context and register them as Gauges/Counters in the Micrometer MeterRegistry. This in turn exposes those Actuator metrics under your Prometheus scrape endpoint.
I assume you'd filter out the non-functional metrics that are duplicates of the Micrometer ones, so the only thing left to carry over would be functional/business metrics. But if you have the chance to actually change the code to Micrometer, I'd say that's the better approach.
I haven't tried this, just remembered I had seen this concept.
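An untested sketch of that bridging idea (bean and class names are made up; it assumes the Spring Boot 1.5 actuator PublicMetrics beans and Micrometer are both available, i.e. the mixed setup described in the question):

import java.util.Collection;
import io.micrometer.core.instrument.binder.MeterBinder;
import org.springframework.boot.actuate.endpoint.PublicMetrics;
import org.springframework.boot.actuate.metrics.Metric;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ActuatorToMicrometerBridge {

    @Bean
    public MeterBinder legacyActuatorMetrics(Collection<PublicMetrics> publicMetrics) {
        return registry -> {
            for (PublicMetrics source : publicMetrics) {
                for (Metric<?> metric : source.metrics()) {
                    String name = metric.getName();
                    // Re-read the value from the PublicMetrics source on every scrape,
                    // so the old Actuator names stay available on the Micrometer endpoint
                    registry.gauge(name, source, s -> s.metrics().stream()
                            .filter(m -> name.equals(m.getName()))
                            .mapToDouble(m -> m.getValue().doubleValue())
                            .findFirst()
                            .orElse(Double.NaN));
                }
            }
        };
    }
}

As the answer says, filtering out duplicates of the metrics Micrometer already provides would make sense here.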

How can I get the current number of client request threads in spring boot embedded tomcat?

I'd like to get the current number of active client request threads in a spring boot app using embedded tomcat so that I can expose it over actuator's metrics endpoint. I'm not looking for active sessions, but active request processing threads. Preferably, I'd like to get this data per connector as well.
Does anyone have any ideas on a good way to get at this information in spring boot?
I don't know if this is what you are looking for, but you can get several values like that via JMX. You can start your current Spring Boot app and open Java Mission Control ([JDK directory]/bin). Open the MBean browser and have a look at Tomcat -> ThreadPool -> [ConnectorName], where you'll find attributes such as currentThreadsBusy.
You can get those values programmatically, too.
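An untested sketch of reading the same attribute programmatically (the MBean domain for embedded Tomcat is usually Tomcat, and on newer Spring Boot versions you may need server.tomcat.mbeanregistry.enabled=true for these MBeans to exist):

import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class TomcatBusyThreads {

    // Prints the number of busy request-processing threads per connector
    public static void printBusyThreads() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // One ThreadPool MBean per connector, e.g. name="http-nio-8080"
        Set<ObjectName> pools = server.queryNames(
                new ObjectName("Tomcat:type=ThreadPool,name=*"), null);
        for (ObjectName pool : pools) {
            Object busy = server.getAttribute(pool, "currentThreadsBusy");
            System.out.println(pool.getKeyProperty("name") + " busy threads: " + busy);
        }
    }
}

To expose this over the Actuator metrics endpoint, the value can be registered as a Micrometer Gauge; Micrometer's TomcatMetrics binder also publishes a tomcat.threads.busy gauge, which may already cover this.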
