Log metrics in a Spring Boot application using LoggingMeterRegistry - spring-boot

I have some metrics being created in my Spring Boot application and want to log the details of the metrics in a specific format, as below:
2018-07-06 17:00:28,884 [metrics-logger-reporter-thread-1] [INFO] (com.codahale.metrics.Slf4jReporter) type=METER, name=io.dropwizard.jetty.MutableServletContextHandler.async-timeouts, count=0, mean_rate=0.0, m1=0.0, m5=0.0, m15=0.0, rate_unit=events/second
I'm currently using LoggingMeterRegistry to log my metrics, but the format they get printed in is: system.cpu.usage{} value=0.041922 (logged by io.micrometer.core.instrument.logging.LoggingMeterRegistry).
I know that the required format can be achieved through Codahale/Dropwizard metrics, but I want the same using LoggingMeterRegistry from Micrometer, since that's what I'm using to calculate my other metrics.
Any help?
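One avenue worth noting for later readers: LoggingMeterRegistry.builder(...) lets you supply a custom loggingSink(Consumer&lt;String&gt;) that receives each formatted line before it is logged, so you can reshape the lines there. Below is a stdlib-only sketch of such a reformatting consumer; the MetricLineFormatter class, its regex, and the exact output shape are illustrative assumptions of mine, not Micrometer API:

```java
import java.util.function.Consumer;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sink for LoggingMeterRegistry.builder(...).loggingSink(...):
// reshapes Micrometer's default "name{tags} value=0.04" lines into a
// comma-separated "name=..., value=..." style closer to Slf4jReporter's.
public class MetricLineFormatter implements Consumer<String> {
    private static final Pattern LINE =
            Pattern.compile("^(?<name>[^{\\s]+)\\{(?<tags>[^}]*)}\\s*(?<rest>.*)$");

    @Override
    public void accept(String line) {
        // In a real application this would delegate to an SLF4J logger.
        System.out.println(format(line));
    }

    static String format(String line) {
        Matcher m = LINE.matcher(line);
        if (!m.matches()) {
            return line; // pass through anything we don't recognize
        }
        // Turn the trailing "value=0.04 throughput=..." into comma-separated pairs
        String rest = m.group("rest").trim().replace(" ", ", ");
        String out = "name=" + m.group("name") + ", " + rest;
        String tags = m.group("tags");
        return tags.isEmpty() ? out : out + ", tags=" + tags;
    }

    public static void main(String[] args) {
        System.out.println(format("system.cpu.usage{} value=0.041922"));
        // name=system.cpu.usage, value=0.041922
    }
}
```

You would then register it with something like LoggingMeterRegistry.builder(LoggingRegistryConfig.DEFAULT).loggingSink(new MetricLineFormatter()).build().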

Related

spring boot and datadog getting metrics

I have configured the following in my application, but I can't seem to get some metrics from http.servlet.request published to Datadog.
Spring Boot 2.7.1
implementation("com.datadoghq:dd-trace-ot:0.98.1")
implementation("com.datadoghq:dd-trace-api:0.98.1")
implementation("io.micrometer:micrometer-core:1.9.2")
implementation("io.micrometer:micrometer-registry-statsd:1.9.2")
My properties are as follows:
management.endpoint.health.probes.enabled=true
management.metrics.export.datadog.step=1m
management.metrics.distribution.percentiles-histogram.http.server.requests=true
management.metrics.export.datadog.descriptions=true
management.metrics.web.server.request.autotime.enabled=true
management.endpoint.metrics.enabled=true
management.metrics.enable.logback=true
management.metrics.tags.application=my-app
management.metrics.export.statsd.enabled=true
management.metrics.export.statsd.host=${DD_HOST}
management.metrics.export.statsd.flavor=datadog
management.metrics.export.statsd.port=8125
So I can see http.servlet.request, but I can't seem to get http.servlet.request.max or http.servlet.request.count. I really wanted latency; is that even possible?
I'm using Tomcat with Spring as per standard. Anyone got any ideas why I'm not seeing these?
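One thing worth checking (an observation, not a confirmed fix for this setup): Spring MVC server timings are normally published under the metric name http.server.requests rather than http.servlet.request, and latency percentiles only appear once distribution statistics are opted into. With Spring Boot 2.7-style property names, that looks roughly like the following (the percentile values are illustrative):

```properties
# Ship client-side percentiles for server request timings
management.metrics.distribution.percentiles.http.server.requests=0.5,0.95,0.99
# Publish histogram buckets so the backend can compute latency distributions
management.metrics.distribution.percentiles-histogram.http.server.requests=true
```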

Disable spring application metrics

My Spring Boot application pushes spring_data_repository_invocations_percentile metrics automatically. The application has grown very big over time and has lots of repository classes, and this is causing issues with the Prometheus DB. Is there a way I can disable specific metrics like spring_data_repository_invocations_percentile or http_client_requests_percentile itself?
Sample metrics being pushed:
ad_manager_api_spring_data_repository_invocations_percentile{BusinessUnit="ads", Project="ads", TechTeam="ads", application="ad-manager-api", businessunit="ads", environment="staging", exception="None", exported_application="ad-manager-api", exported_environment="staging", instance="103.18.2.19:9273", job="ads-nodes-mumbai", method="findAllAppEvents", metric_type="gauge", phi="0.95", repository="MoEventRepository", state="SUCCESS", statistic="value", techteam="ads"}
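One commonly used approach (a sketch; verify the dotted metric names against your own registry) is Spring Boot's per-meter enable flags, which disable every meter whose name starts with the given prefix:

```properties
# Drop Spring Data repository invocation timers entirely
management.metrics.enable.spring.data.repository.invocations=false
# Likewise for HTTP client request metrics
management.metrics.enable.http.client.requests=false
```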

Spring Boot Sleuth - TraceId vs TraceIdString

I am learning about Sleuth tracing. While running the application, I can see logs with a trace ID (ec88298d62773aa6) along with a span ID and the application name. What I want to know is:
Is the ID available in the logs the traceIdString rather than the traceId?
What is the difference between the two?
And during log analysis, should we consider traceId or traceIdString?
Sample log
2021-10-07 16:35:04.421 INFO [demo,ec88298d62773aa6,ec88298d62773aa6] 1324 --- [nio-8080-exec-1] com.example.demo.demo.DemoApplication : inside controller method
Thanks for your response.
traceIdString is the hex representation of the traceId, as can be seen here: https://github.com/openzipkin/brave/blob/master/brave/src/main/java/brave/propagation/TraceContext.java#L218
During problem analysis you will usually see the hex representation in logs or the user interfaces of distributed tracing systems.
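To make the relationship concrete: the string in the log line is just the zero-padded lower-hex rendering of the numeric trace ID. A stdlib-only sketch (the class and method names here are mine, not Brave's):

```java
public class TraceIdHex {
    // Brave stores the trace ID as a long; traceIdString is its
    // zero-padded lower-hex rendering, which is what appears in logs.
    static String toTraceIdString(long traceId) {
        // %x treats the long as unsigned two's complement, %016 zero-pads to 16 digits
        return String.format("%016x", traceId);
    }

    public static void main(String[] args) {
        long traceId = 0xec88298d62773aa6L;
        System.out.println(toTraceIdString(traceId)); // ec88298d62773aa6
    }
}
```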

How to log the trace ID using Spring Boot custom filters

I have a few custom filters in my Spring Boot WebFlux API. The project has Spring Sleuth enabled; however, these filters are not logging the trace and span IDs in the log messages.
I made sure that the order was set properly for these filters.
Example:
2020-03-23 12:53:18.895 INFO [my-spring-boot,,,] 9569 --- [ctor-http-nio-3] c.d.a.f.test.myTestEnvWebFilter : Reading data from header
Can someone please provide your insights on this?
It might be due to the default sampler percentage configuration; take a look at this article for an example:
https://www.baeldung.com/tracing-services-with-zipkin
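For reference, with Spring Cloud Sleuth 2.x the sampler can be set to record every trace via a single property (1.0 means sample 100% of requests; lower it in production):

```properties
# Sample every request so trace/span IDs appear in all log lines
spring.sleuth.sampler.probability=1.0
```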

How to Add DataDog trace ID in Logs using Spring Boot + Logback

OK, I spent quite some time figuring out how to configure things to get the DataDog trace ID into logs, but couldn't get it working. To be clear, what I'm looking for is to see trace IDs in log messages, the same way that adding spring-cloud-starter-sleuth to the classpath automatically configures SLF4J/Logback to show trace IDs in log messages.
Where I've started:
We've got a simple web spring boot application running as a Docker container deployed as an AWS Elastic BeansTalk, whose logs go to CloudWatch and we read them there.
We have DataDog as a Java agent (thus no dependencies in pom.xml)
We have SLF4J/Logback in our dependencies list.
There are no other related dependencies (like dd-trace-ot or any opentracing libs).
What I did so far:
I found on SO that adding opentracing-spring-cloud-starter would add log integration automatically, but I couldn't get it working.
On the DD website, it says configuring the pattern is enough to see the IDs, but in our case it didn't work (is it because our logs aren't JSON?). Also, adding dd-trace-ot didn't help.
Notes:
We can't switch to JSON logs.
We can't switch to any other library (e.g. Sleuth).
We can't go away from CloudWatch.
Can someone tell me how exactly I need to configure the application to see trace IDs in log messages? Is there any documentation or samples I can look at?
Do you have the ability to add some parameters to the logs sent? From the documentation you should be able to inject the trace ID into your logs in a way that Datadog will interpret.
You can also look at a parser to extract the trace ID and span ID from the raw log. This documentation should help you with that.
From the documentation, if you don't have JSON logs, you need to include dd.trace_id and dd.span_id in your formatter:
If your logs are raw formatted, update your formatter to include
dd.trace_id and dd.span_id in your logger configuration:
<Pattern>%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %X{dd.trace_id:-0} %X{dd.span_id:-0} - %m%n</Pattern>
So if you add %X{dd.trace_id:-0} %X{dd.span_id:-0}, it should work.
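Putting that pattern into a complete configuration, a minimal logback-spring.xml might look like the following (the appender name and surrounding layout are illustrative; only the %X{dd.trace_id:-0} %X{dd.span_id:-0} keys come from the Datadog docs):

```xml
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <!-- MDC keys dd.trace_id / dd.span_id are populated by the Datadog Java agent -->
      <pattern>%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %X{dd.trace_id:-0} %X{dd.span_id:-0} - %m%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>
```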