Is there a way to measure CPU usage and the utilization of different resources (CPU, threads, memory, etc.) using Dropwizard in Spring Boot?
Use spring-boot-actuator for that. There is already a /metrics endpoint for the data you are asking for.
Check systemload.average, mem, mem.free, threads etc for the exact information.
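For illustration, the /metrics endpoint in Spring Boot 1.x returns a flat JSON document roughly like this (the values below are placeholders):

{
  "mem": 371396,
  "mem.free": 151666,
  "processors": 4,
  "uptime": 128004,
  "systemload.average": 0.35,
  "heap.used": 219730,
  "threads.peak": 25,
  "threads": 23
}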
For more information check:
https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-metrics.html
https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-metrics.html#production-ready-dropwizard-metrics
A default MetricRegistry Spring bean will be created when you declare
a dependency to the io.dropwizard.metrics:metrics-core library; you
can also register your own @Bean instance if you need customizations.
Users of the Dropwizard ‘Metrics’ library will find that Spring Boot
metrics are automatically published to
com.codahale.metrics.MetricRegistry. Metrics from the MetricRegistry
are also automatically exposed via the /metrics endpoint.
When Dropwizard metrics are in use, the default CounterService and
GaugeService are replaced with a DropwizardMetricServices, which is a
wrapper around the MetricRegistry (so you can @Autowired one of those
services and use it as normal). You can also create “special”
Dropwizard metrics by prefixing your metric names with the appropriate
type (i.e. timer.*, histogram.* for gauges, and meter.* for counters).
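For example, a minimal sketch of that last point, assuming Spring Boot 1.x with io.dropwizard.metrics:metrics-core on the classpath (the service and metric names here are illustrative, not prescribed by the docs):

import org.springframework.boot.actuate.metrics.dropwizard.DropwizardMetricServices;
import org.springframework.stereotype.Service;

@Service
public class LegacyCallMetrics {

    private final DropwizardMetricServices metricServices;

    public LegacyCallMetrics(DropwizardMetricServices metricServices) {
        this.metricServices = metricServices;
    }

    public void record(long durationMillis) {
        // The "timer." prefix creates a Dropwizard Timer, "meter." a Meter
        metricServices.submit("timer.legacy.call", durationMillis);
        metricServices.increment("meter.legacy.calls");
    }
}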
io.micrometer contains disk space metrics (io.micrometer.core.instrument.binder.jvm.DiskSpaceMetrics), but it doesn't seem to be enabled by default. There is no metric data. How do I enable this metric so that it can be used by Prometheus?
Metrics about disk space are exposed as part of the health endpoint, which is provided by Spring Boot Actuator (dependency: org.springframework.boot:spring-boot-starter-actuator).
The health endpoint can be exposed as follows in the application.properties file (by default, it is already exposed):
management.endpoints.web.exposure.include=health
Then, you can enable detailed disk space information as follows:
management.endpoint.health.show-components=always
management.endpoint.health.show-details=always
management.health.diskspace.enabled=true
In production, you might want to use when_authorized instead of always, so that the information is not publicly available.
Finally, you can see the disk info through the HTTP endpoint /actuator/health.
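For illustration, with the settings above the disk information shows up as a diskSpace component in the health response, roughly like this (the numbers are placeholders):

{
  "status": "UP",
  "components": {
    "diskSpace": {
      "status": "UP",
      "details": {
        "total": 250685575168,
        "free": 100694470656,
        "threshold": 10485760
      }
    }
  }
}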
More info in the official docs.
The same metrics for Prometheus will be added in a future Spring Boot version. There's an open PR to add auto configuration for that. In the meantime, you can configure a bean yourself taking inspiration from the PR.
@Bean
public DiskSpaceMetrics diskSpaceMetrics() {
    // DiskSpaceMetrics is a MeterBinder, so Spring Boot binds it to the MeterRegistry automatically
    return new DiskSpaceMetrics(new File("."));
}
We have a Spring Boot microservice which should get some data from an old / legacy system. This microservice exposes an external, modern REST API. Sometimes we have to issue 7-10 requests to the legacy system in order to get all the data we need for a single API call. Unfortunately we can't use Reactor / WebClient and have to stick with WebServiceTemplate to issue those "legacy" calls. We also can't use the approach from Reactive Spring WebClient - Making a SOAP call.
What is the best way to scale such a microservice in Kubernetes? We have serious concerns that the thread pool used for the parallel WebServiceTemplate invocations will be depleted very quickly, but I'm not sure whether creating and exposing a custom metric based on active thread count / thread pool size is a good idea.
Any advice will be helpful.
1. Enable the Prometheus exporter in Spring (see the sketch after this list).
2. Make sure the metrics are scraped. You're going to watch for a threadpool_size metric. Refer to your k8s/Prometheus distro docs to get Prometheus service discovery working for you.
3. Write a horizontal pod autoscaler (HPA) based on a Prometheus metric:
   - Set up Prometheus-Adapter and follow the HPA walkthrough.
   - Or follow this guide: https://github.com/stefanprodan/k8s-prom-hpa
Depending on which k8s distro you are using, you might have different ways to get Prometheus and Prometheus service discovery:
   - (example platform built-in) https://cloud.google.com/stackdriver/docs/solutions/gke/prometheus
   - (example product) https://docs.datadoghq.com/integrations/prometheus/
   - (example open source) https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
   - any other Prometheus solution
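A minimal sketch of the first step, assuming spring-boot-starter-actuator and micrometer-registry-prometheus are on the classpath and that the legacy WebServiceTemplate calls run on a dedicated executor (the class, bean, and executor names below are illustrative, not from the question). You also need management.endpoints.web.exposure.include=prometheus so the /actuator/prometheus endpoint is exposed:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.jvm.ExecutorServiceMetrics;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LegacyCallExecutorConfig {

    @Bean
    public ExecutorService legacyCallExecutor(MeterRegistry registry) {
        ExecutorService executor = Executors.newFixedThreadPool(10);
        // Wraps the executor so Micrometer publishes pool-size / active / queued gauges
        // that Prometheus can scrape and an HPA can then react to.
        return ExecutorServiceMetrics.monitor(registry, executor, "legacy-calls");
    }
}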
I have a Spring Boot 2 application that already has caching implemented with the Caffeine cache manager. Caching is implemented in the standard way with the @Cacheable, @CacheEvict and @CachePut annotations.
I migrated the app to use Redis so that the cache is distributed between pods.
The problem now is with metrics. Before the migration, Caffeine exposed cache metrics like cache_puts_total, cache_gets_total, etc., but now there is nothing. Is there something implemented for metrics in RedisCacheManager? I cannot find anything.
Unfortunately, as you can see in the Spring Boot documentation (52. Metrics), Spring Boot does not provide cache statistics for Redis by default:
By default, Spring Boot provides cache statistics for EhCache, Hazelcast, Infinispan, JCache and Guava. You can add additional CacheStatisticsProvider beans if your favourite caching library isn’t supported out of the box.
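If you do want to wire something up yourself, one rough sketch (not from the quoted docs; the class, cache name, and where you call it are all made up for illustration) is to register Micrometer counters that follow the same naming as the built-in cache metrics and increment them around your cache access:

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;

// Illustrative only: you decide where to call hit()/miss()/put() around your cache lookups.
public class ManualCacheCounters {

    private final Counter hits;
    private final Counter misses;
    private final Counter puts;

    public ManualCacheCounters(MeterRegistry registry, String cacheName) {
        // "cache.gets" / "cache.puts" show up as cache_gets_total / cache_puts_total in Prometheus
        this.hits = Counter.builder("cache.gets").tag("cache", cacheName).tag("result", "hit").register(registry);
        this.misses = Counter.builder("cache.gets").tag("cache", cacheName).tag("result", "miss").register(registry);
        this.puts = Counter.builder("cache.puts").tag("cache", cacheName).register(registry);
    }

    public void hit() { hits.increment(); }
    public void miss() { misses.increment(); }
    public void put() { puts.increment(); }
}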
As another alternative to implementing this yourself, you can use Redisson. This Redis client comes with a Spring integration that exposes Prometheus metrics. Your metrics will look just like you hinted:
# HELP cache_gets_total the number of times cache lookup methods have returned an uncached (newly loaded) value, or null
# TYPE cache_gets_total counter
cache_gets_total{cache="...",cacheManager="...",name="...",result="miss",} 0.0
cache_gets_total{cache="...",cacheManager="...",name="...",result="hit",} 0.0
# HELP cache_puts_total The number of entries added to the cache
# TYPE cache_puts_total counter
cache_puts_total{cache="...",cacheManager="...",name="...",} 0.0
I want to disable all builtin metrics (jvm, cpu, etc) but keep my custom metrics.
When I enable Spring Boot Actuator metrics together with Datadog, I end up with 320+ metrics sent to Datadog. Most of these metrics are the built-in core metrics (JVM metrics, CPU metrics, file descriptor metrics); only 5 of them are my custom metrics, which are the ones I actually want to send to Datadog.
According to this section of the Spring Boot documentation:
Spring Boot also configures built-in instrumentation (i.e. MeterBinder
implementations) that you can control via configuration or dedicated
annotation markers
but there is no direct example of how to exclude those metrics.
From what I found in this other SO question, one way to control it is:
management.metrics.enable.all=false
management.metrics.enable.jvm=true
and that removes all the metrics except the JVM ones. But it also removes my custom metrics.
I don't see how I can re-enable my custom metrics.
Just for the record, this is how I register the custom metrics:
@Autowired
public void setMeterRegistry(MeterRegistry registry) {
    this.meterRegistry = registry;
}
....
Counter n_event_in = this.meterRegistry.counter("n_events_in");
This works OK, as long as management.metrics.enable.all=true.
So how can I disable all core metrics, but keep my custom metrics?
Your metrics should have a common prefix like myapp.metric1, myapp.metric2, etc.
Then you can disable all metrics and explicitly enable all myapp.* metrics like so:
application.properties:
management.metrics.enable.all=false
management.metrics.enable.myapp=true
The management.metrics.enable.<your_custom_prefix> property will enable all <your_custom_prefix>.* metrics.
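For example, with the counter from the question renamed to carry the prefix (myapp here is just an illustrative prefix):

// Registered as myapp.n_events_in, so management.metrics.enable.myapp=true keeps it enabled
Counter n_event_in = this.meterRegistry.counter("myapp.n_events_in");
n_event_in.increment();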
If you want to enable some of the built-in core metrics again, for example re-enabling jvm.*, you can do:
management.metrics.enable.all=false
management.metrics.enable.myapp=true
management.metrics.enable.jvm=true
I've created a sample project on GitHub that disables core metrics, enables custom metrics and jvm.* metrics, and sends them to Datadog.
I have a number of applications that are using the SpringBoot actuator to publish metrics to the /metrics endpoint.
I have some other applications that are also using Micrometer to publish metrics to a /prometheus endpoint.
And finally, I have a cloud provider that will only allow me to pull metrics from a single endpoint. They have many pre-prepared Grafana dashboards, but most are targeted at the Actuator variable names, and some are targeted at the Micrometer variable names.
Micrometer puts out the same data, but it uses different names than Actuator, eg "jvm_memory" instead of "mem".
I would really like to find a way to merge both of these data sources so that they dump data to a single endpoint, and all of my Grafana dashboards would just work with all of the applications.
But I'm at a loss as to the best way to do this. Is there a way to tell Micrometer to use /metrics as a datasource so that any time it is polled it will include those?
Any thoughts are greatly appreciated.
The best solution probably depends on the complexity of your dashboard. You might just configure a set of gauges to report the value under a different name and then only use the Micrometer scrape endpoint. For example:
@Bean
public MeterBinder mapToOldNames() {
    return r -> {
        r.gauge("mem", Tags.empty(), r, r2 -> r2.find("jvm.memory.used").gauges()
                .stream().mapToDouble(Gauge::value).sum());
    };
}
Notice how in this case we are converting a memory gauge that is dimensioned in Micrometer (against the different aspects of heap/non-heap memory) and rolling them up into one gauge to match the old way.
For Spring Boot 1.5 you could do something like the Prometheus simpleclient_spring_boot does.
You collect the PublicMetrics from the actuator-metrics context and expose/register them as Gauges/Counters in the Micrometer MeterRegistry. This in turn will expose those actuator metrics under your Prometheus scrape endpoint.
I assume you'd filter out the non-functional metrics that are duplicates of the Micrometer ones, so the only thing I can think of carrying over this way is functional/business metrics. But if you have the chance to actually migrate the code to Micrometer, I'd say that's the better approach.
I haven't tried this, just remembered I had seen this concept.
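For what it's worth, a rough, untested sketch of that idea on Spring Boot 1.5 with micrometer-spring-legacy (the bean name is made up, and you'd still want to filter out names that duplicate Micrometer's own metrics):

import java.util.Collection;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.binder.MeterBinder;
import org.springframework.boot.actuate.endpoint.PublicMetrics;
import org.springframework.context.annotation.Bean;

@Bean
public MeterBinder actuatorMetricsBridge(Collection<PublicMetrics> publicMetrics) {
    // Registers one Micrometer gauge per actuator metric name; each gauge looks the
    // current value up from the PublicMetrics beans whenever it is scraped.
    return registry -> publicMetrics.stream()
            .flatMap(pm -> pm.metrics().stream())
            .map(metric -> metric.getName())
            .distinct()
            .forEach(name -> Gauge.builder(name, publicMetrics,
                    pms -> pms.stream()
                            .flatMap(pm -> pm.metrics().stream())
                            .filter(m -> name.equals(m.getName()))
                            .mapToDouble(m -> m.getValue().doubleValue())
                            .findFirst()
                            .orElse(Double.NaN))
                    .register(registry));
}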