I have a project with MassTransit (7.2.2) + RabbitMQ + Hangfire, with MassTransit Prometheus metrics enabled via MassTransit.Prometheus. I've noticed that messages sent or published by the scheduler consumer do not update the metrics. As a result there are no mt_publish_total and mt_send_total metrics for scheduled messages, and the mt_delivery_duration_seconds metric shows an infinite delivery time for such messages. Has anybody seen a similar issue?
It looks like a bug, but maybe some additional configuration is required.
Everything runs within a single service.
The scheduler is added with the AddMessageScheduler method.
Hangfire is added with the UseHangfireScheduler method.
Metrics are enabled with the UsePrometheusMetrics method and exposed by prometheus-net via an endpoint within the same application.
The absence of the specified metrics can be observed on the /metrics endpoint.
Registration code looks like this:
services.AddMassTransit(config =>
{
    config.AddMessageScheduler(new Uri("queue:my-scheduler"));
    config.UsingRabbitMq((ctx, cfg) =>
    {
        cfg.UsePrometheusMetrics(serviceName: "my.service.name");
        cfg.UseHangfireScheduler("my-scheduler");
    });
});
Related
I have a Spring Boot application (with the default Tomcat server) producing metrics for Prometheus using Micrometer. The application is configured with graceful shutdown.
When shutting down, I want to make sure Prometheus scrapes one last time to get the final metrics, but since the server stops accepting new requests, the Prometheus scrape request won't be handled.
I've looked at the Prometheus PushGateway, which allows me to explicitly push the metrics to Prometheus, although I wanted to avoid it and was looking for a solution from the Spring end. Is there any way I can avoid blocking Prometheus scrape requests upon graceful shutdown? Probably by overriding some shutdown event handler...
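For illustration, this is the rough shape of the shutdown hook I have in mind; the ContextClosedEvent listener and the 15-second delay (one assumed scrape interval) are my own guesses, and I haven't verified how this interacts with the graceful-shutdown lifecycle:

import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextClosedEvent;
import org.springframework.stereotype.Component;

@Component
public class FinalScrapeDelay implements ApplicationListener<ContextClosedEvent> {

    // Placeholder: one Prometheus scrape interval (assumed to be 15s).
    private static final long SCRAPE_INTERVAL_MS = 15_000;

    @Override
    public void onApplicationEvent(ContextClosedEvent event) {
        try {
            // Delay the rest of the shutdown so Prometheus gets one last chance to scrape.
            Thread.sleep(SCRAPE_INTERVAL_MS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}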
I cannot find any reference on how to implement collecting these metrics.
Can someone help with an example and references?
As the stats_example shows here, you can get the stats listed in STATISTICS.md. But as clearly mentioned in the example comments, you need to implement the metrics handling yourself:
Stats events are emitted as JSON (as string). Either directly forward the JSON to your statistics collector, or convert it to a map to extract fields of interest.
So in this case, in your application, you need to implement a metrics collector for something like Prometheus.
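As a rough illustration only (the stats callback wiring depends on which client you use, and the gauge names and JSON fields below are just examples taken from STATISTICS.md), the collector side could look something like this, parsing the stats JSON and exposing a few fields as Prometheus gauges:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.prometheus.client.Gauge;

public class KafkaStatsCollector {

    private final ObjectMapper mapper = new ObjectMapper();

    // Example gauges; register whichever STATISTICS.md fields you actually care about.
    private final Gauge msgCnt = Gauge.build()
            .name("librdkafka_msg_cnt").help("Messages currently in producer queues").register();
    private final Gauge replyQ = Gauge.build()
            .name("librdkafka_replyq").help("Ops waiting in queue for the application to serve").register();

    // Call this from your client's stats callback with the raw JSON string.
    public void onStats(String statsJson) throws Exception {
        JsonNode stats = mapper.readTree(statsJson);
        // Convert the JSON to a tree/map and extract the fields of interest.
        msgCnt.set(stats.path("msg_cnt").asDouble());
        replyQ.set(stats.path("replyq").asDouble());
    }
}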
And if you want full broker-side metrics, you can implement Kafka monitoring as the Kafka documentation explains here:
Kafka uses Yammer Metrics for metrics reporting in the server. The Java clients use Kafka Metrics, a built-in metrics registry that minimizes transitive dependencies pulled into client applications. Both expose metrics via JMX and can be configured to report stats using pluggable stats reporters to hook up to your monitoring system.
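On the Java-client side, a pluggable reporter is just a class implementing org.apache.kafka.common.metrics.MetricsReporter registered via the metric.reporters config. The sketch below only shows the shape of such a reporter, with made-up names; how you forward the values to your monitoring system is up to you:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.KafkaMetric;
import org.apache.kafka.common.metrics.MetricsReporter;

// Register with e.g. props.put("metric.reporters", MyStatsReporter.class.getName());
public class MyStatsReporter implements MetricsReporter {

    // Keep references so a scheduler/scraper can poll metricValue() periodically.
    private final Map<MetricName, KafkaMetric> metrics = new ConcurrentHashMap<>();

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void init(List<KafkaMetric> initial) {
        initial.forEach(m -> metrics.put(m.metricName(), m));
    }

    @Override
    public void metricChange(KafkaMetric metric) {
        metrics.put(metric.metricName(), metric);
    }

    @Override
    public void metricRemoval(KafkaMetric metric) {
        metrics.remove(metric.metricName());
    }

    @Override
    public void close() {
        metrics.clear();
    }
}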
I have an application running Kafka consumers and want to monitor the processing time of each message consumed from the topic. The application is a Spring Boot application and exposes Kafka consumer metrics to the Spring Actuator Prometheus endpoint using a Micrometer registry.
Can I use kafka_consumer_commit_latency_avg_seconds or kafka_consumer_commit_latency_max_seconds to monitor or alert?
Those metrics have nothing to do with record processing time. spring-kafka provides metrics for that; see here.
Monitoring Listener Performance
Starting with version 2.3, the listener container will automatically create and update Micrometer Timers for the listener, if Micrometer is detected on the class path, and a single MeterRegistry is present in the application context. The timers can be disabled by setting the ContainerProperty micrometerEnabled to false.
Two timers are maintained - one for successful calls to the listener and one for failures.
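In other words, if Micrometer is already on the class path you usually don't need any extra code for listener timing; a plain listener like the hypothetical one below is enough (the listener id and topic name are made up):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    // The listener container times each call to this method, with separate timers for success and failure.
    @KafkaListener(id = "orderListener", topics = "orders")
    public void onMessage(String payload) {
        // ... record processing; its duration shows up in the listener timer ...
    }
}

With that in place, the listener timer (spring.kafka.listener, if I remember the name correctly) should appear on the /actuator/prometheus endpoint.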
I'm trying to get Micrometer metrics data into Splunk. Each metric endpoint gives the current value of a metric, so I would need Splunk to send an HTTP request to my application periodically, or I can write the metric values to a log file periodically.
So how do I get my application to write the metric values to logs?
If you are on Spring Boot 2.x and Micrometer 1.1.0+, you can create a bean of the periodic (1-minute) special logging registry; see https://github.com/micrometer-metrics/micrometer/issues/605:
@Bean
LoggingMeterRegistry loggingMeterRegistry() {
    // io.micrometer.core.instrument.logging.LoggingMeterRegistry logs all meters on a fixed step
    return new LoggingMeterRegistry();
}
This is by far the easiest way to log everything via the logging system.
Another alternative is to create a scheduled job that runs a method on a bean with an injected MeterRegistry, iterates over all the metrics (possibly filtering out the ones you don't need), and prepares a log entry in your own format; see the sketch below.
If you think about it, this is exactly what the metrics endpoint of Spring Boot Actuator does, except it returns the data via HTTP instead of writing it to a log.
Here is an up-to-date implementation of the relevant endpoint from the Spring Boot Actuator source.
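A rough sketch of that scheduled approach (the bean name, log format, and one-minute rate are arbitrary; it also assumes @EnableScheduling is present somewhere in the configuration):

import io.micrometer.core.instrument.MeterRegistry;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class MetricsLogJob {

    private static final Logger log = LoggerFactory.getLogger(MetricsLogJob.class);

    private final MeterRegistry registry;

    public MetricsLogJob(MeterRegistry registry) {
        this.registry = registry;
    }

    @Scheduled(fixedRate = 60_000)
    public void dumpMetrics() {
        registry.getMeters().forEach(meter ->
                // Filter here if you only need a subset, then format however Splunk expects.
                meter.measure().forEach(measurement ->
                        log.info("{} {} {}", meter.getId().getName(),
                                measurement.getStatistic(), measurement.getValue())));
    }
}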
I am working on adding logging/monitoring functionality for multiple Spring Integration deployments. I want to create a logger transaction at the start of the workflow and close the logger transaction at the end of the workflow. At the end of the logger transaction I will send the logs and metrics to a centralized logging server. At the end of the day I want to see the logs for all the messages that went through workflows across multiple Spring Integration deployments.
The Spring Integration deployments are spread across a lot of teams and I can't count on each team to add the logging code for me, so I want to write code that will run across all the deployments.
To start a log transaction, the solution was to use a global channel interceptor on a set of inbound messaging channels. All the workflows built across deployments use the same set of inbound channels, so the start-log-transaction interceptor will run.
I also pass the logger transaction details as part of the message headers.
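Roughly, the start-of-transaction side looks something like the sketch below (the channel pattern and header name are placeholders, and it assumes a Spring version where ChannelInterceptor has default methods; on older versions I extend ChannelInterceptorAdapter instead):

import java.util.UUID;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.config.GlobalChannelInterceptor;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptor;

@Configuration
public class LoggingInterceptorConfig {

    @Bean
    @GlobalChannelInterceptor(patterns = "inbound.*") // applied to the shared set of inbound channels
    public ChannelInterceptor startLogTransactionInterceptor() {
        return new ChannelInterceptor() {
            @Override
            public Message<?> preSend(Message<?> message, MessageChannel channel) {
                // Open the logger transaction and carry its id in a header through the whole flow.
                return MessageBuilder.fromMessage(message)
                        .setHeader("logTransactionId", UUID.randomUUID().toString())
                        .build();
            }
        };
    }
}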
But I am having trouble figuring out a solution for ending the transaction. The workflows can be synchronous as well as asynchronous, and not all endpoints within a workflow have an output channel.
Any strategies/ideas on how can I close the logger transaction ?
An example of a sample workflow is shown in the image below:
When you have a flow that ends with a channel adapter (or other endpoint that produces no result), make the last channel a publish-subscribe-channel; subscribe a second endpoint to that channel that terminates the "transaction".
I generally prefer to add an order attribute on such endpoints - make the regular endpoint order="1" and the flow terminator order="2" - clearly indicating it is called after the main endpoint.
It is important that no task executor is added to the channel so the endpoints are called serially on the same thread.
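A rough Java DSL equivalent of that arrangement, with made-up bean and channel names (the same thing can be expressed in XML with a publish-subscribe-channel and order attributes on the two endpoints):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Configuration
public class FlowTailConfig {

    @Bean
    public IntegrationFlow flowTail() {
        return IntegrationFlows.from("lastChannel")
                // No executor on the pub-sub channel, so both subscribers run serially on the same thread.
                .publishSubscribeChannel(pubsub -> pubsub
                        // "order 1": the regular terminal endpoint that produces no result
                        .subscribe(sub -> sub.handle("mainHandler", "handle"))
                        // "order 2": the extra subscriber that closes the logger transaction
                        .subscribe(sub -> sub.handle("logTransactionCloser", "close")))
                .get();
    }
}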