I have two services, ping and pong, where ping sends requests to pong. This metric shows up on the /metrics endpoint of the ping service:
gauge.servo.hystrix.hystrixcommand.http://pong.pongclient#hello().90
but it doesn't appear on the /prometheus endpoint. Other metrics appear on that endpoint, but not the Servo metrics with information about Feign/Hystrix HTTP requests.
How do I get those metrics to appear on the /prometheus endpoint?
I have the following dependencies in my build.gradle:
compile 'org.springframework.boot:spring-boot-starter-web'
compile 'org.springframework.boot:spring-boot-starter-actuator'
compile 'org.springframework.cloud:spring-cloud-starter-eureka'
compile 'org.springframework.cloud:spring-cloud-starter-hystrix'
compile 'org.springframework.cloud:spring-cloud-starter-feign'
compile 'org.springframework.retry:spring-retry'
compile "io.micrometer:micrometer-core:${micrometerVersion}"
compile "io.micrometer:micrometer-spring-legacy:${micrometerVersion}"
compile "io.micrometer:micrometer-registry-prometheus:${micrometerVersion}"
with the following versions:
springCloudVersion = 'Dalston.SR4'
micrometerVersion = '1.0.0-rc.4'
The code can be found here: https://github.com/fiunchinho/spring-resiliency
You need to manually add the plugin for Hystrix:
HystrixPlugins.getInstance().registerMetricsPublisher(new MicrometerMetricsPublisher(Metrics.globalRegistry));
You could add it in a @PostConstruct method in a configuration class.
I've created https://github.com/micrometer-metrics/micrometer/issues/237 to address the shortcoming in the future.
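For example, a minimal sketch of such a configuration class (the class name is illustrative; it simply wraps the registration call shown above):

import javax.annotation.PostConstruct;

import org.springframework.context.annotation.Configuration;

import com.netflix.hystrix.strategy.HystrixPlugins;

import io.micrometer.core.instrument.Metrics;
import io.micrometer.core.instrument.binder.hystrix.MicrometerMetricsPublisher;

@Configuration
public class HystrixMetricsConfig {

    @PostConstruct
    public void registerHystrixMetricsPublisher() {
        // Publish Hystrix command metrics to the global Micrometer registry,
        // which the Prometheus registry is bound to.
        HystrixPlugins.getInstance()
                .registerMetricsPublisher(new MicrometerMetricsPublisher(Metrics.globalRegistry));
    }
}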
checketts' answer did not work for me and threw java.lang.IllegalStateException: Another strategy was already registered on startup, but adding the HystrixMetricsBinder bean, which does more or less the same thing internally, did the trick.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import io.micrometer.core.instrument.binder.hystrix.HystrixMetricsBinder;

@Configuration
public class MetricsConfig {

    @Bean
    HystrixMetricsBinder registerHystrixMetricsBinder() {
        return new HystrixMetricsBinder();
    }
}
taken from https://stackoverflow.com/a/52740957/60518
You need to instrument and configure the Spring Boot services so that Prometheus can monitor them, as follows:
You need to include a dependency in the build.gradle file
You need to implement a metrics endpoint
You need to add the standard exporters from the Prometheus JVM client
For more implementation details on how to do it, see the examples here and also here; a rough sketch of the last two steps is shown below.
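As a sketch only: this assumes the simpleclient_hotspot and simpleclient_servlet artifacts from the Prometheus Java client are on the classpath, and the class name and endpoint path are placeholders.

import io.prometheus.client.exporter.MetricsServlet;
import io.prometheus.client.hotspot.DefaultExports;

import javax.annotation.PostConstruct;

import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PrometheusConfig {

    @PostConstruct
    public void registerJvmExporters() {
        // Registers the standard JVM collectors (memory, GC, threads, ...)
        // with the default CollectorRegistry.
        DefaultExports.initialize();
    }

    @Bean
    public ServletRegistrationBean prometheusEndpoint() {
        // Exposes the default CollectorRegistry under /prometheus.
        return new ServletRegistrationBean(new MetricsServlet(), "/prometheus");
    }
}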
My team is trying to get Spring Cloud Sleuth to work with the OpenTelemetry API. We observe that the spans, attributes (tags), and events are exported just fine to the OTEL collector.
The metrics we add, however, are not exported with the spans (or separately), which would be our expectation.
We have the following dependencies in our project:
implementation platform('org.springframework.cloud:spring-cloud-sleuth-otel-dependencies:1.1.1')
implementation('io.opentelemetry:opentelemetry-api')
implementation('org.springframework.cloud:spring-cloud-sleuth-api')
...
implementation 'org.springframework.cloud:spring-cloud-sleuth-otel-autoconfigure'
implementation('org.springframework.cloud:spring-cloud-starter-sleuth') {
exclude group: 'org.springframework.cloud', module: 'spring-cloud-sleuth-brave'
}
implementation 'io.opentelemetry:opentelemetry-exporter-otlp:1.22.0'
The code that adds metrics is as follows:
@Autowired
private final OpenTelemetry openTelemetry;
//...
// Make two attempts, one against each API. The first one unfortunately just gets a no-op meter provider.
DoubleCounter doubleCounter = GlobalOpenTelemetry.getMeter("io.opentelemetry.example.metrics")
        .counterBuilder("calculated_used_space")
        .setDescription("Counts disk space used by file extension.")
        .setUnit("MB")
        .ofDoubles()
        .build();
doubleCounter.add(2.0);

// This uses the autowired (SDK-backed) meter provider, which should export metrics with the span.
DoubleCounter build = openTelemetry.getMeter("com.jysk.some-app.some-metric")
        .counterBuilder("some-counter")
        .setDescription("some-description")
        .setUnit("pcs.")
        .ofDoubles()
        .build();
build.add(4.0);
Any insight into how we can get Spring Cloud Sleuth to export metrics to a configured collector would be appreciated. We are using the OpenTelemetry Collector.
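For context, the no-op behaviour mentioned above usually means that no SdkMeterProvider with a metric reader has been configured. A minimal sketch of wiring one up with the plain OpenTelemetry SDK, independent of Sleuth (the endpoint, class name, and instrument names are illustrative, and it assumes opentelemetry-sdk plus the OTLP exporter are on the classpath):

import java.util.concurrent.TimeUnit;

import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.metrics.DoubleCounter;
import io.opentelemetry.exporter.otlp.metrics.OtlpGrpcMetricExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader;

public class MetricsExportSketch {

    public static void main(String[] args) {
        // Periodically pushes metrics to an OTLP endpoint (the collector).
        OtlpGrpcMetricExporter exporter = OtlpGrpcMetricExporter.builder()
                .setEndpoint("http://localhost:4317")
                .build();

        SdkMeterProvider meterProvider = SdkMeterProvider.builder()
                .registerMetricReader(PeriodicMetricReader.builder(exporter).build())
                .build();

        OpenTelemetry openTelemetry = OpenTelemetrySdk.builder()
                .setMeterProvider(meterProvider)
                .build();

        DoubleCounter counter = openTelemetry.getMeter("com.example.sketch")
                .counterBuilder("some-counter")
                .ofDoubles()
                .build();
        counter.add(1.0);

        // Flush before the JVM exits so the counter actually reaches the collector.
        meterProvider.forceFlush().join(10, TimeUnit.SECONDS);
    }
}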
This is regarding the CDI spec of Quarkus. I would like to understand whether there is an equivalent of a configuration bean in Quarkus. How does one do any sort of configuration in Quarkus?
If I get it right, the original question is about @Configuration classes that can contain @Bean definitions. If so, then CDI producer methods and fields annotated with @javax.enterprise.inject.Produces are the corresponding alternative.
Application configuration is a completely different question though, and Jay is right that the Quarkus configuration reference is the ultimate source of information ;-).
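For illustration, a minimal sketch of such a producer method (GreetingService and its constructor are placeholders for whatever you would have returned from the Spring @Bean method):

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;

// Roughly what a Spring @Configuration class with an @Bean method becomes in Quarkus.
@ApplicationScoped
public class GreetingConfiguration {

    // The returned instance is registered as a CDI bean and can be injected elsewhere.
    @Produces
    @ApplicationScoped
    GreetingService greetingService() {
        return new GreetingService("hello");
    }
}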
First of all, it is important to read how the CDI spec of Quarkus differs from Spring.
Please refer to this guide:
https://quarkus.io/guides/cdi-reference
The key takeaway from this guide is that @Produces is the Quarkus alternative to a @Configuration bean.
Let us take an example of a library that requires configuration through code, for example the Microsoft Azure IoT Service Client.
import java.io.IOException;
import java.net.URISyntaxException;

import javax.enterprise.inject.Produces;

import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.jboss.logging.Logger;

public class IotHubConfiguration {

    @ConfigProperty(name = "iothub.device.connection.string")
    String connectionString;

    private static final Logger LOG = Logger.getLogger(IotHubConfiguration.class);

    @Produces
    public ServiceClient getIot() throws URISyntaxException, IOException {
        LOG.info("Inside Service Client bean");
        if (connectionString == null) {
            LOG.info("Connection String is null");
            throw new RuntimeException("IOT CONNECTION STRING IS NULL");
        }
        ServiceClient serviceClient = new ServiceClient(connectionString, IotHubServiceClientProtocol.AMQPS);
        serviceClient.open();
        LOG.info("opened Service Client Successfully");
        return serviceClient;
    }
}
For all libraries vertically integrated with Quarkus, application.properties can be used instead, and a driver object for that broker/database is then available directly through @Inject in your @ApplicationScoped/@Singleton bean. So, why is that?
To simplify and unify configuration.
To make sure no code is required for configuring anything, i.e. database config, broker config, Quarkus config, etc.
This drastically reduces the amount of code written for configuration and also the JUnit tests needed to cover that code.
Let us take an example where a Kafka producer configuration needs to be added in application.properties:
kafka.bootstrap.servers=${KAFKA_BROKER_URL:localhost:9092}
mp.messaging.outgoing.incoming_kafka_topic_test.topic=${KAFKA_INPUT_TOPIC_FOR_IOT_HUB:input_topic1}
mp.messaging.outgoing.incoming_kafka_topic_test.connector=smallrye-kafka
mp.messaging.outgoing.incoming_kafka_topic_test.value.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.incoming_kafka_topic_test.key.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.incoming_kafka_topic_test.health-readiness-enabled=true
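On the code side, publishing to that channel then needs nothing more than an injected Emitter. A sketch assuming the MicroProfile Reactive Messaging API with the smallrye-kafka connector; the class name is illustrative:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;

@ApplicationScoped
public class IotEventPublisher {

    // The channel name matches the mp.messaging.outgoing.* keys above;
    // broker, topic and serializers all come from application.properties.
    @Inject
    @Channel("incoming_kafka_topic_test")
    Emitter<String> emitter;

    public void publish(String payload) {
        emitter.send(payload);
    }
}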
For full blown project reference: https://github.com/JayGhiya/QuarkusExperiments/tree/initial_version_v1/KafkaProducerQuarkus
Quarkus References for Config:
https://quarkus.io/guides/config-reference
Example for reactive sql config: https://quarkus.io/guides/reactive-sql-clients
Now let us talk about a bonus feature that Quarkus provides which improves developer experience by at least an order of magnitude: profile-driven development and testing.
Quarkus provides three profiles:
dev - Activated when in development mode (i.e. quarkus:dev)
test - Activated when running tests
prod - The default profile when not running in development or test mode
Let us say that in the given example you wanted to use different topics for development and for production. Let us achieve that!
%dev.mp.messaging.outgoing.incoming_kafka_topic_test.topic=${KAFKA_INPUT_TOPIC_FOR_IOT_HUB:input_topic1}
%prod.mp.messaging.outgoing.incoming_kafka_topic_test.topic=${KAFKA_INPUT_TOPIC_FOR_IOT_HUB:prod_topic}
This is how simple it is. It is extremely useful in cases where your deployments run with SSL-enabled brokers/databases while for development purposes you have insecure local brokers/databases. This is a game changer.
I have a fairly simple Java app that listens to a Kafka topic for JSON messages.
These are the main dependencies and versions:
id 'org.springframework.boot' version '2.3.5.RELEASE'
...
set('springCloudVersion', "Hoxton.SR9")
...
implementation 'org.springframework.cloud:spring-cloud-stream'
implementation 'org.springframework.cloud:spring-cloud-stream-binder-kafka'
The application.properties config that specifies the JSON format:
spring.cloud.stream.bindings.listener-in-0.content-type = application/json
And the "core loop":
@Bean
public Consumer<Message<MyDataModel>> listener() {
    return message -> {
        ...
And it works like a charm. But now I'm trying to add a /metrics endpoint to the app, with the Actuator library:
implementation 'org.springframework.boot:spring-boot-starter-web'
implementation 'org.springframework.boot:spring-boot-starter-actuator'
After adding these to build.gradle, without changing anything in the code itself, the consumer in the above snippet fails to deserialize the incoming messages: every field of the model object is null.
Clearly, the spring-boot-starter-web package overrides the JSON handling mechanism that came with the spring-cloud-stream library, but I have no idea what to do about it. I have experimented with excluding parts of the web starter and changing the Spring Boot version, but no success yet.
Upgrading the Spring Boot version from 2.3.5.RELEASE to 2.4.2 and the Spring Cloud version from Hoxton.SR9 to 2020.0.1 solved the issue for us.
I am trying to set up Jaeger to collect traces from a Spring Boot application. When my app starts up, I get this warning message:
warn io.jaegertracing.internal.senders.SenderResolver - No sender factories available. Using NoopSender, meaning that data will not be sent anywhere!
I use this method to get the Jaeger tracer:
@Bean
Tracer jaegerTracer(@Value(defaulTraceName) String service) {
    SamplerConfiguration samplerConfig = SamplerConfiguration.fromEnv()
            .withType("const")
            .withParam(1);
    ReporterConfiguration reporterConfig = ReporterConfiguration.fromEnv()
            .withLogSpans(true);
    Configuration config = new Configuration(service)
            .withSampler(samplerConfig)
            .withReporter(reporterConfig);
    return config.getTracer();
}
I have manually instrumented the code, but no traces show up in the jaeger UI. I have been stuck on this problem for a few days now and would appreciate any help given!
In my pom file, I have dependencies on jaeger-core and opentracing-api.
Solved by adding a dependency on jaeger-thrift in the pom file.
I would like to disable the rabbit health check in my default RabbitMockConfiguration.
We have a Configuration that is imported via @Import. Unfortunately the Configuration does not prevent the health check from being added to the health indicator, as that happens as soon as spring-rabbit is on the classpath.
We have a workaround: we add a properties file to every service using that Configuration, which sets the property management.health.rabbit.enabled to false, but for us it would be much nicer to be able to disable that health check at configuration level.
For tests I thought about @TestPropertySource(properties = ["management.health.rabbit.enabled=false"]), but I could not find an equivalent to use on a @Configuration, as @PropertySource expects the location of a properties file and does not accept single properties.
Any idea what we can do?
Spring boot version: 2.2.4
Spring amqp version: 2.2.3
Spring Version: 5.2.3
If you want to change the behaviour of the health check, I would rather override it so that it states Rabbit is in mock mode.
To do so, just create a HealthIndicator bean named rabbitHealthIndicator:
@Bean
public HealthIndicator rabbitHealthIndicator() {
    return () -> Health.up().withDetail("version", "mock").build();
}
This has the effect of replacing the production one and exposes the fact that the app is running with a mock.
I guess you should add an 'ApplicationListener' and register the implementation in 'src/main/resources/META-INF/spring.factories' in the module containing your MockRedisConfiguration. This is described in more detail here.
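A minimal sketch of that approach, assuming the listener simply forces management.health.rabbit.enabled to false before auto-configuration runs (the class name and property-source name are illustrative):

import java.util.Map;

import org.springframework.boot.context.event.ApplicationEnvironmentPreparedEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.core.env.MapPropertySource;

// Registered in src/main/resources/META-INF/spring.factories as:
//   org.springframework.context.ApplicationListener=com.example.DisableRabbitHealthCheckListener
public class DisableRabbitHealthCheckListener
        implements ApplicationListener<ApplicationEnvironmentPreparedEvent> {

    @Override
    public void onApplicationEvent(ApplicationEnvironmentPreparedEvent event) {
        // Inject the property before auto-configuration evaluates it.
        event.getEnvironment().getPropertySources().addFirst(
                new MapPropertySource("mock-rabbit-overrides",
                        Map.of("management.health.rabbit.enabled", "false")));
    }
}

Because the listener ships with the module and is picked up via spring.factories, every service importing the mock configuration gets the override without adding its own properties file.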