Using AddMassTransitHostedService, a health check configuration is added, but it always reports unhealthy even after I configured the endpoints as in the example. My project is a WebApi where I don't have my consumers in a separate startup.
You're using AddBus, which has been deprecated in favor of UsingRabbitMq in v7.
When using AddBus, you have to configure the health check manually (it is done automatically when using the new v7 syntax). The documentation for the previous syntax shows how to configure it; in short:
cfg.UseHealthCheck(context);
Must be added so that the health checks are reported to the hosted service.
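For reference, a minimal sketch of where that call sits in the older AddBus registration (the host address and consumer name here are assumptions, not from your project):

services.AddMassTransit(x =>
{
    // x.AddConsumer<MyConsumer>();  // hypothetical consumer registration
    x.AddBus(context => Bus.Factory.CreateUsingRabbitMq(cfg =>
    {
        cfg.Host("localhost");           // assumed broker address
        cfg.UseHealthCheck(context);     // without this, the health check stays unhealthy
        cfg.ConfigureEndpoints(context);
    }));
});
services.AddMassTransitHostedService();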
Related
I wrote some microservices using Quarkus that communicate via Artemis. Now I want to add OpenTelemetry for tracing purposes.
What I already tried is to call service B from service A using HTTP/REST. Here the trace id from service A is automatically added to the header of the HTTP request and used in service B. So this works fine. In Jaeger I can see the correlation.
But how can this be achieved using Artemis as the messaging system? Do I have to (manually) add the trace id from service A to the message and read it in service B to somehow set up the context (I don't know whether this is possible)? Or is there an automatic mechanism like there is for HTTP requests?
I would appreciate any assistance.
I have to mention at this point that I have little experience with tracing so far.
There is no Quarkus or Quarkiverse extension or SmallRye library that provides integration between Artemis and OpenTelemetry yet.
Also, the OpenTelemetry messaging spec is still being worked on, because the correct way to correlate sent and received messages across services is still being defined at the OTel spec level.
However, I had exactly the same problem as you and wrote a manual instrumentation that you can use as inspiration: quarkus-observability-demo-activemq
It correlates the spans so that the sending service shows up as the parent of the receiving end.
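If you want to build the manual propagation yourself, the core idea is to use the OpenTelemetry propagation API to copy the current trace context into message properties when sending and restore it when receiving. A rough sketch, assuming the OpenTelemetry Java API and plain JMS (the class and method names below are illustrative, not taken from the demo):

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.context.Context;
import io.opentelemetry.context.Scope;
import io.opentelemetry.context.propagation.TextMapGetter;
import io.opentelemetry.context.propagation.TextMapSetter;
import java.util.Collections;
import java.util.Enumeration;
import javax.jms.JMSException;
import javax.jms.Message;

public class JmsTracePropagation {

    // Writes the propagation headers onto the message (with the default W3C propagator
    // these are "traceparent"/"tracestate", which are valid JMS property names).
    private static final TextMapSetter<Message> SETTER = (message, key, value) -> {
        try {
            message.setStringProperty(key, value);
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    };

    // Reads the same properties back on the consumer side.
    private static final TextMapGetter<Message> GETTER = new TextMapGetter<Message>() {
        @Override
        @SuppressWarnings("unchecked")
        public Iterable<String> keys(Message message) {
            try {
                return Collections.list((Enumeration<String>) message.getPropertyNames());
            } catch (JMSException e) {
                return Collections.emptyList();
            }
        }

        @Override
        public String get(Message message, String key) {
            try {
                return message.getStringProperty(key);
            } catch (JMSException e) {
                return null;
            }
        }
    };

    // Service A: call just before sending the message.
    public static void inject(Message message) {
        GlobalOpenTelemetry.getPropagators().getTextMapPropagator()
                .inject(Context.current(), message, SETTER);
    }

    // Service B: call when the message arrives; spans started while the returned
    // scope is open are parented to the sender's span.
    public static Scope extractAndMakeCurrent(Message message) {
        Context extracted = GlobalOpenTelemetry.getPropagators().getTextMapPropagator()
                .extract(Context.current(), message, GETTER);
        return extracted.makeCurrent();
    }
}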
I am playing around with Quarkus and I am trying to create an ingestion service which sends data to Kafka or to another REST endpoint. I have added the "quarkus-smallrye-reactive-messaging-kafka" and "quarkus-reactive-messaging-http" dependencies to the project. I want only one particular pipeline at a time, i.e. http->kafka or http->http, but I should be able to switch between them with a configuration update followed by a restart. I could achieve this by adding the two dependencies and the configuration shown below:
## Rest service configuration
mp.messaging.outgoing.messages.connector=smallrye-http
mp.messaging.outgoing.messages.method=POST
mp.messaging.outgoing.messages.url=http://localhost:9009/messages
## Kafka Ingestion configuration
## ----------------------------
#mp.messaging.outgoing.messages.connector=smallrye-kafka
#kafka.bootstrap.servers=host.docker.internal:9092
#mp.messaging.outgoing.messages.topic=messages
#mp.messaging.outgoing.messages.value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
Now the problem is that even though I have the Kafka connector commented out in my application.properties, the health check for Kafka still runs and reports that Kafka is down. I expected it not to run the Kafka health check, since it isn't configured. Is this possible today, and if not, does it make sense to raise it as a feature request?
Regards,
Health check for Kafka is disabled by default.
But the health check for reactive messaging is enabled by default; you can disable it via mp.messaging.outgoing.messages.health-enabled=false.
Note that, for your use case, you can also define different channels and disable the one you don't use instead of commenting out the configuration.
Disabling a channel can be done simply via mp.messaging.outgoing.messages.enabled=false.
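For example (the channel names here are made up; adapt them to your own configuration):
## HTTP channel, kept enabled
mp.messaging.outgoing.messages-http.connector=smallrye-http
mp.messaging.outgoing.messages-http.method=POST
mp.messaging.outgoing.messages-http.url=http://localhost:9009/messages
## Kafka channel, switched off (together with its health check) instead of commented out
mp.messaging.outgoing.messages-kafka.connector=smallrye-kafka
mp.messaging.outgoing.messages-kafka.topic=messages
mp.messaging.outgoing.messages-kafka.value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
mp.messaging.outgoing.messages-kafka.enabled=false
mp.messaging.outgoing.messages-kafka.health-enabled=false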
I have two docker instances that I launch with docker-compose.
One holds a Cassandra instance
One holds a Spring Boot application that tries to connect to that Cassandra instance.
However, the Spring Boot application will always fail, because it's trying to connect to a Cassandra instance that is not ready yet to take connections.
I have tried:
Using restart:always in Docker-compose
This still doesn't always work, because Cassandra might be up 'enough' to no longer crash the Spring Boot application, but not up 'enough' to have successfully created the table/column family. On top of that, this is a very hacky solution.
Using healthcheck
It seems like healthcheck in compose doesn't have restart capabilities
Using a bash script as entrypoint
In the hope that I could use netstat, ping, ... whatever to determine the readiness state of Cassandra.
Right now the only thing that really works is using that same bash script, sleeping the process for x seconds, then starting the jar. This is even more hacky...
Does anyone have an idea on how to solve this?
Thanks!
Does the Spring Boot service defined in the docker-compose.yml use depends_on for the cassandra service? If yes, the Spring Boot container is only started after the cassandra container has been started; note that on its own this waits only for the container to start, not for Cassandra to actually be ready to accept connections.
https://docs.docker.com/compose/compose-file/#depends_on
Take a look at this GitHub repository to find a healthcheck for the cassandra service.
https://github.com/docker-library/healthcheck
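For illustration, a minimal compose sketch combining depends_on with such a healthcheck (the service names and the cqlsh-based check are assumptions; the condition form below exists in the compose file format 2.x but was dropped in the version 3 format):

version: "2.1"
services:
  cassandra:
    image: cassandra:3.11
    healthcheck:
      test: ["CMD-SHELL", "cqlsh -e 'describe keyspaces'"]
      interval: 15s
      timeout: 10s
      retries: 10
  app:
    build: .
    depends_on:
      cassandra:
        condition: service_healthy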
CONCLUSION
After some discussion we found out that docker-compose does not seem to provide functionality for waiting until services are up and healthy, the way Kubernetes and OpenShift do (see the comments below). The recommended workaround is a wrapper script (docker-entrypoint.sh) which waits for the dependency to come up, but that requires binaries the actual service shouldn't need, such as the cassandra client binary. Additionally, the service depending on cassandra could then never come up if cassandra doesn't, which shouldn't happen.
A main point with microservices is that they have to be resilient to failures and are not supposed to die or fail to come up when a service they depend on is currently unavailable or unexpectedly disappears. Therefore the microservice should be implemented so that it retries the connection after startup or after an unexpected disappearance. 'Unexpected' is actually the wrong word in this context, because you should always expect such issues in a distributed environment, and even with docker-compose you will face issues like that, as discussed in this topic.
The following link points to a tutorial which helped me integrate cassandra properly into a Spring Boot application. It shows how to obtain a cassandra connection with retry behavior, so the service is resilient to a not-yet-available cassandra database and no longer fails to start. Hope this helps others as well.
https://dzone.com/articles/containerising-a-spring-data-cassandra-application
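The gist of the retry approach, as a rough sketch (DataStax Java driver 3.x API; the class name, host, and timings are illustrative, and the article wires this kind of logic into a Spring @Configuration):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CassandraConnector {

    // Keeps retrying until Cassandra is reachable and the keyspace exists,
    // so the Spring Boot container no longer dies while Cassandra is still starting.
    public static Session connectWithRetry(String host, int port, String keyspace) {
        while (true) {
            try {
                Cluster cluster = Cluster.builder()
                        .addContactPoint(host)
                        .withPort(port)
                        .build();
                return cluster.connect(keyspace);
            } catch (Exception e) { // e.g. NoHostAvailableException during Cassandra startup
                System.out.println("Cassandra not ready yet, retrying in 5s: " + e.getMessage());
                try {
                    Thread.sleep(5000);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException(ie);
                }
            }
        }
    }
}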
I have tried all possible combinations and mixes of dependencies and am still not able to record traces in Zipkin and store them in MySQL using RabbitMQ.
I can still see the trace and span IDs in the console, but nothing beyond that.
Could someone please take a look at the code on GitHub at the location below?
Github code: https://github.com/javayp/distributed-tracing-1
You've mixed almost everything you could have mixed. On the app side you're using both the deprecated Zipkin server and the deprecated client. On the server side you're using the deprecated Zipkin server.
My suggestion is that you go through the documentation https://cloud.spring.io/spring-cloud-static/Edgware.SR3/single/spring-cloud.html#_spring_cloud_sleuth and read that the stream servers are deprecated and you should use the openzipkin zipkin server with rabbitmq support (https://github.com/openzipkin/zipkin/tree/master/zipkin-collector/rabbitmq).
On the consumer side use https://cloud.spring.io/spring-cloud-static/Edgware.SR3/single/spring-cloud.html#_sleuth_with_zipkin_via_rabbitmq_or_kafka . It really is as simple as that. Also don't forget to set the sampling percentage to 1.0.
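Roughly, the setup looks like this (hosts and credentials below are placeholders). On the application side, keep spring-cloud-starter-zipkin plus spring-rabbit on the classpath and configure application.properties:
spring.rabbitmq.host=localhost
spring.sleuth.sampler.percentage=1.0
On the server side, run the stock openzipkin server with the RabbitMQ collector and MySQL storage enabled through environment variables, for example:
STORAGE_TYPE=mysql MYSQL_HOST=localhost MYSQL_USER=zipkin MYSQL_PASS=zipkin RABBIT_ADDRESSES=localhost java -jar zipkin.jar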
I am trying to register a Wildfly Swarm REST service with a running Consul agent, but it's not working correctly.
I am able to register a service (I can see it in the Consul UI), but somehow the health checks are not working.
The Swarm server frequently keeps telling me that sending the check failed due to "HTTP 405 Method Not Allowed". I can see similar logs in the Consul console, saying the GET method is not allowed.
I am at a dead end: neither my application nor the Wildfly Swarm example works (same exception). I also configured a CORS filter on both sides just to be sure, but that's not working either.
I am using Wildfly Swarm 2017.10.1 and Consul 1.0.0.
I hope you guys can help.
Best regards
I figured it out myself. Obviously, it wasn't that hard ^^
I checked the version of the Consul client API which is used by my Wildfly Swarm version: it's 0.9.16. I downloaded all Consul versions and checked which ones are compatible. I can verify that all Consul versions up to 0.9.3 are working.
Consul 1.0.0 has some very critical breaking changes, and I really don't understand why they were not implemented as an HTTP API v2, but that's not the point here.
I highly recommend upgrading the Consul client API used by the topology-consul fraction to a newer version like 0.16.5 or 0.17.0.
At the very least, please add a note to the README of the ribbon-consul example about which Consul versions can be used.