Is OpenTracing enabled for Reactive Routes in Quarkus?

I have recently changed my Quarkus application from RESTEasy to Reactive Routes to implement my HTTP endpoints.
My Quarkus app had OpenTracing enabled and it was working fine. After changing the HTTP resource layer I cannot see any traces in Jaeger.
After setting the log level to DEBUG I can see my application is registered in Jaeger, but I don't see any traceId or spanId in the logs, nor any traces in Jaeger:
15:44:36 DEBUG traceId=, spanId=, sampled= [io.qu.ja.ru.JaegerDeploymentRecorder] (main) Registering tracer to GlobalTracer JaegerTracer(version=Java-0.34.3, serviceName=employee, reporter=RemoteReporter(sender=HttpSender(), closeEnqueueTimeout=1000), sampler=ConstSampler(decision=true, tags={sampler.type=const, sampler.param=true}), tags={hostname=employee-8569585469-tg8wg, jaeger.version=Java-0.34.3, ip=10.244.0.21}, zipkinSharedRpcSpan=false, expandExceptionLogs=false, useTraceId128Bit=false)
15:45:03 INFO traceId=, spanId=, sampled= [or.se.po.re.EmployeeResource] (vert.x-eventloop-thread-0) getEmployees
I'm using the latest version of Quarkus, which is 1.9.2.Final.
Is OpenTracing enabled when I'm using Reactive Routes?

Tracing is enabled by default for JAX-RS endpoints only, not for reactive routes at the moment. You can activate tracing by annotating your route with @org.eclipse.microprofile.opentracing.Traced.
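For example, a minimal sketch of a traced route (the route path and handler body are placeholders; it assumes the quarkus-vertx-web and quarkus-smallrye-opentracing extensions are present):

```java
import io.quarkus.vertx.web.Route;
import io.vertx.ext.web.RoutingContext;
import org.eclipse.microprofile.opentracing.Traced;

import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class EmployeeRoutes {

    // @Traced activates OpenTracing span creation for this reactive route
    @Traced
    @Route(path = "/employees")
    void getEmployees(RoutingContext context) {
        context.response().end("[]");
    }
}
```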

Yes, adding @Traced does activate tracing on reactive routes.
Unfortunately, using both RESTEasy Reactive (JAX-RS) endpoints and reactive routes breaks the tracing on the event-loop threads the JAX-RS reactive endpoints execute on.
I only started using Quarkus two days ago, so I don't really know the reason for this behavior (or whether it's expected or a bug), but switching between the two clearly messes up the tracing.
Here is an example to easily reproduce it (a sketch follows the steps):
Create a RESTEasy Reactive endpoint returning an empty Multi
Create a custom reactive route
Set the number of IO threads to 2 (easier to quickly reproduce it)
Run the application, and request the two endpoints alternately
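A minimal reproducer sketch under those assumptions (paths and bodies are placeholders, quarkus.http.io-threads=2 is set in application.properties, and the imports use the pre-2.0 javax namespace):

```java
import io.quarkus.vertx.web.Route;
import io.smallrye.mutiny.Multi;
import io.vertx.ext.web.RoutingContext;

import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("/jaxrs")
public class TracingReproducer {

    // 1. RESTEasy Reactive endpoint returning an empty Multi,
    //    executed on an event-loop thread
    @GET
    public Multi<String> empty() {
        return Multi.createFrom().empty();
    }

    // 2. Custom reactive route served by the same event loops
    @Route(path = "/route")
    void route(RoutingContext context) {
        context.response().end("hello");
    }
}
```

Requesting /jaxrs and /route alternately should then reproduce the corrupted trace_id values in the logs.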
Here is a screenshot that shows the issue.
As you can see, as soon as the JAX-RS resource is hit and executed on one of the two available threads, it "corrupts" that thread, messing up the trace_id reported in the logs (I don't know whether it's the generation or the log reporting that is broken) for subsequent calls to the reactive route.
This does not happen on the JAX-RS resource itself, as you can also see in the screenshot. So it seems to be related to reactive routes only.
Another point here is that JAX-RS Reactive resources are incorrectly reported in Jaeger (with a mention of a missing root span). I'm not sure whether it's related to the issue, but that's another annoying point.
I'm thinking of completely removing the JAX-RS Reactive endpoints and replacing them with plain reactive routes to eliminate this bug.
I would appreciate it if someone with more experience could verify this or tell me what I did wrong :)
EDIT 1: I added a route filter with priority 500 to clear the MDC (see the sketch at the end of this answer), and the bug is still there, so it's definitely not coming from the MDC.
EDIT 2: I opened a bug report on Quarkus
EDIT 3: It seems related to how the two implementations work (thread locals versus context propagation in an actor-based context).
So, unless JAX-RS reactive resources are marked @Blocking (and get executed on a separate thread pool), JAX-RS reactive endpoints and Vert.x reactive routes are incompatible when it comes to tracing (and probably the same goes for MDC-related information, since MDC is also thread-bound).
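For reference, the route filter from EDIT 1 looked roughly like this (a minimal sketch using Quarkus's @RouteFilter and the SLF4J MDC bridge):

```java
import io.quarkus.vertx.web.RouteFilter;
import io.vertx.ext.web.RoutingContext;
import org.slf4j.MDC;

public class MdcClearingFilter {

    // Priority 500 runs this filter before the route handlers
    @RouteFilter(500)
    void clearMdc(RoutingContext context) {
        // Drop any MDC state left on this event-loop thread by a previous request
        MDC.clear();
        context.next();
    }
}
```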

Related

OpenTelemetry: Context propagation using messaging (Artemis)

I wrote some micro-services using Quarkus that communicate via Artemis. Now I want to add OpenTelemetry for tracing purposes.
What I already tried is to call service B from service A using HTTP/REST. Here the trace id from service A is automatically added to the header of the HTTP request and used in service B, so this works fine; in Jaeger I can see the correlation.
But how can this be achieved using Artemis as the messaging system? Do I have to (manually) add the trace id from service A to the message and read it in service B to somehow set up the context (I don't know whether this is possible)? Or is there an automatism like there is for HTTP requests?
I would appreciate any assistance.
I have to mention at this point that I have little experience with tracing so far.
There is no Quarkus or Quarkiverse extension, nor a SmallRye library, that provides integration between Artemis and OpenTelemetry yet.
Also, the OpenTelemetry messaging spec is being worked on at the moment, because the correct way to correlate sent and received messages across services is still under definition at the OTel spec level.
However, I had exactly the same problem as you and did a manual instrumentation that you can use as inspiration: quarkus-observability-demo-activemq
It correlates the sending service as the parent of the receiving end.
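The idea, reduced to a sketch (not the demo's exact code; the helper names are mine, and it assumes the OpenTelemetry API and JMS are on the classpath): write the W3C trace context into message string properties on send, and extract it again on receive.

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.context.Context;
import io.opentelemetry.context.propagation.TextMapGetter;
import io.opentelemetry.context.propagation.TextMapSetter;

import javax.jms.JMSException;
import javax.jms.Message;
import java.util.List;

// Hypothetical helper: carries the current trace context in JMS string properties.
public final class JmsTraceContext {

    // Writes propagation fields (e.g. "traceparent") onto the outgoing message
    private static final TextMapSetter<Message> SETTER = (message, key, value) -> {
        try {
            message.setStringProperty(key, value);
        } catch (JMSException e) {
            throw new IllegalStateException(e);
        }
    };

    // Reads propagation fields back from the incoming message
    private static final TextMapGetter<Message> GETTER = new TextMapGetter<Message>() {
        @Override
        public Iterable<String> keys(Message carrier) {
            return List.of("traceparent", "tracestate");
        }

        @Override
        public String get(Message carrier, String key) {
            try {
                return carrier.getStringProperty(key);
            } catch (JMSException e) {
                return null;
            }
        }
    };

    public static void inject(Message message) {
        GlobalOpenTelemetry.getPropagators().getTextMapPropagator()
                .inject(Context.current(), message, SETTER);
    }

    public static Context extract(Message message) {
        return GlobalOpenTelemetry.getPropagators().getTextMapPropagator()
                .extract(Context.current(), message, GETTER);
    }
}
```

On the receiving side you would make the extracted Context current (e.g. via a try-with-resources Scope) before starting the consumer span; that is what produces the parent/child correlation in Jaeger.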

Registering a dynamic websocket at application initialization time vs. at runtime exposes different endpoints

I am trying to register a websocket dynamically. For instance, when I register '/sampleEndpoint' at runtime, ServerWebSocketContainer registers it and starts publishing data on that endpoint. But if I do the same during application initialization, in @PostConstruct, then I am unable to connect to '/sampleEndpoint' and have to append '/websocket' at the end, so the URL becomes '/sampleEndpoint/websocket' when connecting from the client side. Why do we get different endpoints in different situations?
I have attached the GitHub URL to the code.
https://github.com/pinkeshsagar-harptec/code-sample.git
Well, that's how that SockJS option works: https://docs.spring.io/spring-framework/docs/current/reference/html/web.html#websocket-fallback.
If your client is not a SockJS client, then you have to add that /websocket sub-path, as in the sketch below.
Not sure though why it doesn't work for dynamically registered endpoints...
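To illustrate, a plain (non-SockJS) Java client has to target the websocket transport sub-path explicitly; a sketch using Spring's StandardWebSocketClient (host, port, and endpoint are assumptions):

```java
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.client.standard.StandardWebSocketClient;
import org.springframework.web.socket.handler.TextWebSocketHandler;

public class PlainWebSocketClientExample {

    public static void main(String[] args) throws Exception {
        StandardWebSocketClient client = new StandardWebSocketClient();
        // A SockJS client could connect to ws://localhost:8080/sampleEndpoint,
        // but a raw WebSocket client needs the "/websocket" transport sub-path:
        WebSocketSession session = client
                .doHandshake(new TextWebSocketHandler(),
                        "ws://localhost:8080/sampleEndpoint/websocket")
                .get();
        System.out.println("Connected: " + session.isOpen());
    }
}
```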
In the case of @PostConstruct it is not dynamic: we still do the work within the configuration phase of the application context, so it is able to add our endpoint into a static HandlerMapping. The dynamic nature is applied a bit later, when all the @PostConstruct methods have done their logic. You don't need to start that flow registration manually, though, since the auto-startup phase has not yet passed within @PostConstruct handling.
Re. IntegrationDynamicWebSocketHandlerMapping: it sounds more like a bug and I need to investigate further. I guess you still use that SockJS option there, and it has to be applied to the dynamic endpoint as well.
Thank you for your patience! I'll investigate and fix it ASAP.
UPDATE
The fix is here: https://github.com/spring-projects/spring-integration/pull/3581.

Thread model for Async API implementation using Spring

I am working on a micro-service developed using Spring Boot. I have implemented the following layers:
Controller layer: Invoked when the user sends an API request
Service layer: Processes the request; either sends a request to a third-party service or to the database
Repository layer: Used to interact with the database.
Methods in all of the above layers return CompletableFuture. I have the following questions related to this setup:
Is it good practice to return CompletableFuture from all methods across all layers?
Is it always recommended to use the @Async annotation when using CompletableFuture? What happens when I use the default fork-join pool to process the requests?
How can I configure the threads for the above methods? Would it be a good idea to configure a thread pool per layer? What other configurations can I consider here?
Which metrics should I focus on while optimizing performance for this micro-service?
If the work your application is doing can be done on the request thread without too much latency, I would recommend it. You can always move to an async model if you find that your web server is running out of worker threads.
The @Async annotation basically helps with scheduling. If you can, use it: it keeps the code free of references to the thread pool on which the work will be scheduled. As for what thread actually does your async work, that's really up to you. If you can, use your own pool. That will make sure you can add instrumentation and expose configuration options that you may need once your service is running.
Technically you will have two pools in play: one that Spring uses to consume the result of your future, and another that you use to do the async work. If I recall correctly, Spring Boot will configure its pool if you don't already have one, and will log a warning if you didn't explicitly configure one. As for your worker threads, start simple. Consider using Spring's ThreadPoolTaskExecutor.
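A minimal sketch of that setup (pool sizes, bean name, and service method are placeholders to tune for your workload):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.stereotype.Service;

@Configuration
@EnableAsync
class AsyncConfig {

    // Dedicated, explicitly configured pool for the service layer
    @Bean(name = "serviceExecutor")
    Executor serviceExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(8);
        executor.setMaxPoolSize(32);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("service-");
        executor.initialize();
        return executor;
    }
}

@Service
class EmployeeService {

    // Runs on the "serviceExecutor" pool instead of the default one
    @Async("serviceExecutor")
    public CompletableFuture<String> loadEmployee(long id) {
        return CompletableFuture.completedFuture("employee-" + id);
    }
}
```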
Regarding which metrics to monitor, start by choosing how you will monitor. Using something like Spring Sleuth coupled with Spring Actuator will give you a lot of information out of the box. There are a lot of services that can collect the metrics Actuator generates into time-series databases, which you can then use to analyze performance and get ideas on what to tweak.
One final recommendation: Spring WebFlux is designed from the start to be async. It has a learning curve for sure, since reactive code is very different from the usual MVC stuff. However, that framework also addresses all the questions you are asking, so it might be better suited for your application, especially if you want to make everything async by default.

Spring Boot Zuul server logging

I just created a simple Zuul proxy at the front end of our microservices environment, but now I want to log all requests that go through the proxy to a log file.
Is there any property I need to enable to do this?
I assume an implementation of Zuul as a regular Spring Boot driven microservice with a bunch of Netflix's beans running under the hood.
In this case it can run on Tomcat (for other servlet containers the idea is probably the same, but the technical implementation might be different).
So, for Tomcat:
As a first resort you can take advantage of Tomcat's "access log" feature, which logs all requests anyway and allows some level of customization (what to log). It can be enabled with a few Spring Boot properties, as shown below. The technical difficulty is that the Tomcat access log is not managed by Logback by default, so you'll have to use some kind of adapter.
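For example, a sketch of the relevant Spring Boot properties (the directory and pattern are assumptions to adapt):

```properties
# Enable Tomcat's built-in access log
server.tomcat.accesslog.enabled=true
server.tomcat.accesslog.directory=logs
# Common Log Format plus request processing time in ms (%D)
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D
```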
Here you can find ideas on how to resolve this technically and integrate the access log with Logback.
Another approach would be creating a Filter that extracts the required pieces and logs the request/response/whatever you want to log.
Here is an example of creating a custom filter like this, and a sketch follows below.
Of course, if you also need to log something from the response, you should configure the filter type accordingly (see the Java code example in the link).
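A minimal sketch of such a filter (the filter order and log format are placeholders; it assumes Spring Cloud Netflix Zuul on the classpath):

```java
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

import javax.servlet.http.HttpServletRequest;

// Hypothetical logging filter: logs the method and URI of every proxied request.
@Component
public class AccessLoggingFilter extends ZuulFilter {

    private static final Logger log = LoggerFactory.getLogger(AccessLoggingFilter.class);

    @Override
    public String filterType() {
        return "pre"; // use "post" instead to log response details
    }

    @Override
    public int filterOrder() {
        return 1;
    }

    @Override
    public boolean shouldFilter() {
        return true;
    }

    @Override
    public Object run() {
        HttpServletRequest request = RequestContext.getCurrentContext().getRequest();
        log.info("{} {}", request.getMethod(), request.getRequestURI());
        return null;
    }
}
```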
One tip/caution: think about the performance implications, so that this logging won't slow down processing when the server is under a high load of requests.

Development compromises in using Spring Cloud Stream

The case for event-driven microservice frameworks such as Spring Cloud Stream is their asynchronous nature, which I agree makes them more scalable.
But I have an issue regarding how to code in a way that doesn't lose certain key features I have access to with synchronous services.
In a servlet-based MS, I make full use of servlet context variables and servlet-based Spring autowiring functions.
For example, I rely heavily on HTTP headers to carry metadata between microservices without having to touch the payload. But in Spring Cloud Stream using Kafka, Kafka doesn't support message headers of any kind! I lose that immediately if I use SCS. Putting them into the payload causes all sorts of changes to my model classes if I define the attributes explicitly. Yes, I can use a simple HashMap to simulate the HTTP header object, but it really seems like reinventing the wheel to me.
On the autowiring side: I maintain an audit log record per request, which I implement by declaring a request-scoped HashMap bean and autowiring it into any method in the servlet's call stack that needs to append data to the audit log. Basically it's just a global variable to hold some data within a single request. But in SCS, again, I lose that because bean scopes that rely on servlets are not available.
So far, there seem to be a lot of trade-offs that I have to make just to make Spring Cloud Stream work for me.
I thought about an alternative approach where I use SCS just to create an entry point, but the Source method would just take the event, use a Processor to construct an HTTP request, and send the request along to an HTTP endpoint. But why go through all that trouble then?
Hoping that some more experienced devs would be able to shed some light on how they leverage SCS.
@feicipet Thanks for the detailed question. Let me try to address some of your concerns in the order you listed them:
+1
+1
I am not sure why you are referring to it as servlet-based instead of Spring-based? Those are features provided by Spring, but read on...
Spring Cloud Stream doesn't use Kafka; the end user does, while Spring Cloud Stream provides the Kafka binder that allows Spring Cloud Stream to integrate with Kafka. Furthermore, while Kafka indeed did not support headers prior to version 0.11, Spring Cloud Stream has always supported, and will continue to support, headers even with pre-0.11 Kafka, embedding them in the Message and then extracting them on the consumer side into proper Message headers, completely transparently to the end user. In other words, one could assume Kafka supported headers simply by using Spring Cloud Stream. With Kafka 0.11+, headers are supported natively and we have adjusted to that with the same level of transparency.
So, you don't need to put anything in the payload. Just create an appropriate Message (payload plus headers) and SCSt will take care of the rest regardless of the broker (Kafka, Rabbit, Foo, etc.).
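A sketch of what that looks like with Spring's MessageBuilder (the header names and payload type are placeholders):

```java
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public final class OrderMessages {

    // Metadata rides in headers, not in the payload; the binder maps the
    // headers to the broker's native headers (or embeds them for old Kafka)
    public static Message<String> toMessage(String payload, String correlationId) {
        return MessageBuilder.withPayload(payload)
                .setHeader("x-correlation-id", correlationId)
                .setHeader("x-audit-user", "someUser")
                .build();
    }
}
```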
Yes you do, simply because, as you alluded to earlier, SCSt promotes an asynchronous and stateless architecture. However, I do not agree that what you are trying to accomplish is impossible; it is just not accomplishable the way you are describing, since there are other ways to maintain context, and I would be more than glad to discuss it as a separate topic.
I would not call them trade-offs, but rather a difference in architecture, one that has its benefits but is not one-size-fits-all; its viability should therefore be discussed within the context of a concrete use case.
+1. You don't have to separate it into a Source and a Processor. You can simply create a custom Source app with an exposed REST endpoint and custom processing logic. However, we are currently working on enhancements in the framework to ensure that you can do the same with the existing starter apps.
Obviously we have touched on many points here and some of them would probably need to be debated further, but I hope this clears up some of your concerns.
Cheers
