New to Quarkus and need to create a POC which replicates our current Java servlet filters and client interceptors, which use ThreadLocal maps and AspectJ to propagate and examine data between downstream and upstream requests/responses across multiple async calls.
Where can I find example code in Quarkus which does something similar?
I'm working on a project with tens of services, using Spring Sleuth and Zipkin, but I was wondering if there is any way to conditionally propagate logs to the Zipkin server.
Actually, it would be perfect if the log were propagated only when the distributed transaction failed (as when using a saga pattern). The case is that we have a huge workload (millions of requests per hour) and we are interested only in failed requests.
You can't propagate logs to Zipkin; you can publish spans.
Depending on your needs, you can use a SamplerFunction, a Sampler, or a SpanHandler; see this answer: https://stackoverflow.com/a/69981877/971735
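For illustration, here's a minimal sketch of the SpanHandler route using Brave, which Sleuth is built on (the configuration class and bean name are made up, and it assumes a Brave-based Sleuth version that registers SpanHandler beans automatically). Returning false from end() drops a span before it is reported, so only spans that recorded an error reach Zipkin:

```java
import brave.handler.MutableSpan;
import brave.handler.SpanHandler;
import brave.propagation.TraceContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ErrorOnlySpanConfig {

    @Bean
    SpanHandler errorOnlySpanHandler() {
        return new SpanHandler() {
            @Override
            public boolean end(TraceContext context, MutableSpan span, Cause cause) {
                // Report the span only if it recorded an error;
                // returning false drops it before it reaches Zipkin.
                return span.error() != null || span.tag("error") != null;
            }
        };
    }
}
```

Keep the trade-off in mind: the spans are still recorded (you only save on reporting), and you end up with individual failed spans rather than complete failed traces; a Sampler/SamplerFunction makes its decision up front instead.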
My application is a simple three-tier Spring Boot REST web service with the usual synchronous endpoints.
But since the time to get a response from the downstream system my service sends requests to is quite long (around 60 seconds), I need to add support for asynchronous REST calls to my service to spare upstream systems from waiting on a response. In other words, if a response from the downstream system is going to take more than 60 seconds (the timeout), the upstream system breaks the connection with my service and carries on with its work...
But when the response comes, my service will use the "reply-to" header provided by the upstream system to send the response back to it.
All of the above is a kind of callback or webhook.
But I couldn't find any implementation examples.
How do I implement this mechanism?
Where can I find more information?
Does Spring Boot have something to implement it out of the box?
Thank you for your attention!
You can use the @Async annotation from Spring. You will also need to enable this in your application by adding @EnableAsync.
An important note: the method with the @Async annotation should be in a different class from where it is being called. This lets the Spring proxy intercept the call and actually make it asynchronous.
Please find the official tutorial here.
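As a rough sketch of how this fits your callback scenario (class and method names are made up), the @Async method returns a CompletableFuture; when it completes, your service can POST the result to the URL taken from the "reply-to" header:

```java
import java.util.concurrent.CompletableFuture;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.stereotype.Service;

@Configuration
@EnableAsync
class AsyncConfig { }

// Must be a separate bean from its caller so the Spring proxy
// can intercept the call and run it on a task executor.
@Service
class DownstreamClient {

    @Async
    public CompletableFuture<String> callSlowSystem(String request) {
        // Stand-in for the ~60-second downstream call.
        String response = "response for " + request;
        return CompletableFuture.completedFuture(response);
    }
}
```

The caller can then chain the webhook delivery without blocking a request thread, e.g. callSlowSystem(req).thenAccept(resp -> postTo(replyToUrl, resp)), where postTo is your own HTTP client call.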
How does Spring Cloud Sleuth work behind the scenes? Will it be able to trace library calls as well? I have a Spring Boot project A which uses library B. Any hit to endpoints in project A is traced by Sleuth, but when I try to hit a REST controller in library B, Sleuth fails to trace the request. Is there a way to tell Sleuth to trace library calls too?
Spring Cloud Sleuth up to version 2.0 had its own tracer; since 2.0 it reuses https://github.com/openzipkin/brave. A tracer is responsible for passing the tracing context along. What does that mean? It means that you have to propagate the tracing information:
what the current span is (a span is a unit of work)
what the current trace is (a trace is an id shared by all the units of work forming a single business operation)
With the tracing information propagated in-process (between different libraries, threads, etc.) and out of process (via HTTP headers, messaging, etc.), Sleuth is able to process this information and store it locally in order to continue the trace.
You need to use the SpanInjector and SpanExtractor (prior to Sleuth 2.0) or Brave's handler mechanisms to properly handle receiving and sending of spans.
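To make the in-process side concrete, here is a small sketch using Brave directly (in a real application Sleuth wires all of this up for you; the class is just for illustration). Wrapping the Runnable is what carries the trace across a thread hop:

```java
import brave.Span;
import brave.Tracing;
import brave.propagation.CurrentTraceContext;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class InProcessPropagation {

    public static void main(String[] args) throws Exception {
        Tracing tracing = Tracing.newBuilder().build();
        CurrentTraceContext ctc = tracing.currentTraceContext();
        ExecutorService pool = Executors.newSingleThreadExecutor();

        Span span = tracing.tracer().newTrace().name("business-op").start();
        try (CurrentTraceContext.Scope scope = ctc.newScope(span.context())) {
            // wrap() captures the current trace context and restores it on
            // the worker thread, so the trace survives the thread hop.
            pool.submit(ctc.wrap(() ->
                    System.out.println("same trace: "
                            + tracing.tracer().currentSpan().context().traceIdString())))
                .get();
        } finally {
            span.finish();
            pool.shutdown();
            tracing.close();
        }
    }
}
```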
The case for event-driven microservices such as Spring Cloud Stream is their asynchronous nature, which I do agree makes them more scalable.
But I have an issue regarding how to code them in a way that doesn't lose certain key features I have access to in synchronous services.
In a servlet-based microservice, I make full use of servlet context variables and servlet-based Spring autowiring functions.
For example, I lean heavily on HTTP headers to carry metadata between microservices without having to touch the payload. But in Spring Cloud Stream using Kafka, Kafka doesn't support message headers of any kind! I lose that immediately if I use SCS. Putting the metadata into the payload causes all sorts of changes in my model classes if I define the attributes explicitly. Yes, I can use a simple HashMap to simulate the HTTP header object, but that really seems like reinventing the wheel to me.
On the autowiring side: I maintain an audit log record per request, which I implement by declaring a request-scoped HashMap bean and autowiring it into any method in the servlet's call stack that needs to append data to the audit log. Basically it's just a global variable to hold some data within a single request. But in SCS, again, I lose that because bean scopes that rely on servlets are not available.
So far, there seem to be a lot of trade-offs I have to make just to get Spring Cloud Stream to work for me.
I thought about an alternative approach where I use SCS just to create an entry point, but the Source method would just take the event, use a Processor to construct an HTTP request, and send the request along to an HTTP endpoint. But why go through all that trouble then?
Hoping that some more experienced devs can shed some light on how they leverage SCS.
@feicipet Thanks for the detailed question. Let me try to address some of your concerns in the order you listed them:
+1
+1
I am not sure why you are referring to these as servlet-based instead of Spring-based. Those are features provided by Spring, but read on...
Spring Cloud Stream doesn't use Kafka; the end user does, while Spring Cloud Stream provides a Kafka binder that allows Spring Cloud Stream to integrate with Kafka. Furthermore, while Kafka indeed did not support headers prior to version 0.11, Spring Cloud Stream has always supported, and will continue to support, headers even with pre-0.11 Kafka, embedding them in the message and extracting them on the consumer side into the proper Message headers, completely transparently to the end user. In other words, simply by using Spring Cloud Stream one would assume that Kafka did support headers. With Kafka 0.11+, headers are supported natively and we have adjusted to that with the same level of transparency.
So, you don't need to put anything in the payload. Just create an appropriate Message<payload, headers> and SCSt will take care of the rest regardless of the broker (Kafka, Rabbit, Foo, etc.).
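For example (the header names here are invented), a producer can attach metadata with Spring's MessageBuilder and it will travel with the record regardless of broker:

```java
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public class HeaderCarryingExample {

    // Headers ride alongside the payload; the Kafka binder embeds them in
    // the record (pre-0.11) or maps them to native Kafka headers (0.11+).
    public static Message<String> withMetadata(String payload, String correlationId) {
        return MessageBuilder.withPayload(payload)
                .setHeader("x-correlation-id", correlationId)
                .setHeader("x-audit-user", "someUser")
                .build();
    }
}
```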
Yes, you do, simply because, as you alluded to earlier, SCSt promotes an asynchronous and stateless architecture. However, I do not agree that what you are trying to accomplish is un-accomplishable. Rather, it is not accomplishable the way you are describing, but there are other ways to maintain context, and I would be more than glad to discuss it as a separate topic.
I would not call them trade-offs, rather differences in architecture that have their benefits; but it is not a one-size-fits-all architecture, and therefore its viability should be discussed within the context of a concrete use case.
+1. You don't have to separate it into a Source and a Processor. You can simply create a custom Source app with an exposed REST endpoint and custom processing logic, as sketched below. However, we are currently working on enhancements in the framework to ensure that you can do the same with the existing starter apps.
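A minimal sketch of such a custom Source app, using the annotation-based @EnableBinding model (the endpoint path and processing logic are made up):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// The REST endpoint applies any custom processing, then publishes
// the result to the bound output destination.
@EnableBinding(Source.class)
@RestController
public class IngestController {

    @Autowired
    private Source source;

    @PostMapping("/events")
    public void ingest(@RequestBody String event) {
        String processed = event.trim(); // stand-in for custom processing logic
        source.output().send(MessageBuilder.withPayload(processed).build());
    }
}
```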
Obviously we have touched on many points here and some of them would probably need to be debated further, but I hope this clears up some of your concerns.
Cheers
I was experimenting with the reactive web framework. I have certain questions regarding how it will work.
In a typical application, we have a datastore (relational or NoSQL),
an application layer (controllers) to connect to the store and get the data, and
a client layer (which calls your API endpoints) to get the data.
To the best of my knowledge, there are no async or reactive drivers published by vendors (only Mongo and maybe Cassandra have reactive drivers).
The controller layer will beam back the data using Mono, Flux, or Single.
The client layer will be consuming this data.
Since HTTP is synchronous in nature, how will the client layer or the application benefit from the reactive support in Spring?
Question: Let's say I have 10 records in JSON coming from my Flux response. Does that mean my client will get the data as a stream, or will the entire data set be fetched first on the client side, with only the process of consuming it being reactive in nature? Currently, we have an InputStream as the response of a service call, which is blocking in nature due to the design of the HTTP protocol.
Question: Does it then make sense to have a reactive architecture for a typical web application, when the very medium over which we are going to get the response is blocking in nature?
Spring Web Reactive makes use of Servlet 3.1 non-blocking I/O and runs on Servlet 3.1 containers. It also runs on non-Servlet runtimes such as Netty and Undertow. Each runtime is adapted to a set of shared, reactive ServerHttpRequest and ServerHttpResponse abstractions that expose the request and response body as Flux with full backpressure support on the read and the write side.
Source:
https://docs.spring.io/spring-framework/docs/5.0.0.M1/spring-framework-reference/html/web-reactive.html
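To make that concrete, here is a minimal sketch of an annotated controller on WebFlux (the endpoint path and payload are made up). Elements are written to the response as they are emitted, with backpressure applied on the write side, rather than being buffered into one body:

```java
import java.time.Duration;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class RecordController {

    // Streams one JSON object per element; the client can start
    // consuming before the tenth element has even been produced.
    @GetMapping(value = "/records", produces = MediaType.APPLICATION_STREAM_JSON_VALUE)
    public Flux<String> records() {
        return Flux.interval(Duration.ofMillis(100))
                .map(i -> "{\"record\":" + i + "}")
                .take(10);
    }
}
```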
Datastore vendors and OSS communities are working on that. There's already support for Cassandra, Couchbase, MongoDB and Redis in Spring Data Kay.
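As a sketch of what that support looks like (the entity and repository here are hypothetical), Spring Data's reactive repositories return Flux/Mono instead of List/Optional, so nothing blocks between the driver and your controller:

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import reactor.core.publisher.Flux;

class Person {
    @Id String id;
    String lastName;
}

// Spring Data derives a non-blocking implementation on top of the
// store's reactive driver (e.g. the MongoDB Reactive Streams driver).
interface PersonRepository extends ReactiveCrudRepository<Person, String> {
    Flux<Person> findByLastName(String lastName);
}
```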
I think you're conflating the HTTP protocol itself with blocking Java APIs. You're not getting the full HTTP request or response in one big block, so the HTTP protocol is not synchronous. The underlying networking library you choose also drives the choice between blocking and non-blocking I/O.
Now about your HTTP client question: if you're using WebClient, the returned Flux will emit elements as soon as they're available. The underlying libraries are reading and decoding messages as soon as possible, while still respecting backpressure.
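For example, a minimal sketch consuming a streaming endpoint with WebClient (the base URL and path are assumptions, matching the controller sketch above):

```java
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

public class ClientExample {

    public static void main(String[] args) {
        Flux<String> records = WebClient.create("http://localhost:8080")
                .get()
                .uri("/records")
                .retrieve()
                .bodyToFlux(String.class);

        // Each record is printed as soon as it is decoded off the wire,
        // not after the whole response body has arrived.
        records.doOnNext(System.out::println).blockLast();
    }
}
```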
I'm not sure I get your last question, but if you're wondering when and why you should use a reactive approach: it has benefits if you're already running into scalability/efficiency issues, or if your application communicates with many external services and is therefore sensitive to latency. See more about that in the Spring Framework 5.0 FAQ.