I've used Spring WebFlux since it was the suggested HTTP client. Now I'm required to save the request and response (JSON) in the DB so I can track the history of service usage, and I don't want to block the response and hurt performance.
Some people have written about Actuator, and others suggest using a filter, but so far I have not found a straight answer. Is it a good idea to do something like this at all? And in my case, what would be the best way to do it while keeping the system non-blocking?
Any recommendations?
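What I have in mind is roughly a filter like the following. This is only a sketch, assuming Spring Framework 5.3+ (for ClientResponse.mutate()); AuditRepository and AuditRecord are hypothetical placeholders for whatever persistence layer is in place:

import org.springframework.web.reactive.function.client.ExchangeFilterFunction;
import org.springframework.web.reactive.function.client.WebClient;

ExchangeFilterFunction audit = (request, next) ->
        next.exchange(request)
            .flatMap(response -> response.bodyToMono(String.class)
                .defaultIfEmpty("")
                .map(body -> {
                    // fire-and-forget: the save runs on its own subscription
                    // and never delays the caller's response
                    auditRepository
                        .save(new AuditRecord(request.url().toString(), body))
                        .subscribe();
                    // re-wrap the consumed body so downstream code can still read it
                    return response.mutate().body(body).build();
                }));

WebClient client = WebClient.builder().filter(audit).build();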
Background: I am using Spring Boot with embedded Jetty. My app calls a bunch of REST APIs, and for calling them I use Spring RestTemplate.
Question: Is Spring RestTemplate any good at high concurrency? Searching the web suggests moving to reactive, but there are still apps written in a blocking way that need to stay that way. So what alternatives are there, and what can be done to make RestTemplate more responsive under heavy load? PoolingHttpClientConnectionManager improves things a bit, but it's essentially still not on par with what is required.
There are suggestions to move to RESTEasy and other HTTP clients, but with no solid reasoning behind them. At the end of the day, they all make a pool of connections and essentially work the same. Please note, reactive is not an option yet; this question is specifically about traditional blocking REST calls. Any suggestions on optimizing connection pooling, or on using RestTemplate correctly, would be of great help.
RestTemplate does not do the actual REST call by itself; it's just a "wrapper", a convenient API.
When it comes to connection pooling, by default it doesn't use any pooling at all and just opens the plain URL connections available in the JDK. No third parties are required, but the performance is not that good.
You can configure RestTemplate to use, say, the OkHttp client under the hood. See here for the different ways to work with different clients. The interesting part is that it's possible to configure connection pools there and achieve better performance.
So you should really check what exactly the expected performance is and configure the connection pool accordingly.
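For reference, a minimal sketch of wiring a pooled Apache HttpClient (4.x) under RestTemplate; the pool sizes are illustrative and should be tuned against your measured load:

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

PoolingHttpClientConnectionManager pool = new PoolingHttpClientConnectionManager();
pool.setMaxTotal(200);            // total connections across all routes
pool.setDefaultMaxPerRoute(50);   // connections per target host

CloseableHttpClient httpClient = HttpClients.custom()
        .setConnectionManager(pool)
        .build();

RestTemplate restTemplate = new RestTemplate(
        new HttpComponentsClientHttpRequestFactory(httpClient));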
Now one more thing about the reactive stuff: it won't give you a performance gain, but it will let you serve many concurrent requests better by reusing resources more efficiently. If you measure how long one single request takes, it is not expected to be any faster.
In other words, you should consider the transition to the reactive stack if the application has more concurrent requests than it can serve, not if you want to process each individual request faster.
Spring RestTemplate is used to write application-level code. It obtains the HTTP connection from a ClientHttpRequestFactory implementation, which is what glues a low-level HTTP client library to Spring, e.g. HttpComponentsClientHttpRequestFactory for the Apache HTTP client.
Bottom line: in most cases, when you are tuning outgoing requests to external APIs, you have to tune the underlying low-level HTTP client library, not RestTemplate.
You are confusing a lot of concepts in your question. Try to understand what reactive programming, HTTP, HTTP pipelining, and TCP/IP are before you start tuning anything. Otherwise you won't find where your code's bottleneck is, and you will end up tuning the wrong part of the software stack.
This is an old question that others have asked before, e.g. how to log Spring 5 WebClient call, or Spring Boot - How to log all requests and responses with exceptions in single place?
The solutions there have limitations: the WebClient solution can't log the body, and I'm not sure the log level can be changed dynamically without a restart.
As consumers of Spring WebClient, since WebClient wraps the low-level things for us (the HTTP client, the servlet machinery, and so on), I think it's natural for WebClient to give us this additional information, because it is so often needed, especially when debugging and troubleshooting. Printing the entity bean is sometimes not exactly the HTTP message that was sent or received, and sometimes we also need to see the headers. All of this could be logged behind a dynamically configurable log level, so it would be better if WebClient supplied an interface for consumers to get this information conveniently.
Thanks.
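For what it's worth, the closest workaround I know of today is Reactor Netty's wiretap, which does log headers and bodies, and whose level can be changed at runtime through the logging backend. A sketch, assuming the default Reactor Netty connector and reactor-netty 1.0+:

import io.netty.handler.logging.LogLevel;
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;
import reactor.netty.transport.logging.AdvancedByteBufFormat;

HttpClient nettyClient = HttpClient.create()
        // logs requests and responses, bodies included, as readable text
        // whenever this logger category is at DEBUG
        .wiretap("reactor.netty.http.client.HttpClient",
                 LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL);

WebClient client = WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(nettyClient))
        .build();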
I have some microservices which run on top of the WebFlux framework. Each service has its own API returning Mono or Flux. We are using MongoDB, which is supported by Spring (Spring Data MongoDB Reactive).
The problem is an external blocking API which I have to use in my system.
I have one solution: I can wrap the blocking API calls in a dedicated thread pool and use it with CompletableFuture.
Is there anything else that would solve my problem? I don't think the brand-new RSocket can.
1. If possible, you can change your blocking API calls to the reactive style using the WebClient class.
References:
Reference guide
WebClient API
A simple, complete sample
2. If the blocking API can't be changed to a reactive one, you should have a dedicated, well-tuned thread pool and isolate the blocking code there.
There is also an example here.
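For illustration, a minimal sketch of option 2 using Reactor's schedulers (WebFlux already ships with Reactor); blockingClient, request, and Result are hypothetical stand-ins for the external blocking API, and the pool sizes are illustrative:

import reactor.core.publisher.Mono;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

// a dedicated pool so blocked threads never starve the event loop
Scheduler blockingPool = Schedulers.newBoundedElastic(10, 100, "blocking-api");

Mono<Result> result = Mono
        .fromCallable(() -> blockingClient.call(request)) // the blocking call
        .subscribeOn(blockingPool);                       // isolated on the pool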
I don't see why you cannot wrap a blocking API call in a Flux or a Mono. You can also integrate Akka with Spring if the actor model seems easier to you.
RSocket should be a perfect fit; here are good tutorials to get you started:
https://www.baeldung.com/spring-boot-rsocket
https://spring.io/blog/2020/04/06/getting-started-with-rsocket-spring-boot-channels
I have implemented some routes in JBoss Fuse which are exposed as REST web services, and I want to add caching to them. Let's say that if a request comes in for the same username, for the same resource, within a specific time span, the cached response is returned. Doing some research, I learned about the Camel cache component. I tried to read up on it to check whether it would get this done, but found nothing I could base a decision on.
Can anyone suggest an approach for caching responses based on the request, or confirm whether the Camel cache component can be used for this? If so, please suggest a starter tutorial.
You could use Camel EhCache. There's a "getting started" in the docs, but you may also take a look at the unit tests of the component here.
That way you'll get a more detailed picture of how to use it. For example, the cache manager can be built directly with the EhCache API:
import org.ehcache.CacheManager;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.EntryUnit;
import org.ehcache.config.units.MemoryUnit;

// Build and initialize (true) a manager with one cache, "myCache",
// keyed and valued by String, holding up to 100 on-heap entries
// plus 1 MB off-heap.
CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
        .withCache("myCache",
                CacheConfigurationBuilder.newCacheConfigurationBuilder(
                        String.class, String.class,
                        ResourcePoolsBuilder.newResourcePoolsBuilder()
                                .heap(100, EntryUnit.ENTRIES)
                                .offheap(1, MemoryUnit.MB)))
        .build(true);
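And, as a hedged sketch of how the cache could then be used from a route with the camel-ehcache component, based on my reading of the component docs; the header constants come from the component, while the route names, the "username" key, and direct:callBackend are illustrative:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.ehcache.EhcacheConstants;

public class CachedUserRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:getUser")
            // try the cache first, keyed by the requesting username
            .setHeader(EhcacheConstants.ACTION, constant(EhcacheConstants.ACTION_GET))
            .setHeader(EhcacheConstants.KEY, header("username"))
            .to("ehcache:myCache?cacheManager=#cacheManager")
            .choice()
                .when(header(EhcacheConstants.ACTION_HAS_RESULT).isEqualTo(false))
                    // cache miss: call the real service, then store the response
                    .to("direct:callBackend")
                    .setHeader(EhcacheConstants.ACTION, constant(EhcacheConstants.ACTION_PUT))
                    .to("ehcache:myCache?cacheManager=#cacheManager")
            .end();
    }
}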
Cheers!
The case for event-driven microservices such as Spring Cloud Stream is their asynchronous nature, which I agree makes them more scalable.
But I have an issue with how to code them without losing certain key features that I have access to in synchronous services.
In a servlet-based microservice, I make full use of servlet context variables and servlet-based Spring autowiring functions.
For example, I lean heavily on HTTP headers to carry metadata between microservices without having to touch the payload. But in Spring Cloud Stream using Kafka, Kafka doesn't support message headers of any kind! I lose that immediately if I use SCS. Putting the metadata into the payload causes all sorts of changes in my model classes if I define the attributes explicitly. Yes, I can use a plain HashMap to simulate the HTTP header object, but that really seems like reinventing the wheel to me.
On the autowiring side: I maintain one audit log record per request, which I implement by declaring a request-scoped HashMap bean (roughly the sketch below) and autowiring it into any method in the servlet's call stack that needs to append data to the audit log. Basically it's just a global variable that holds some data within a single request. But in SCS, again, I lose that, because bean scopes that rely on servlets are not available.
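For concreteness, the request-scoped bean I mean is roughly this (names are illustrative):

import java.util.HashMap;
import java.util.Map;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.context.annotation.RequestScope;

@Configuration
public class AuditConfig {

    // one instance per HTTP request; any bean in the call stack can
    // autowire it and append audit entries for that request
    @Bean
    @RequestScope
    public Map<String, Object> auditLog() {
        return new HashMap<>();
    }
}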
So far, there seem to be a lot of trade-offs I have to make just to get Spring Cloud Stream to work for me.
I thought about an alternative approach where I use SCS just to create an entry point, but the Source method would just take the event, use a Processor to construct an HTTP request, and send the request along to an HTTP endpoint. But why go through all that trouble then?
Hoping that some more experienced devs can shed some light on how they leverage SCS.
#feicipet Thanks for the detailed question. Let me try to address some of your concerns in the order you listed them:
+1
+1
I am not sure why you are referring to it as servlet-based instead of Spring-based? Those are features provided by Spring, but read on...
Spring Cloud Stream doesn't use Kafka; the end user does, while Spring Cloud Stream provides a Kafka binder that lets Spring Cloud Stream integrate with Kafka. Furthermore, while Kafka indeed did not support headers prior to version 0.11, Spring Cloud Stream has always supported headers and will continue to do so, even with pre-0.11 Kafka, by embedding them in the message and then extracting them on the consumer side into proper Message headers, completely transparently to the end user. In other words, simply by using Spring Cloud Stream one could assume that Kafka supported headers. With Kafka 0.11+, headers are supported natively, and we have adjusted to that with the same level of transparency.
So, you don't need to put anything in the payload. Just create an appropriate Message<payload, headers> and SCSt will take care of the rest, regardless of the broker (Kafka, Rabbit, Foo, etc.).
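For example, a minimal sketch using the Spring Messaging API; the header names, the Payment type, and the source/payment/correlationId/tenantId variables are illustrative:

import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

Message<Payment> message = MessageBuilder
        .withPayload(payment)
        .setHeader("x-correlation-id", correlationId) // metadata stays out of the payload
        .setHeader("x-tenant", tenantId)
        .build();

source.output().send(message); // the binder maps headers to the broker transparently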
Yes, you do, simply because, as you alluded to earlier, SCSt promotes an asynchronous and stateless architecture. However, I do not agree that what you are trying to accomplish is un-accomplishable; rather, it is not accomplishable the way you are describing, but there are other ways to maintain context, and I would be more than glad to discuss it as a separate topic.
I would not call them trade-offs, rather a difference in architecture, one which has its benefits, but it is not a one-size-fits-all architecture, and therefore its viability should be discussed within the context of a concrete use case.
+1. You don't have to separate it into a Source and a Processor. You can simply create a custom Source app with an exposed REST endpoint and custom processing logic (see the sketch below). However, we are currently working on enhancements in the framework to ensure that you can do the same with the existing starter apps.
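To illustrate, a sketch of such a custom Source app using the annotation-based binding model; the class, endpoint, and payload type are illustrative:

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.http.ResponseEntity;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@EnableBinding(Source.class)
@RestController
public class IngestController {

    private final Source source;

    public IngestController(Source source) {
        this.source = source;
    }

    @PostMapping("/events")
    public ResponseEntity<Void> ingest(@RequestBody String event) {
        // custom processing logic would go here before publishing
        source.output().send(MessageBuilder.withPayload(event).build());
        return ResponseEntity.accepted().build();
    }
}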
Obviously we have touched on many points here and some of them would probably need to be debated further, but I hope this clears up some of your concerns.
Cheers