OkHttp interceptor chain created multiple times while processing interceptors

I'm learning the source code of OkHttp3, and I'm a little confused about why OkHttp creates multiple chains (RealInterceptorChain), one per interceptor. It looks like only one chain is needed to manage the interceptor list.

Each chain instance behaves differently when proceed() is called. For example, the 3rd chain is the only one that calls the 4th chain.
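To see why, here is a minimal sketch of the pattern (illustrative names, not OkHttp's real classes): each chain object is immutable and carries an index into the interceptor list, and proceed() builds a new chain that points at the next interceptor.

// Simplified chain-of-responsibility sketch in the spirit of RealInterceptorChain.
// SimpleChain and Interceptor are illustrative, not OkHttp's actual types.
import java.util.List;

interface Interceptor {
    String intercept(SimpleChain chain);
}

class SimpleChain {
    private final List<Interceptor> interceptors;
    private final int index;      // which interceptor this chain instance points at
    private final String request; // stands in for the per-step request state

    SimpleChain(List<Interceptor> interceptors, int index, String request) {
        this.interceptors = interceptors;
        this.index = index;
        this.request = request;
    }

    String request() {
        return request;
    }

    String proceed(String request) {
        // Each call creates a *new* immutable chain for the next interceptor;
        // the last interceptor in the list must produce a result without calling proceed().
        SimpleChain next = new SimpleChain(interceptors, index + 1, request);
        return interceptors.get(index).intercept(next);
    }
}

Because every step gets its own immutable chain (and, in the real RealInterceptorChain, its own copy of the request and per-call state), an interceptor such as RetryAndFollowUpInterceptor can call proceed() more than once for retries and redirects without disturbing the position of the other interceptors, which would be awkward with a single shared, mutable chain.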

Related

Spring-retry internal implementation

I was going through spring-retry framework tutorial : https://dzone.com/articles/how-to-use-spring-retry
But I wanted to know how it works internally. I want to use it for one of my API calls, but before doing that I wanted to know a few things about the internal implementation, which I had no luck finding out:
Does spring-retry save the message in some messaging queue before retrying it after some time?
Does it save it in some object in memory?
Does it use the same thread pool, or is a different one used?
No, it simply inserts an interceptor between the caller and called code.
No; the arguments to the method call are simply stack variables.
The called code is called directly on the calling thread - it is just an interceptor.
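To make that concrete, here is a hedged sketch using Spring Retry's RetryTemplate directly (@Retryable builds an equivalent interceptor around your method); MyApiClient, the attempt count, and the back-off period are made up for illustration.

// Sketch only: the service being called (MyApiClient) is a placeholder.
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

public class RetryExample {

    public String callWithRetry(MyApiClient myApiClient, String input) {
        RetryTemplate template = new RetryTemplate();

        SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
        retryPolicy.setMaxAttempts(3);                 // give up after 3 attempts
        template.setRetryPolicy(retryPolicy);

        FixedBackOffPolicy backOff = new FixedBackOffPolicy();
        backOff.setBackOffPeriod(2000);                // wait 2 seconds between attempts
        template.setBackOffPolicy(backOff);

        // No queue and no extra thread pool: execute() simply re-invokes the
        // callback on the *calling* thread; 'input' stays a plain local variable.
        return template.execute(context -> myApiClient.call(input));
    }

    interface MyApiClient {
        String call(String input);
    }
}

Note that execute() re-invokes the callback on the calling thread and the arguments stay ordinary local variables; nothing is queued or handed off to another pool, which is exactly what the answers above describe.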

Is the design correct using the WebFlux framework?

I have written a microservice (using WebFlux) which in turn calls three other microservices (not using WebFlux). The new microservice calls the other three using flatMap, and the controller returns a Mono. Is this a correct design? Do I need publishOn?
Mono<String> result = service1.api(input)
        .flatMap(r1 -> service2.api(r1))
        .flatMap(r2 -> service3.api(r2));
And the controller is also returning a Mono. Is the design correct? Will it work in a non-blocking way?
The publishOn method does not change code either into non-blocking or blocking code. All it does is force downstream operators to run in a different thread. However, if the service calls made within those threads are blocking, then you'll just be blocking another thread if you use publishOn.
So, to answer "Will it work in a non-blocking way?": it actually depends on how you implemented service1.api(), service2.api() and service3.api(). If they synchronously fetch your data, then it will still be blocking, no matter what you do.
However, if you use the new WebClient API for example to reactively fetch your data from your three microservices properly, then yes, it should be non-blocking.
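For illustration, a hedged sketch of what a non-blocking version of those three calls might look like with WebClient; the URLs, HTTP method, and String payloads are assumptions, not taken from the question.

import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class CompositionExample {

    private final WebClient webClient = WebClient.create();

    // Each call returns a Mono immediately; nothing executes until the framework
    // subscribes to the Mono returned by the controller.
    public Mono<String> callAll(String input) {
        return callService("http://service1/api", input)
                .flatMap(r1 -> callService("http://service2/api", r1))
                .flatMap(r2 -> callService("http://service3/api", r2));
    }

    private Mono<String> callService(String url, String body) {
        return webClient.post()
                .uri(url)
                .bodyValue(body)
                .retrieve()
                .bodyToMono(String.class);
    }
}

If the controller just returns this Mono, WebFlux subscribes to it for you, and no publishOn is needed for correctness.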

Cannot use 'subscribe' or 'subscribeWith' with 'ReactorNettyWebSocketClient' in Kotlin

The Kotlin code below successfully connects to a Spring WebFlux server, sends a message and prints each message sent via the stream that is returned.
import java.net.URI
import org.springframework.web.reactive.socket.WebSocketMessage
import org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient
import reactor.core.publisher.Flux

fun main(args: Array<String>) {
    val uri = URI("ws://localhost:8080/myservice")
    val client = ReactorNettyWebSocketClient()
    val input = Flux.just(readMsg()) // readMsg() is the author's own helper, not shown
    client.execute(uri) { session ->
        session.send(input.map(session::textMessage))
            .thenMany(
                session.receive()
                    .map(WebSocketMessage::getPayloadAsText)
                    .doOnNext(::println) // want to replace this call
                    .then()
            ).then()
    }.block()
}
In previous experience with Reactive programming I have always used subscribe or subscribeWith where the call to doOnNext occurs. However it will not work in this case. I understand that this is because neither returns the reactive stream in use - subscribe returns a Disposable and subscribeWith returns the Subscriber it received as a parameter.
My question is whether invoking doOnNext is really the correct way to add a handler to process incoming messages?
Most Spring 5 tutorials show code which either calls this or log(), but some use subscribeWith(output).then() without specifying what output should be. I cannot see how the latter would even compile.
subscribe and subscribeWith should always be used right at the end of a chain of operators, not as intermediate operators.
Simon already provided the answer but I'll add some extra context.
When composing asynchronous logic with Reactor (and ReactiveX patterns) you build an end-to-end chain of processing steps, which includes not only the logic of the WebSocketHandler itself but also that of the underlying WebSocket framework code responsible for sending and receiving messages to and from the socket. It's very important for the entire chain to be connected together, so that at runtime "signals" will flow through it (onNext, onError, or onComplete) from start to end and communicate the final result, i.e. where you have the .block() at the end.
In the case of WebSocket this looks a little daunting because you're essentially combining two or more streams into one. You can't just subscribe to one of those streams (e.g. for inbound messages) because that prevents composing a unified processing stream, and signals will not flow through to the end where the final outcome is expected.
The other side of this is that subscribe() triggers consumption on a stream whereas what you really want is to keep composing asynchronous logic in deferred mode, i.e. declaring all that will happen when data materializes. This is another reason why composing a single unified chain is important. So it can be triggered after it is fully declared.
In short, the main difference from the imperative WebSocketHandler of the Servlet world is that instead of being a handler for individual messages, this is a handler for composing complete streams. Here the handling of an individual message is just one step of the overall processing chain. So the only place to subscribe is at the very end, where .block() is, in order to kick off processing.
BTW since this question was first posted a few months ago, the documentation has been improved to provide more guidance on how to implement a WebSocketHandler.
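To make the "single composed chain" point concrete, here is a hedged Java sketch of a server-side WebSocketHandler in the same spirit (the outbound message and the println are illustrative): send and receive are combined into one Mono<Void>, there is no subscribe() anywhere, and the framework subscribes to whatever handle() returns.

import org.springframework.web.reactive.socket.WebSocketHandler;
import org.springframework.web.reactive.socket.WebSocketMessage;
import org.springframework.web.reactive.socket.WebSocketSession;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class EchoLoggingHandler implements WebSocketHandler {

    @Override
    public Mono<Void> handle(WebSocketSession session) {
        Mono<Void> send = session.send(
                Flux.just("hello").map(session::textMessage));

        Mono<Void> receive = session.receive()
                .map(WebSocketMessage::getPayloadAsText)
                .doOnNext(text -> System.out.println("received: " + text)) // per-message handling step
                .then();

        // Compose both directions into a single deferred chain; no subscribe() here.
        return Mono.zip(send, receive).then();
    }
}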

Why does Spring Cloud Stream's @InboundChannelAdapter accept no parameters?

I'm trying to use Spring Cloud Stream to send and receive messages on Kafka. The examples for this use a simple case of sending time stamps as the messages. I'm trying to go just one step further into a real-world application when I ran into this blocker in the @InboundChannelAdapter docs:
"A method annotated with @InboundChannelAdapter can't accept any parameters"
I was trying to use it like so:
@InboundChannelAdapter(value = ChannelManager.OUTPUT)
public EventCreated createCustomerEvent(String customerId, String thingId) {
    return new EventCreated(customerId, thingId);
}
What usage am I missing? I imagine that when you want to create an event, you have some data that you want to use for that event, so you would normally pass that data in via parameters. But "a method annotated with @InboundChannelAdapter can't accept any parameters". So how are you supposed to use this?
I understand that @InboundChannelAdapter comes from Spring Integration, which spring-cloud-stream extends, and so Spring Integration may have a different context in which this makes sense. But it seems unintuitive to me (as does using an InboundChannelAdapter for an output/producer/source).
Well, first of all, @InboundChannelAdapter is defined in Spring Integration, and Spring Cloud Stream doesn't extend it. That's false; not sure where you picked up that info...
This annotation builds something like a SourcePollingChannelAdapter, which sets up a poller based on the scheduler and periodically calls MessageSource.receive(). Since there is no calling context and the end user can't affect the poller's behavior with their own arguments, the requirement for an empty parameter list is obvious.
The @InboundChannelAdapter is the beginning of the flow, and it is active: it does its work in the background, without any events from you.
If you would like to call some method with parameters and trigger a flow with it, you should consider using @MessagingGateway: http://docs.spring.io/spring-integration/reference/html/messaging-endpoints-chapter.html#messaging-gateway-annotation
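For comparison, a hedged sketch of the two styles (ChannelManager and EventCreated mirror the question's types, the poller interval and the gateway interface are made up, and wiring such as @EnableIntegration and component scanning is omitted): the poller-driven adapter takes no arguments, while a gateway lets your own code pass parameters.

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.MessagingGateway;
import org.springframework.integration.annotation.Poller;
import org.springframework.stereotype.Component;

// Stand-ins for the types used in the question.
class ChannelManager {
    static final String OUTPUT = "output";
}

class EventCreated {
    final String customerId;
    final String thingId;
    EventCreated(String customerId, String thingId) {
        this.customerId = customerId;
        this.thingId = thingId;
    }
}

@Component
class EventSource {
    // Poller-driven: the framework calls this every 5 seconds; there is no
    // caller who could supply arguments, hence the no-parameter requirement.
    @InboundChannelAdapter(value = ChannelManager.OUTPUT, poller = @Poller(fixedDelay = "5000"))
    public EventCreated heartbeat() {
        return new EventCreated("system", "heartbeat");
    }
}

// Caller-driven alternative: your business code invokes this method with arguments,
// and Spring Integration publishes the payload to the channel for you.
@MessagingGateway
interface EventGateway {
    @Gateway(requestChannel = ChannelManager.OUTPUT)
    void createCustomerEvent(EventCreated event);
}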
How are you expecting to call that method? I think there was a miscommunication with your statement "stream extends integration", and Artem probably understood that we extend @InboundChannelAdapter.
So, if you are actively calling this method, as it appears since you do have arguments that are passed to it, why not just use your source channel to send the data?
Usually sources do not require arguments, as they are either push-style, like the Twitter stream source that taps into Twitter, listens for events, and pushes them to the source channel, or they are polled, in which case they are invoked on an interval defined via a poller.
As Artem pointed out, if your intention is to call this method from your business flow and deal with the return value while triggering a message flow, then check his link to the docs. A sketch of sending directly to the source channel follows below.
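Here is that idea as a hedged sketch in the classic annotation-based Spring Cloud Stream model of that era (class names are illustrative; EventCreated stands in for the question's event type).

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Component;

// Stand-in for the question's event type.
class EventCreated {
    final String customerId;
    final String thingId;
    EventCreated(String customerId, String thingId) {
        this.customerId = customerId;
        this.thingId = thingId;
    }
}

@EnableBinding(Source.class)
@Component
class CustomerEventPublisher {

    private final Source source;

    CustomerEventPublisher(Source source) {
        this.source = source;
    }

    // Called from your own business code with whatever arguments you need;
    // the payload goes straight to the bound output destination (e.g. a Kafka topic).
    void publish(String customerId, String thingId) {
        source.output().send(
                MessageBuilder.withPayload(new EventCreated(customerId, thingId)).build());
    }
}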

What is OncePerRequestFilter?

The documentation says org.springframework.web.filter.OncePerRequestFilter "guarantees to be just executed once per request". Under what circumstances might a Filter be executed more than once per request?
Under what circumstances might a Filter be executed more than once per request?
You could have the filter on the filter chain more than once.
The request could be dispatched to a different (or the same) servlet using the request dispatcher.
A common use-case is in Spring Security, where authentication and access control functionality is typically implemented as filters that sit in front of the main application servlets. When a request is dispatched using a request dispatcher, it has to go through the filter chain again (or possibly a different one) before it gets to the servlet that is going to deal with it. The problem is that some of the security filter actions should only be performed once for a request. Hence the need for this filter.
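For instance, a hedged sketch of the second scenario, assuming a Servlet 4.0+ container (the filter name and mapping are illustrative): a filter mapped to both REQUEST and FORWARD dispatches runs again when a servlet forwards the same request.

import java.io.IOException;
import javax.servlet.DispatcherType;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;

// Mapped to REQUEST and FORWARD dispatches, so if a servlet later calls
// request.getRequestDispatcher("/other").forward(request, response),
// this filter executes twice for the same client request.
@WebFilter(urlPatterns = "/*", dispatcherTypes = {DispatcherType.REQUEST, DispatcherType.FORWARD})
public class AuditFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        System.out.println("AuditFilter invoked"); // printed once per dispatch, not once per request
        chain.doFilter(request, response);
    }
}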
To understand the role of OncePerRequestFilter, we need to first clearly understand how a normal filter behaves.
When you want some specific code to execute just before or after servlet execution, you create a filter which works as:
code1 ===> servlet execution (using chain.doFilter()) ===> code2
So code1 executes before servlet and code2 after servlet execution.
But during servlet execution the request can be dispatched to another servlet, and that other servlet may have this same filter mapped to it. In that case, the filter will execute again.
OncePerRequestFilter prevents this behavior. For one request, this filter will execute exactly once (no more, no less). This behavior is very useful when working with security authentication.
Servlet 3.0 added the possibility of processing requests in separate threads (async dispatches). To avoid a filter executing multiple times in such cases, the Spring Web project provides a special kind of GenericFilterBean: OncePerRequestFilter. It extends GenericFilterBean directly and, like that class, is located in the org.springframework.web.filter package. Its doFilter method checks whether the filter has already been applied by looking for a request attribute named after the filter with a ".FILTERED" suffix. In addition, it defines an abstract doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) method whose implementations contain the code the filter should execute if it hasn't been applied yet.
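A heavily simplified sketch of that mechanism (not the real Spring source, which also deals with async and error dispatches; the class name here is made up):

import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public abstract class NaiveOncePerRequestFilter {

    private final String alreadyFilteredAttribute = getClass().getName() + ".FILTERED";

    public void doFilter(HttpServletRequest request, HttpServletResponse response, FilterChain chain)
            throws ServletException, IOException {
        if (request.getAttribute(alreadyFilteredAttribute) != null) {
            // Already applied for this request (e.g. a forward or async dispatch): just continue.
            chain.doFilter(request, response);
        } else {
            request.setAttribute(alreadyFilteredAttribute, Boolean.TRUE);
            doFilterInternal(request, response, chain);
        }
    }

    protected abstract void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain chain) throws ServletException, IOException;
}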
Under what circumstances might a Filter be executed more than once per request?
A filter may be invoked as part of REQUEST or ASYNC dispatches that occur in separate threads. We should use OncePerRequestFilter when, for example, we make a database call to retrieve the principal, i.e. the authenticated user; there is no point in doing this more than once. After that, we set the principal on the security context:
Authentication auth = jwtTokenProvider.getAuthentication(token);
SecurityContextHolder.getContext().setAuthentication(auth);
where jwtTokenProvider is your service for getting an Authentication from the JWT token.
OncePerRequestFilter implements logic to make sure that the filter's doFilter() method is executed only once per request.
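Putting the earlier snippet into context, here is a hedged sketch of such a filter (the JwtTokenProvider interface, the Authorization header handling, and the class name are assumptions, not part of the original answer).

import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.web.filter.OncePerRequestFilter;

public class JwtAuthenticationFilter extends OncePerRequestFilter {

    private final JwtTokenProvider jwtTokenProvider;

    public JwtAuthenticationFilter(JwtTokenProvider jwtTokenProvider) {
        this.jwtTokenProvider = jwtTokenProvider;
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        // Runs once per request, even if the request is dispatched again.
        String token = request.getHeader("Authorization");
        if (token != null) {
            Authentication auth = jwtTokenProvider.getAuthentication(token);
            SecurityContextHolder.getContext().setAuthentication(auth);
        }
        filterChain.doFilter(request, response);
    }

    // Assumed service from the answer: turns a JWT into an Authentication.
    public interface JwtTokenProvider {
        Authentication getAuthentication(String token);
    }
}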
