Spring Reactive: reading the WebClient response

I am in the process of learning about Spring Reactive and have the following basic reactive demo code.
import org.springframework.web.reactive.function.client.WebClient;
// other imports etc

@Slf4j
class WebClientTests {

    private static String baseUrl = "http://localhost:8080";
    private static WebClient client = WebClient.create(baseUrl);

    @Test
    void testWebClient() {
        Instant start = Instant.now();
        Flux.just(1, 2, 3)
                .map(i -> client.get().uri("/person/{id}", i).retrieve().bodyToFlux(Person.class))
                .subscribe(s -> {
                    log.info("subscribed: {}", s);
                });
        log.info("Elapsed time: " + Duration.between(start, Instant.now()).toMillis() + "ms");
    }
}
It outputs the following.
20:32:55.251 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4
20:32:55.652 [main] INFO com.example.reactive.reactivedemo.WebClientTests - subscribed: MonoFlatMap
20:32:55.652 [main] INFO com.example.reactive.reactivedemo.WebClientTests - subscribed: MonoFlatMap
20:32:55.652 [main] INFO com.example.reactive.reactivedemo.WebClientTests - subscribed: MonoFlatMap
20:32:55.668 [main] INFO com.example.reactive.reactivedemo.WebClientTests - Elapsed time: 84ms
However, I am unsure why it's not outputting the value of the GET request. It's not actually triggering the endpoint.

You almost certainly want to use flatMap(), not map(), on your .map(i -> client.get().uri... line.
map() is used for synchronous transformations, where you return the actual value you want to map to. Here you're not returning a value - you're returning a publisher from your map method, so that publisher is just returned as is. It's never subscribed to, and since nothing happens until you subscribe, your web request never executes.
flatMap() is used for non-blocking transformations where you return a publisher that emits the value, or values, you want to map to. That publisher is subscribed to as part of your reactive chain, and the values it emits are passed down the chain to the next operator.
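For illustration, here is a minimal sketch of the corrected test (assuming the same /person/{id} endpoint and Person class from the question); blockLast() is only there so the JUnit method waits for the responses instead of returning immediately:

@Test
void testWebClient() {
    Instant start = Instant.now();
    Flux.just(1, 2, 3)
            // flatMap subscribes to each inner publisher and merges the emitted values
            .flatMap(i -> client.get().uri("/person/{id}", i).retrieve().bodyToFlux(Person.class))
            // log each Person as it arrives
            .doOnNext(p -> log.info("received: {}", p))
            // block until the merged flux completes so the test sees the responses
            .blockLast();
    log.info("Elapsed time: " + Duration.between(start, Instant.now()).toMillis() + "ms");
}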

Related

Immediately return first emitted value from two Monos while continuing to process the other asynchronously

I have two data sources, each returning a Mono:
class CacheCustomerClient {
    Mono<Entity> createCustomer(Customer customer)
}

class MasterCustomerClient {
    Mono<Entity> createCustomer(Customer customer)
}
Callers to my application are hitting a Spring WebFlux controller:
@PostMapping
@ResponseStatus(HttpStatus.CREATED)
public Flux<Entity> createCustomer(@RequestBody Customer customer) {
    return customerService.createNewCustomer(customer);
}
As long as either data source successfully completes its create operation, I want to immediately return a success response to the caller; however, I still want my service to continue processing the result of the other Mono stream, in the event that an error was encountered, so it can be logged.
The problem seems to be that as soon as a value is returned to the controller, a cancel signal is propagated back through the stream by Spring WebFlux and, thus, no information is logged about a failure.
Here's one attempt:
public Flux<Entity> createCustomer(final Customer customer) {
    var cacheCreate = cacheClient
            .createCustomer(customer)
            .doOnError(WebClientResponseException.class,
                    err -> log.error("Customer creation failed in cache"));

    var masterCreate = masterClient
            .createCustomer(customer)
            .doOnError(WebClientResponseException.class,
                    err -> log.error("Customer creation failed in master"));

    return Flux.firstWithValue(cacheCreate, masterCreate)
            .onErrorMap((err) -> new Exception("Customer creation failed in cache and master"));
}
Flux.firstWithValue() is great for emitting the first non-error value, but then whichever source is lagging behind is cancelled, meaning that any error is never logged out. I've also tried scheduling these two sources on their own Schedulers and that didn't seem to help either.
How can I perform these two calls asynchronously, and emit the first value to the caller, while continuing to listen for emissions on the slower source?
You can achieve that by transforming your operators into "hot" publishers using the share() operator:
The first subscriber launches the upstream operator, and additional subscribers get the result cached from the first subscriber:
Further Subscriber will share [...] the same result.
Once a second subscription has been done, the publisher is not cancellable:
It's worth noting this is an un-cancellable Subscription.
So, to achieve your requirement:
Apply share() on each of your operators
Launch a subscription on the shared publishers to trigger processing
Use the shared publishers in your pipeline (here firstWithValue)
Example:
import java.time.Duration;

import reactor.core.publisher.Mono;

public class TestUncancellableMono {

    // Mock a mono succeeding quickly
    static Mono<String> quickSuccess() {
        return Mono.delay(Duration.ofMillis(200)).thenReturn("SUCCESS !");
    }

    // Mock a mono taking more time and ending in error.
    static Mono<String> longError() {
        return Mono.delay(Duration.ofSeconds(1))
                   .<String>then(Mono.error(new Exception("ERROR !")))
                   .doOnCancel(() -> System.out.println("CANCELLED"))
                   .doOnError(err -> System.out.println(err.getMessage()));
    }

    public static void main(String[] args) throws Exception {
        // Transform to hot publishers
        var sharedQuick = quickSuccess().share();
        var sharedLong = longError().share();

        // Trigger launch
        sharedQuick.subscribe();
        sharedLong.subscribe();

        // Subscribe back to get the cached result
        Mono.firstWithValue(sharedQuick, sharedLong)
            .subscribe(System.out::println, err -> System.out.println(err.getMessage()));

        // Wait for the subscriptions to end.
        Thread.sleep(2000);
    }
}
The output of the sample is:
SUCCESS !
ERROR !
We can see that the error message has been propagated properly, and that the upstream publisher has not been cancelled.

.transform / .compose duplicates Mono execution with Spring Security

While implementing an authentication solution based on Spring Security Reactive, I faced an issue where the operations in the chain got duplicated at some point. From that point on, everything was called twice.
The culprit was the .transform operator at one point in the chain. After editing the called method and replacing the operator with .flatMap, the issue was resolved and everything was only called once.
The question
According to the operator's documentation, the
function is applied to an original operator chain at assembly time to augment it with the encapsulated operators
and
is basically equivalent to chaining the operators directly.
Why did the operator .transform trigger a second subscription to the chain, then?
The context
This authentication flow takes a trusted username and retrieves its details from a webservice.
The authentication method implementing ReactiveAuthenticationManager:
@Override
public Mono<Authentication> authenticate(Authentication providedAuthentication) {
    String username = (String) providedAuthentication.getPrincipal();
    String token = (String) providedAuthentication.getCredentials();

    return Mono.just(providedAuthentication)
            .doOnNext(x -> LOGGER.debug("Starting authentication of user {}", x))
            .doOnNext(AuthenticationValidator.validateProvided)
            .then(ReactiveSecurityContextHolder.getContext())
            .map(SecurityContext::getAuthentication)
            .flatMap(auth -> AuthenticationValidator.validateCoherence(auth, providedAuthentication))
            .switchIfEmpty(Mono.defer(() -> {
                LOGGER.trace("Switch if empty before retrieving user");
                return retrieveUser(username, token);
            }))
            .doOnNext(logAccess);
}
The duplicated calls started from the supplier of .switchIfEmpty and continued until the end of the chain.
The method creating the Mono used by .switchIfEmpty:
private Mono<PreAuthenticatedAuthenticationToken> retrieveUser(String username, String token) {
    return Mono.just(username)
            .doOnNext(x -> LOGGER.trace("Before find by username"))
            .then(habileUserDetails.findByUsername(username, token))
            .cast(XXXUserDetails.class)
            .transform(rolesProvider::provideFor)
            .map(user -> new PreAuthenticatedAuthenticationToken(user, GlobalConfiguration.NO_CREDENTIAL, user.getAuthorities()))
            .doOnNext(s -> LOGGER.debug("User data retrieved from XXX"));
}
The .transform(rolesProvider::provideFor) operator is the one that has been replaced by .flatMap to resolve the issue.
The original method called by the .transform operator:
public Mono<CompleteXXXUserDetails> provideFor(Mono<XXXUserDetails> user) {
    return user
            .map(XXXUserDetails::getAuthorities)
            .map(l -> StreamHelper.transform(l, GrantedAuthority::getAuthority))
            .map(matcher::match)
            .map(enricher::enrich)
            .map(l -> StreamHelper.transform(l, SimpleGrantedAuthority::new))
            .zipWith(user, (authorities, userDetails)
                    -> CompleteXXXUserDetails.from(userDetails).withAllAuthorities(authorities));
}
Here is a trace of the execution:
DEBUG 20732 --- [ctor-http-nio-3] c.a.s.s.h.a.XXXAuthenticationManager : Starting authentication of user [REDACTED]
TRACE 20732 --- [ctor-http-nio-3] c.a.s.s.h.a.XXXAuthenticationManager : Switch if empty before retrieving user
TRACE 20732 --- [ctor-http-nio-3] c.a.s.s.h.a.XXXAuthenticationManager : Before find by username
TRACE 20732 --- [ctor-http-nio-3] c.a.s.s.xxx.user.UserRetriever : Between request and call
TRACE 20732 --- [ctor-http-nio-3] c.a.s.s.h.u.retriever.UserRetrieverV01: Calling webservice v01
TRACE 20732 --- [ctor-http-nio-3] c.a.s.s.h.a.XXXAuthenticationManager : Before find by username
TRACE 20732 --- [ctor-http-nio-3] c.a.s.s.xxx.user.UserRetriever : Between request and call
TRACE 20732 --- [ctor-http-nio-3] c.a.s.s.h.u.retriever.UserRetrieverV01: Calling webservice v01
For information, I'm using Spring Boot 2.1.2.RELEASE.
This answer doesn't address the root cause, but rather explains how a transform could be applied several times when subscribed to several times, which is not the case in the OP's issue. The original text has been edited into a quote.
That statement is only valid when the transform is applied as a top-level operator in the chain you subscribe to. Here you are applying it within retrieveUser, which is invoked inside a Mono.defer (whose goal is to execute that code for each different subscription).
(edit:) So if that defer is subscribed to x times, the transform Function will be applied x times as well.
compose is basically transform-inside-a-defer, by the way.
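To make that concrete, here is a tiny standalone illustration (not the OP's code, just a sketch) showing that a transform applied inside a Mono.defer runs once per subscription:

import java.util.concurrent.atomic.AtomicInteger;

import reactor.core.publisher.Mono;

public class TransformInsideDefer {

    public static void main(String[] args) {
        AtomicInteger applied = new AtomicInteger();

        // transform runs its Function at assembly time; wrapping the assembly in
        // defer means that assembly happens again for every new subscription.
        Mono<String> mono = Mono.defer(() ->
                Mono.just("value").transform(source -> {
                    System.out.println("transform applied " + applied.incrementAndGet() + " time(s)");
                    return source.map(String::toUpperCase);
                }));

        mono.subscribe(System.out::println); // prints "transform applied 1 time(s)" then VALUE
        mono.subscribe(System.out::println); // prints "transform applied 2 time(s)" then VALUE
    }
}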
The issue lies in the fact that you do user.whatever(...).zipWith(user, ...).
With transform, this translates to:
Mono<XXXUserDetails> user = Mono.just(username)
        .doOnNext(x -> LOGGER.trace("Before find by username"))
        .then(habileUserDetails.findByUsername(username, token))
        .cast(XXXUserDetails.class);

return user.whatever(...)
        .zipWith(user, ...);
Whereas with flatMap I assume you did something to the effect of flatMap(u -> provideFor(Mono.just(u)))? If so, that would translate to:
Mono<XXXUserDetails> user = Mono.just(username)
        .doOnNext(x -> LOGGER.trace("Before find by username"))
        .then(habileUserDetails.findByUsername(username, token))
        .cast(XXXUserDetails.class);

return user.flatMap(u -> {
    Mono<XXXUserDetails> capture = Mono.just(u);
    return capture.whatever(...)
            .zipWith(capture, ...);
});
You can see how both versions subscribe twice to a Mono<XXXUserDetails> due to zipWith.
The reason it seems to subscribe only once with flatMap is that it captures the output of the upstream pipeline and applies the provideFor function to that capture. The capture (Mono.just(u)) is still subscribed to twice, but it acts as a cache and doesn't carry any logic / logs / etc.
With transform, there is no capture. The provideFor function is applied directly to the upstream pipeline, which makes the double subscription quite visible.
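Here is another small standalone sketch (again not the OP's code; the decorate function is a stand-in for provideFor) that makes the double subscription caused by zipWith visible, and shows why the Mono.just capture hides it:

import java.util.function.Function;

import reactor.core.publisher.Mono;

public class TransformVsFlatMap {

    // Like provideFor, this function subscribes to its input twice because of zipWith
    static Function<Mono<String>, Mono<String>> decorate =
            source -> source.map(String::toUpperCase)
                            .zipWith(source, (upper, original) -> upper + "/" + original);

    public static void main(String[] args) {
        // A "cold" source that logs every subscription
        Mono<String> user = Mono.fromCallable(() -> {
            System.out.println("fetching user...");
            return "alice";
        });

        // With transform, decorate is applied to the real source:
        // zipWith re-subscribes, so "fetching user..." prints twice.
        user.transform(decorate).block();

        // With flatMap, decorate is applied to a Mono.just capture of the emitted
        // value, so the real source is only subscribed to once.
        user.flatMap(u -> decorate.apply(Mono.just(u))).block();
    }
}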

Performing actor lookup with Akka actorFor

I have the following Akka actor:
public class MyActor extends AbstractActor {

    protected Logger log = LoggerFactory.getLogger(this.getClass());

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .matchAny(message -> {
                    String myFullName = self().path().toString();
                    String myName = self().path().name();

                    ActorRef reincarnatedMe = context().actorFor(self().path().name());
                    String reincarnatedFullName = reincarnatedMe.path().toString();
                    String reincarnatedName = reincarnatedMe.path().name();

                    log.info("myFullName: {}", myFullName);
                    log.info("myName: {}", myName);
                    log.info("reincarnatedFullName: {}", reincarnatedFullName);
                    log.info("reincarnatedName: {}", reincarnatedName);
                }).build();
    }
}
At runtime it produces this output:
05:43:14.617 [MySystem-akka.actor.default-dispatcher-4] INFO myapp.actors.MyActor - myFullName: akka://MySystem/user/MyActor
05:43:14.623 [MySystem-akka.actor.default-dispatcher-4] INFO myapp.actors.MyActor - myName: MyActor
05:43:14.623 [MySystem-akka.actor.default-dispatcher-4] INFO myapp.actors.MyActor - reincarnatedFullName: akka://MySystem/user/MyActor/MyActor
05:43:14.623 [MySystem-akka.actor.default-dispatcher-4] INFO myapp.actors.MyActor - reincarnatedName: MyActor
My understanding was that context().actorFor(...) doesn't create a new actor; rather, it finds an existing actor that matches the path/string you provide and returns a reference to it.
However, it appears that in my code above, self() becomes the parent of reincarnatedMe, as evidenced by myFullName simply being "MySystem/user/MyActor" whereas reincarnatedFullName is "MySystem/user/MyActor/MyActor"...
Am I reading this right? If so, how can I invoke context().actorFor(...) (or any other method for that matter) such that myFullName becomes the same as reincarnatedFullName (so that self() and reincarnatedMe reference the same actor)? And if I'm not reading this right, why is myFullName different from reincarnatedFullName?
Update:
public class AnotherActor extends AbstractActor { ... }
// Inside MyActor#createReceive:
ActorSelection anotherActorSel = context().actorSelection("AnotherActor");
anotherActorSel.tell(new SomeMessage(), self());
First, ActorContext.actorFor(String) is deprecated in favor of ActorContext.actorSelection(String). This method returns an ActorSelection rather than an ActorRef, but you can still send a message to an ActorSelection (such as an Identify, which is automatically answered with an ActorIdentity message).
The documentation for the actorFor method says that "Absolute URIs like akka://appname/user/actorA are looked up as described for look-ups by actorOf(ActorPath)." I can't find documentation for an actorOf(ActorPath) method, but the other actorOf methods state that they create new actors, so I suspect this one does the same. The behavior you've found is likely part of the reason the method was deprecated.
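For completeness, here is a sketch of the actorSelection approach (written against Akka's classic Java API; the exact ActorIdentity accessors may vary slightly between versions, so treat this as an outline). A bare name like "MyActor" is resolved relative to the current actor, which is why the extra /MyActor segment appeared; selecting the absolute path and confirming it with Identify/ActorIdentity points back at the same incarnation:

import akka.actor.AbstractActor;
import akka.actor.ActorIdentity;
import akka.actor.ActorSelection;
import akka.actor.Identify;

public class MyActor extends AbstractActor {

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                // Akka answers an Identify with an ActorIdentity carrying the resolved ref
                .match(ActorIdentity.class, identity ->
                        identity.getActorRef().ifPresent(ref -> System.out.println(
                                "resolved: " + ref.path() + ", same as self: " + ref.equals(self()))))
                .matchAny(message -> {
                    // Select this actor's own absolute path instead of a relative name
                    ActorSelection sameActor = context().actorSelection(self().path().toString());
                    sameActor.tell(new Identify("lookup-1"), self());
                })
                .build();
    }
}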

Getting reference to context in Spring Reactor

I am using Spring projectreactor reactor-core 3.1.8.RELEASE. I am implementing a logging framework for my microservice to produce JSON audit logs, so I used the context to store fields such as the user ID, collaboration ID, component name and a few other fields that are common across the request life-cycle. Since ThreadLocal cannot be used in reactive services to store these elements, I have to use the context. But getting a reference to the context is apparently very difficult. I can get a reference to the context from the Signal in the doOnEach callback, and that's it. If I use doOnEach, it gets called for all signal types and I am unable to isolate onError, onNext and so on. Moreover, if an error occurs in between, then all the subsequent doOnEach calls are still invoked, so the logs get repeated with several onError log types.
There is very limited documentation on how to get a reference to the context object in Spring Reactor. Any help regarding a better way to generate audit logs that carry collaboration IDs and other request-specific IDs, stored and propagated across function calls and external invocations, is appreciated.
Code snippets:
In the WebFilter, I am setting a few key-value pairs as follows:
override fun filter(exchange: ServerWebExchange, filterChain: WebFilterChain): Mono<Void> {
    // add the context variables at the end of the chain as the context moves from
    // downstream to upstream.
    return filterChain.filter(exchange)
            .subscriberContext { context ->
                var ctx = context.put(RestRequestInfo::class.java, restRequestInfo(exchange))
                ctx = ctx.put(COLLABORATION_ID, UUID.randomUUID().toString())
                ctx = ctx.put(COMPONENT_NAME, "sample-component-name")
                ctx = ctx.put(USER_NAME, "POSTMAN")
                ctx
            }
}
Then I want to use the key-value pairs added above in all subsequent logs, so that log aggregators like Splunk can correlate all the JSON logs associated with this particular request based on the collaboration ID. Right now, the only way to get values out of the context is through the doOnEach call, where we get a handle to the Signal, which in turn gives us a handle to the context. But doOnEach gets called for each and every event, irrespective of whether the corresponding step was a success or a failure:
return Mono.just(request)
        .doOnEach(**Code to log with context data**)
        .map(RequestValidations::validateRequest)
        .doOnEach(**Code to log with context data**)
        .map(RequestValidations::buildRequest)
        .map(RequestValidations::validateQueryParameters)
        .doOnEach(**Code to log with context data**)
        .flatMap(coverageSummariesGateway::getCoverageSummaries)
        .doOnEach(**Code to log with context data**)
        .map({ coverageSummaries ->
            getCoverageSummariesResponse(coverageSummaries, serviceReferenceId) })
        .doOnEach(**Code to log with context data**)
        .flatMap(this::renderSuccess)
        .doOnEach(**Code to log with context data**)
        .doOnError { logger.info("ERROR OCCURRED") }
Thank you!
You could do something like the following:

return Mono.just(request)
        .doOnEach(**Code to log with context data**)
        .flatMap(r -> withMDC(r, RequestValidations::validateRequest))

The following method populates the mapped diagnostic context (MDC) so you have it automatically in your logs (depending on your logging pattern). E.g. Logback has %X{traceId}, where traceId is a key in the tracingContext map.
public static <T, R> Mono<R> withMDC(T value, Function<T, Mono<R>> f) {
    return Mono.subscriberContext()
            .flatMap(ctx -> {
                Optional<Map> tracingContext = ctx.getOrEmpty("tracing-context-key");
                if (tracingContext.isPresent()) {
                    try {
                        MDC.setContextMap(tracingContext.get());
                        return f.apply(value);
                    } finally {
                        MDC.clear();
                    }
                } else {
                    return f.apply(value);
                }
            });
}
It is not ideal; hopefully logging frameworks will eventually improve this so that the context is injected automatically.
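If you prefer to stay with doOnEach, here is a rough sketch of how to react only to specific signal types and read values from the signal's context (the method names, the COLLABORATION_ID key and the log messages are placeholders, not an established API); this addresses the "called for all signal types" problem:

import java.util.function.Consumer;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Signal;

public class ContextLogging {

    private static final Logger logger = LoggerFactory.getLogger(ContextLogging.class);

    // Only log onNext signals, reading the collaboration ID from the signal's context
    public static <T> Consumer<Signal<T>> logOnNext(String message) {
        return signal -> {
            if (!signal.isOnNext()) {
                return; // ignore onComplete / onError / onSubscribe here
            }
            String collaborationId = signal.getContext().getOrDefault("COLLABORATION_ID", "n/a");
            logger.info("[collaborationId={}] {} value={}", collaborationId, message, signal.get());
        };
    }

    // Log onError signals; note that an error propagates through every downstream
    // doOnEach, so attach this only once, near the end of the chain
    public static <T> Consumer<Signal<T>> logOnError(String message) {
        return signal -> {
            if (!signal.isOnError()) {
                return;
            }
            String collaborationId = signal.getContext().getOrDefault("COLLABORATION_ID", "n/a");
            logger.error("[collaborationId={}] {}", collaborationId, message, signal.getThrowable());
        };
    }
}

Each **Code to log with context data** placeholder above could then become something like .doOnEach(ContextLogging.logOnNext("after validation")), with the key name matching whatever was put into the context in the WebFilter.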

Is there any way I can log the currently executing advice at a join point?

I have a scenario where multiple pieces of advice are applied at a single join point because they share a common pointcut expression. Is there any way to log which advice is currently being executed, without giving them different log statements (by means of some method invocation)?
/*
 * Advice to check Asset Service 1 response
 */
@Around(value = "@annotation(vs2a) && args(mfa)")
public MessageFlowAggregator checkAsset1Response(ProceedingJoinPoint joinPoint, ValidateStage2Advice vs2a, MessageFlowAggregator mfa) throws Throwable {
    log.debug(">>> matching advice on {}", joinPoint);
    if (mfa != null) {
        mfa = (MessageFlowAggregator) joinPoint.proceed();
        log.debug("<<< returning advice on {}", joinPoint);
    } else {
        log.debug("<<< failing advice on {}", joinPoint);
    }
    return mfa;
}

/*
 * Advice to check Customer Service 2 response
 */
@Around(value = "@annotation(vs2a) && args(mfa)")
public MessageFlowAggregator checkCustomer2Response(ProceedingJoinPoint joinPoint, ValidateStage2Advice vs2a, MessageFlowAggregator mfa) throws Throwable {
    log.debug(">>> matching advice on {}", joinPoint);
    if (mfa != null) {
        mfa = (MessageFlowAggregator) joinPoint.proceed();
        log.debug("<<< returning advice on {}", joinPoint);
    } else {
        log.debug("<<< failing advice on {}", joinPoint);
    }
    return mfa;
}
Both of the above advices print the same log statement and I am not able to differentiate between them.
Thanks in advance!
Well, you could just add a prefix to each log message. The alternative would be:

log.debug("Executing advice: " + new Exception().getStackTrace()[0]);

And of course, if you print the same thing twice from within one advice, you should store it in a variable first so as not to make creating exception objects and stack traces even more expensive. But I think for debugging purposes it should be okay.
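As a small sketch of that idea (the currentAdvice helper is made up, not part of any library), the advice's own class and method name can be computed once from the call stack and reused in each log call:

// Hypothetical helper: derives "ClassName.methodName" of the calling advice from the
// call stack, so every advice can log its own identity without hard-coded prefixes.
private static String currentAdvice() {
    StackTraceElement caller = new Exception().getStackTrace()[1]; // [0] is currentAdvice itself
    return caller.getClassName() + "." + caller.getMethodName();
}

@Around(value = "@annotation(vs2a) && args(mfa)")
public MessageFlowAggregator checkAsset1Response(ProceedingJoinPoint joinPoint, ValidateStage2Advice vs2a, MessageFlowAggregator mfa) throws Throwable {
    String advice = currentAdvice(); // computed once, reused below
    log.debug(">>> {} matching advice on {}", advice, joinPoint);
    if (mfa != null) {
        mfa = (MessageFlowAggregator) joinPoint.proceed();
        log.debug("<<< {} returning advice on {}", advice, joinPoint);
    } else {
        log.debug("<<< {} failing advice on {}", advice, joinPoint);
    }
    return mfa;
}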
I think it's better to use the logging framework's logging pattern to get the method name in every log line printed. I updated the logging pattern to get the desired output in the logs.
Pattern:
logging.pattern.console=%d %-5level %t %logger{2}.%M : %msg%n
Logs are then printed as:
2018-02-01 19:17:39,798 INFO main o.s.w.s.s.SaajSoapMessageFactory.afterPropertiesSet : Creating SAAJ 1.3 MessageFactory with SOAP 1.1 Protocol
