Spring WebFlux - lazily initialized in-memory cache

Can someone help me with a correct pattern for a stateful service in Spring WebFlux? I have a REST service which communicates with an external API and needs to fetch an auth token from that API during the first call and cache it for reuse in all subsequent calls. Currently I have code which works, but concurrent calls cause multiple token requests. Is there a way to handle the concurrency?
@Service
@RequiredArgsConstructor
public class ExternalTokenRepository {

    private final WebClient webClient;

    private Object cachedToken = null;

    public Mono<Object> getToken() {
        if (cachedToken != null) {
            return Mono.just(cachedToken);
        } else {
            return webClient.post()
                    //...
                    .exchangeToMono(response -> {
                        //...
                        return response.bodyToMono(Object.class);
                    })
                    .doOnNext(token -> cachedToken = token);
        }
    }
}
UPDATE: The token I receive has an expiration time and I need to refresh it after a while. The refresh request should be called only once as well.

You can initialize the Mono in the constructor and use the cache operator:
@Service
public class ExternalTokenRepository {

    private final Mono<Object> cachedToken;

    public ExternalTokenRepository(WebClient webClient) {
        this.cachedToken = webClient.post()
                //...
                .exchangeToMono(response -> {
                    //...
                    return response.bodyToMono(Object.class);
                })
                .cache(); // this is the important part
    }

    public Mono<Object> getToken() {
        return cachedToken;
    }
}
UPDATE: the cache operator also supports a TTL based on the returned value: https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html#cache-java.util.function.Function-java.util.function.Function-java.util.function.Supplier-
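For the expiring-token update above, a minimal sketch of that TTL overload, assuming a hypothetical Token type that exposes its remaining lifetime via getExpiresInSeconds() (that name is an assumption, not from the question):

import java.time.Duration;

@Service
public class ExternalTokenRepository {

    private final Mono<Token> cachedToken;

    public ExternalTokenRepository(WebClient webClient) {
        this.cachedToken = webClient.post()
                //...
                .exchangeToMono(response -> response.bodyToMono(Token.class))
                .cache(
                        // keep the value slightly shorter than its lifetime so a fresh
                        // token is fetched (once) before the old one expires
                        token -> Duration.ofSeconds(Math.max(token.getExpiresInSeconds() - 30, 0)),
                        error -> Duration.ZERO,  // don't cache failures
                        () -> Duration.ZERO);    // don't cache empty completions
    }

    public Mono<Token> getToken() {
        return cachedToken;
    }
}

Concurrent subscribers during the first call (or during a refresh) attach to the same in-flight Mono, so the token endpoint is only hit once per TTL window.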

Related

Configuring AWS Signing in Reactive Elasticsearch Configuration

In one of our services I tried to configure AWS request signing in the Spring Data Reactive Elasticsearch configuration.
Spring lets you configure the WebClient through withWebClientConfigurer:
ClientConfiguration clientConfiguration = ClientConfiguration.builder()
    .connectedTo("localhost:9200")
    .usingSsl()
    .withWebClientConfigurer(
        webClient -> {
            return webClient.mutate().filter(new AwsSigningInterceptor()).build();
        })
    // ... other options to configure if required
    .build();
Through this we can configure request signing; however, AWS signing requires the URL, query params, headers and request body (for POST/PUT requests) to generate the signed headers.
Using this I created a simple exchange filter function to sign the request, but in this function I was not able to access the request body.
Below is the filter function I was trying to use:
@Component
public class AwsSigningInterceptor implements ExchangeFilterFunction
{
    private final AwsHeaderSigner awsHeaderSigner;

    public AwsSigningInterceptor(AwsHeaderSigner awsHeaderSigner)
    {
        this.awsHeaderSigner = awsHeaderSigner;
    }

    @Override
    public Mono<ClientResponse> filter(ClientRequest request, ExchangeFunction next)
    {
        // should pass request body bytes in place of new byte[]{}
        Map<String, List<String>> signingHeaders = awsHeaderSigner.createSigningHeaders(request, new byte[]{}, "es", "us-west-2");
        ClientRequest.Builder requestBuilder = ClientRequest.from(request);
        signingHeaders.forEach((key, value) -> requestBuilder.header(key, value.toArray(new String[0])));
        return next.exchange(requestBuilder.build());
    }
}
I also tried to access the request body inside the ExchangeFilterFunction using the approach below:
ClientRequest.from(newRequest.build())
    .body(
        (outputMessage, context) -> {
            ClientHttpRequestDecorator loggingOutputMessage =
                new ClientHttpRequestDecorator(outputMessage) {

                    @Override
                    public Mono<Void> writeWith(Publisher<? extends DataBuffer> body) {
                        log.info("Inside writeWith method");
                        body =
                            DataBufferUtils.join(body)
                                .map(
                                    content -> {
                                        // Log request body using
                                        // 'content.toString(StandardCharsets.UTF_8)'
                                        String requestBody =
                                            content.toString(StandardCharsets.UTF_8);
                                        Map<String, Object> signedHeaders =
                                            awsSigner.getSignedHeaders(
                                                request.url().getPath(),
                                                request.method().name(),
                                                multimap,
                                                requestHeadersMap,
                                                Optional.of(
                                                    requestBody.getBytes(StandardCharsets.UTF_8)));
                                        log.info("Signed Headers generated: {}", signedHeaders);
                                        signedHeaders.forEach(
                                            (key, value) -> newRequest.header(key, value.toString()));
                                        return content;
                                    });
                        log.info("Before returning the body");
                        return super.writeWith(body);
                    }

                    @Override
                    public Mono<Void> setComplete() { // This is for requests with no body (e.g. GET).
                        Map<String, Object> signedHeaders =
                            awsSigner.getSignedHeaders(
                                request.url().getPath(),
                                request.method().name(),
                                multimap,
                                requestHeadersMap,
                                Optional.of("".getBytes(StandardCharsets.UTF_8)));
                        log.info("Signed Headers generated: {}", signedHeaders);
                        signedHeaders.forEach(
                            (key, value) -> newRequest.header(key, value.toString()));
                        return super.setComplete();
                    }
                };
            return originalBodyInserter.insert(loggingOutputMessage, context);
        })
    .build();
But with the above approach I was not able to change the request headers, as adding headers throws an UnsupportedOperationException inside the writeWith method.
Has anyone used Spring Data Reactive Elasticsearch and configured it to sign requests with AWS signed headers?
Any help would be highly appreciated.
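No answer is recorded here, but for reference, a hedged sketch of what the signing-header computation itself (the part the question's AwsHeaderSigner hides) typically looks like with the AWS SDK v2. Aws4Signer, Aws4SignerParams and SdkHttpFullRequest are real SDK classes; the class name, method signature and credential wiring below are assumptions for illustration, and the body bytes still have to be obtained as discussed above:

import java.io.ByteArrayInputStream;
import java.net.URI;
import java.util.List;
import java.util.Map;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.auth.signer.Aws4Signer;
import software.amazon.awssdk.auth.signer.params.Aws4SignerParams;
import software.amazon.awssdk.http.SdkHttpFullRequest;
import software.amazon.awssdk.http.SdkHttpMethod;
import software.amazon.awssdk.regions.Region;

public class AwsHeaderSigner {

    private final Aws4Signer signer = Aws4Signer.create();

    public Map<String, List<String>> createSigningHeaders(URI uri, SdkHttpMethod method, byte[] bodyBytes, String service, String region) {
        // Mirror the outgoing WebClient request as an SDK-level request.
        // Note: query parameters may need to be added explicitly via rawQueryParameter(...)
        SdkHttpFullRequest unsigned = SdkHttpFullRequest.builder()
                .method(method)
                .uri(uri)
                .contentStreamProvider(() -> new ByteArrayInputStream(bodyBytes))
                .build();

        Aws4SignerParams params = Aws4SignerParams.builder()
                .awsCredentials(DefaultCredentialsProvider.create().resolveCredentials())
                .signingName(service)              // e.g. "es"
                .signingRegion(Region.of(region))
                .build();

        // The signed request carries the Authorization / X-Amz-* headers to copy onto the ClientRequest
        return signer.sign(unsigned, params).headers();
    }
}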

How to properly get an InputStreamResource into a ResponseEntity in WebFlux?

I have a method that fetches a PDF from another web service, but we have to fetch a cross reference ID before we can make the call:
PdfService.groovy
@Service
class PdfService {

    @Autowired
    WebClient webClient

    @Autowired
    CrossRefService crossRefService

    InputStreamResource getPdf(String userId, String pdfId) {
        def pdf = crossRefService
            .getCrossRefId(userId)
            .flatMapMany(crossRefResponse -> {
                return webClient
                    .get()
                    .uri("https://some-url/${pdfId}.pdf", {
                        it.queryParam("crossRefId", crossRefResponse.id)
                        it.build(pdfId)
                    })
                    .accept(MediaType.APPLICATION_PDF)
                    .retrieve()
                    .bodyToFlux(DataBuffer)
            })
        getInputStreamFromFluxDataBuffer(pdf)
    }

    // https://manhtai.github.io/posts/flux-databuffer-to-inputstream/
    InputStreamResource getInputStreamFromFluxDataBuffer(Flux<DataBuffer> data) throws IOException {
        PipedOutputStream osPipe = new PipedOutputStream();
        PipedInputStream isPipe = new PipedInputStream(osPipe);
        DataBufferUtils.write(data, osPipe)
            .subscribeOn(Schedulers.elastic())
            .doOnComplete(() -> {
                try {
                    osPipe.close();
                } catch (IOException ignored) {
                }
            })
            .subscribe(DataBufferUtils.releaseConsumer());
        new InputStreamResource(isPipe);
    }
}
PdfController.groovy
@RestController
@RequestMapping("/pdf/{pdfId}.pdf")
class PdfController {

    @Autowired
    PdfService service

    @GetMapping
    Mono<ResponseEntity<Resource>> getPdf(@AuthenticationPrincipal Jwt jwt, @PathVariable String pdfId) {
        def pdf = service.getPdf(jwt.claims.userId, pdfId)
        Mono.just(ResponseEntity.ok().body(pdf))
    }
}
When I run my service class in an integration test, everything works fine, and the InputStreamResource has a content length of 240,000. However, when I make the same call through the controller, it seems the internal cross reference call is never made.
What is the correct way to place an InputStreamResource into a Publisher? Or is it even needed?
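No answer is recorded here, but one commonly suggested direction, sketched in Java under stated assumptions, is to skip the InputStream bridge entirely and let WebFlux stream the buffers itself; getPdfData is an assumed variant of getPdf that returns the WebClient's Flux<DataBuffer> without converting or subscribing to it:

@GetMapping(produces = MediaType.APPLICATION_PDF_VALUE)
Flux<DataBuffer> getPdf(@AuthenticationPrincipal Jwt jwt, @PathVariable String pdfId) {
    // WebFlux writes the DataBuffers straight to the response; no InputStream bridge required
    return service.getPdfData(jwt.getClaims().get("userId").toString(), pdfId);
}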

Spring Boot WebClient and writing Pact contracts

So I'm trying to figure out how to write consumer contracts for the following class. I have written JUnit tests fine using MockWebServer.
However, for Pact testing I'm struggling and can't see how you get the WebClient to use the response from the mock server; all the examples tend to be for RestTemplate.
public class OrdersGateway {

    public static final String PATH = "/orders";

    private final WebClient webClient;

    @Autowired
    public OrdersGateway(String baseURL) {
        this.webClient = WebClient.builder()
                .baseUrl(baseURL)
                .defaultHeader(HttpHeaders.ACCEPT, MediaType.ALL_VALUE)
                .defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
                .build();
    }

    @Override
    public OrderResponse findOrders() {
        return this.webClient
                .post()
                .uri(PATH)
                .httpRequest(httpRequest -> {
                    HttpClientRequest reactorRequest = httpRequest.getNativeRequest();
                    reactorRequest.responseTimeout(Duration.ofSeconds(4));
                })
                .exchangeToMono(response())
                .block();
    }

    private Function<ClientResponse, Mono<OrderResponse>> response() {
        return result -> {
            if (result.statusCode().equals(HttpStatus.OK)) {
                return result.bodyToMono(OrderResponse.class);
            } else {
                String exception = String.format("error: %s", result.statusCode());
                return Mono.error(new IllegalStateException(exception));
            }
        };
    }
}
It's the @Test method for verification that I'm not sure how to create. I can't see how the Pact mock server can intercept the WebClient call.
There might be an assumption that Pact automatically intercepts requests; this is not the case.
So when you write a Pact unit test, you need to explicitly configure your API client to communicate with the Pact mock service, not the real thing.
Using this example as a basis, your test might look like this:
@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "orders-gateway")
public class OrdersPactTest {

    @Pact(consumer = "orders-provider")
    public RequestResponsePact findOrders(PactDslWithProvider builder) {
        DslPart body = PactDslJsonArray.arrayEachLike()
                .uuid("id", "5cc989d0-d800-434c-b4bb-b1268499e850")
                .stringType("status", "STATUS")
                .decimalType("amount", 100.0)
                .closeObject();
        return builder
                .given("orders exist")
                .uponReceiving("a request to find orders")
                .path("/orders")
                .method("POST") // must match the method the gateway actually uses
                .willRespondWith()
                .status(200)
                .body(body)
                .toPact();
    }

    @PactTestFor(pactMethod = "findOrders")
    @Test
    public void findOrders(MockServer mockServer) throws IOException {
        OrderResponse orders = new OrdersGateway(mockServer.getUrl()).findOrders();
        // do some assertions
    }
}

Return response messages in Spring Boot

I am working with Spring Boot and an H2 database. I would like to return a 201 status when the record is inserted successfully and a 400 when it is a duplicate. I am using ResponseEntity to achieve this; for example, the following is my create method from the service:
@Override
public ResponseEntity<Object> createEvent(EventDTO eventDTO) {
    if (eventRepository.findOne(eventDTO.getId()) != null) {
        // This is a test, I am looking for the correct message
        return new ResponseEntity(HttpStatus.IM_USED);
    }
    Actor actor = actorService.createActor(eventDTO.getActor());
    Repo repo = repoService.createRepo(eventDTO.getRepo());
    Event event = new Event(eventDTO.getId(), eventDTO.getType(), actor, repo, createdAt(eventDTO));
    eventRepository.save(event);
    return new ResponseEntity(HttpStatus.CREATED);
}
This is my controller:
@PostMapping(value = "/events")
public ResponseEntity addEvent(@RequestBody EventDTO body) {
    return eventService.createEvent(body);
}
But I'm not getting any message in the browser. I am running different tests with Postman, and when I query for all the events the result is correct, but each time I make a POST I don't get any message back. I am not sure what the cause of this issue is. Any ideas?
One way to send a response to the client is to return a ResponseEntity, with the controller delegating to the service:
Controller.java
@PostMapping("/test")
public ResponseEntity<Object> testApi(@RequestBody User user) {
    System.out.println("User: " + user.toString());
    return assetService.testApi(user);
}
Service.java
public ResponseEntity testApi(User user) {
    if (user.getId() == 1)
        return new ResponseEntity("Created", HttpStatus.CREATED);
    else
        return new ResponseEntity("Used", HttpStatus.IM_USED);
    // for BAD_REQUEST (400): return new ResponseEntity("Bad Request", HttpStatus.BAD_REQUEST);
}
Tested using Postman
Status 201 Created
Status 226 IM Used
Okay, I really don't feel good about the service sending the ResponseEntity rather than the controller. You could use @ResponseStatus and an ExceptionHandler class for these cases, like below.
Create a class in an exception package:
GlobalExceptionHandler.java
@ControllerAdvice
public class GlobalExceptionHandler {

    @ResponseStatus(HttpStatus.BAD_REQUEST)
    @ExceptionHandler(DataIntegrityViolationException.class) // NOTE: you could create a custom exception class to handle duplications
    public void handleConflict() {
    }
}
Controller.java
@PostMapping(value = "/events")
@ResponseStatus(HttpStatus.CREATED) // You don't have to return any object; this will take care of the status
public void addEvent(@RequestBody EventDTO body) {
    eventService.createEvent(body);
}
Now the changed service would look like this:
Service.java
@Override
public void createEvent(EventDTO eventDTO) { // No need to return anything
    if (eventRepository.findOne(eventDTO.getId()) != null) {
        // You have to throw the same exception you mapped in the handler class
        throw new DataIntegrityViolationException("Already exists");
    }
    Actor actor = actorService.createActor(eventDTO.getActor());
    Repo repo = repoService.createRepo(eventDTO.getRepo());
    Event event = new Event(eventDTO.getId(), eventDTO.getType(), actor, repo, createdAt(eventDTO));
    eventRepository.save(event);
}
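If you also want an error message in the response body rather than just the 400 status (the original complaint was that no message shows up in the browser), a small variation on the handler works; a sketch, assuming the exception message is what you want to expose:

@ControllerAdvice
public class GlobalExceptionHandler {

    @ExceptionHandler(DataIntegrityViolationException.class)
    public ResponseEntity<String> handleConflict(DataIntegrityViolationException ex) {
        // 400 plus a plain-text body, so clients like Postman see a message
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(ex.getMessage());
    }
}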

Can I use Spring WebFlux to implement REST services which get data through Kafka request/response topics?

I'm developing a REST service which, in turn, will query a slow legacy system, so response times will be measured in seconds. We also expect massive load, so I was thinking about asynchronous/non-blocking approaches to avoid hundreds of "servlet" threads blocked on calls to the slow system.
As I see it, this can be implemented using AsyncContext, which is present in the newer Servlet API specs. I even developed a small prototype and it seems to be working.
On the other hand, it looks like I can achieve the same using Spring WebFlux.
Unfortunately I did not find any example where custom "backend" calls are wrapped with Mono/Flux. Most of the examples just reuse already-prepared reactive connectors, like ReactiveCassandraOperations.java, etc.
My data flow is the following:
JS client --> Spring RestController --> send request to Kafka topic --> read response from Kafka reply topic --> return data to client
Can I wrap the Kafka steps into a Mono/Flux, and how do I do this?
What should my RestController method look like?
Here is my simple implementation which achieves the same using the Servlet 3.1 API:
// took the idea from some Jetty examples
public class AsyncRestServlet extends HttpServlet {
    ...

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        String result = (String) req.getAttribute(RESULTS_ATTR);
        if (result == null) { // data not ready yet: schedule async processing
            final AsyncContext async = req.startAsync();
            // generate some unique request ID
            String uid = "req-" + String.valueOf(req.hashCode());
            // share it with the Kafka receiver together with the AsyncContext;
            // when the Kafka receiver gets the response, it will put it in a request
            // attribute and call async.dispatch(). This doGet() method will then be
            // called again and will send the response to the client.
            receiver.rememberKey(uid, async);
            // send request to Kafka
            sender.send(uid, param);
            // data is not ready yet, so we release the Servlet thread
            return;
        }
        // return result as html response
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println(result);
        out.close();
    }
}
Here's a short example. It's not the WebFlux client you probably had in mind, but at least it lets you use Flux and Mono for asynchronous processing, which I take to be the point of your question. The web objects should work without additional configuration, but you will of course need to configure Kafka, as the KafkaTemplate bean will not work on its own.
@Bean // Using org.springframework.web.reactive.function.server.RouterFunction<ServerResponse>
public RouterFunction<ServerResponse> sendMessageToTopic(KafkaController kafkaController) {
    return RouterFunctions.route(RequestPredicates.POST("/endpoint"), kafkaController::sendMessage);
}

@Component
public class ResponseHandler {
    public Mono<ServerResponse> getServerResponse() {
        return ServerResponse.ok().body(Mono.just(Status.SUCCESS), String.class);
    }
}

@Component
public class KafkaController {

    @Autowired
    private MyKafkaPublisher myKafkaPublisher;

    @Autowired
    private ResponseHandler responseHandler;

    public Mono<ServerResponse> sendMessage(ServerRequest request) {
        return request.bodyToMono(TopicMsgMap.class)
                // your HTTP call may not return immediately without this
                .subscribeOn(Schedulers.single()) // for a single worker thread
                .doOnNext(topicMsgMap -> myKafkaPublisher.sendMessages(topicMsgMap.getTopicMsgMap()))
                .flatMap(topicMsgMap -> responseHandler.getServerResponse());
    }
}

@Data // model class just to easily convert the ServerRequest (from json, for ex.)
// + ~@constructors
public class TopicMsgMap {
    private Map<String, String> topicMsgMap;
}

@Service // Using org.springframework.kafka.core.KafkaTemplate<String, String>
public class MyKafkaPublisher {

    @Autowired
    private KafkaTemplate<String, String> template;

    @Value("${topic1}")
    private String topic1;

    @Value("${topic2}")
    private String topic2;

    public void sendMessages(Map<String, String> topicMsgMap) {
        topicMsgMap.forEach((topic, message) -> {
            if (topic.equals("topic1")) template.send(topic1, message);
            if (topic.equals("topic2")) template.send(topic2, message);
        });
    }
}
Guessing this isn't the use-case you had in mind, but hope you find this general structure useful.
There are several approaches to this problem, including Spring Kafka's ReplyingKafkaTemplate, but continuing the approach from your Servlet API example, the solution will look something like this in Spring WebFlux.
Your controller method would look like this:
@RequestMapping(path = "/completable-future", method = RequestMethod.POST)
Mono<Response> asyncTransaction(@RequestBody RequestDto requestDto, @RequestHeader Map<String, String> requestHeaders) {
    String internalTransactionId = UUID.randomUUID().toString();
    kafkaSender.send(Request.builder()
            .transactionId(requestHeaders.get("transactionId"))
            .internalTransactionId(internalTransactionId)
            .sourceIban(requestDto.getSourceIban())
            .destIban(requestDto.getDestIban())
            .build());
    CompletableFuture<Response> completableFuture = new CompletableFuture<>();
    taskHolder.pushTask(completableFuture, internalTransactionId);
    return Mono.fromFuture(completableFuture);
}
Your TaskHolder component would be something like this:
@Component
public class TaskHolder {

    private final Map<String, CompletableFuture<Response>> taskHolder = new ConcurrentHashMap<>();

    public void pushTask(CompletableFuture<Response> task, String transactionId) {
        this.taskHolder.put(transactionId, task);
    }

    public Optional<CompletableFuture<Response>> remove(String transactionId) {
        return Optional.ofNullable(this.taskHolder.remove(transactionId));
    }
}
And finally your Kafka ResponseListener looks like this:
@Component
public class ResponseListener {

    @Autowired
    TaskHolder taskHolder;

    @KafkaListener(topics = "reactive-response-topic", groupId = "test")
    public void listen(Response response) {
        taskHolder.remove(response.getInternalTransactionId())
                .orElse(new CompletableFuture<>())
                .complete(response);
    }
}
In this example I used internalTransactionId as the correlation ID, but you can use "kafka_correlationId", which is a well-known Kafka header.
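For completeness, a hedged sketch of the ReplyingKafkaTemplate approach mentioned at the top of this answer. ReplyingKafkaTemplate and RequestReplyFuture are real Spring Kafka classes; the topic name, payload types and bean wiring here are assumptions:

@Component
public class ReplyingKafkaGateway {

    @Autowired
    private ReplyingKafkaTemplate<String, Request, Response> replyingTemplate;

    public Mono<Response> sendAndReceive(Request request) {
        ProducerRecord<String, Request> record = new ProducerRecord<>("request-topic", request);
        // sendAndReceive correlates request and reply via the kafka_correlationId header
        RequestReplyFuture<String, Request, Response> future = replyingTemplate.sendAndReceive(record);
        return Mono.fromFuture(future.completable())   // bridge the reply future into Reactor
                .map(ConsumerRecord::value);           // unwrap the reply payload
    }
}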
