I have a Spring Boot API that makes a maximum of 6 stored procedure calls using CallableStatement. I want to make these calls asynchronous. Is it possible to achieve this using CompletableFuture (Java 8)?
Database connections are typically not thread-safe. Are you planning to use one connection per call?
If yes, the following code will execute the callable statements in parallel. Note that I have used the Vavr library to simplify the exception handling.
public List<Boolean> concurrentCalls(Supplier<Connection> connectionSupplier, List<String> statements) {
    // Wrap each statement execution in a Try and expose failures as an Either
    Function<String, Either<Throwable, Boolean>> executeStatement = statement ->
            Try.of(() -> connectionSupplier.get()
                            .prepareCall(statement)
                            .execute())
                    .toEither();

    // Kick off one asynchronous execution per statement
    List<CompletableFuture<Boolean>> completableFutures = statements.stream()
            .map(statement ->
                    CompletableFuture.supplyAsync(() -> executeStatement.apply(statement))
                            .thenApply(Either::get)) // Handle exceptions as required
            .collect(Collectors.toList());

    // Wait for all executions to complete, then collect the results
    return CompletableFuture.allOf(completableFutures.toArray(new CompletableFuture[0]))
            .thenApply(any -> completableFutures.stream()
                    .map(CompletableFuture::join)
                    .collect(Collectors.toList()))
            .join();
}
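For completeness, here is a minimal usage sketch. The DataSource wiring, the class name and the procedure names are assumptions for illustration, not part of the question; it assumes concurrentCalls from above is available in the same class.
import java.sql.Connection;
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;
import javax.sql.DataSource;
import io.vavr.control.Try;

public class StoredProcedureRunner {

    private final DataSource dataSource;

    public StoredProcedureRunner(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public List<Boolean> runAll() {
        // A fresh connection is obtained per statement, so no connection is shared between threads
        Supplier<Connection> connectionSupplier = () -> Try.of(dataSource::getConnection).get();

        // Placeholder procedure names
        List<String> statements = Arrays.asList("{call proc_one()}", "{call proc_two()}");

        return concurrentCalls(connectionSupplier, statements);
    }

    // concurrentCalls(...) as defined above
}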
We are using Spring WebFlux (Project Reactor), and as part of a requirement we need to call one API from our server.
For this API call we need to cache the response, so we are using the Mono.cache operator.
It caches the response Mono<ResponseDto>, and the next time the same API call happens it is served from the cache. The following is an example implementation:
public Mono<ResponseDto> getResponse() {
if (res == null) {
res =
fetchResponse()
.onErrorMap(Exception.class, (error) -> new CustomException())
.cache(
r -> Duration.ofSeconds(r.expiresIn()),
error -> Duration.ZERO,
() -> Duration.ZERO);
}
return res;
}
The problem is that if the server makes the same API call twice at the same time (for example via Mono.zip), the response is not yet cached and we actually call the API twice.
Is there any out-of-the-box solution to this problem? Instead of caching the response, can we cache the Mono itself, so that both requests subscribe to the same Mono and are served from a single API call?
It should also work with sequential execution: I am afraid that if we cache the Mono, then once the request completes the subscription is over and no other process can subscribe to it.
Project Reactor provides a caching utility, CacheMono, which is non-blocking but can suffer from cache stampedes (parallel misses may trigger parallel calls).
Caffeine's AsyncCache is a better fit here: the first lookup with key "K" results in a cache miss and returns a CompletableFuture of the API call, while a second lookup with the same key "K" gets the same CompletableFuture object.
The returned future can be converted to a Mono with Mono.fromFuture(), and a Mono can be turned back into a future with toFuture().
public Mono<ResponseData> lookupAndWrite(AsyncCache<String, ResponseData> cache, String key) {
    return Mono.defer(() ->
            Mono.fromFuture(
                    cache.get(key, (searchKey, executor) -> {
                        CompletableFuture<ResponseData> future = callAPI(searchKey).toFuture();
                        // Evict the entry on failure so the next lookup retries the API call
                        return future.whenComplete((r, t) -> {
                            if (t != null) {
                                cache.synchronous().invalidate(key);
                            }
                        });
                    })));
}
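A minimal sketch of how the AsyncCache itself could be built with Caffeine; the 60-second TTL and the surrounding class are assumptions, not part of the answer, and lookupAndWrite from above is assumed to be in scope.
import java.util.concurrent.TimeUnit;
import com.github.benmanes.caffeine.cache.AsyncCache;
import com.github.benmanes.caffeine.cache.Caffeine;
import reactor.core.publisher.Mono;

class ResponseCache {

    // TTL chosen arbitrarily for this sketch
    private final AsyncCache<String, ResponseData> cache = Caffeine.newBuilder()
            .expireAfterWrite(60, TimeUnit.SECONDS)
            .buildAsync();

    public Mono<ResponseData> getResponse(String key) {
        // Concurrent callers with the same key share one CompletableFuture,
        // so the API is invoked only once per cache miss
        return lookupAndWrite(cache, key);
    }
}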
You can initialize the Mono in the constructor (assuming it doesn't depend on any request-time parameter). Using the cache operator will prevent multiple subscriptions to the source.
class MyService {
private final Mono<ResponseDto> response;
public MyService() {
response = fetchResponse()
.onErrorMap(Exception.class, (error) -> new CustomException())
.cache(
r -> Duration.ofSeconds(r.expiresIn()),
error -> Duration.ZERO,
() -> Duration.ZERO);
}
public Mono<ResponseDto> getResponse() {
return response;
}
}
If there is a dependency on request-time parameters, you should consider a custom caching solution.
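One possible shape for such a custom solution is to keep one cached Mono per parameter value in a ConcurrentMap. This is only a sketch: the String key, the parameterised fetchResponse(param) and the class name are assumptions, not part of the answer above.
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import reactor.core.publisher.Mono;

class MyParameterizedService {

    private final Map<String, Mono<ResponseDto>> responses = new ConcurrentHashMap<>();

    public Mono<ResponseDto> getResponse(String param) {
        // computeIfAbsent guarantees a single cached Mono per key;
        // cache() lets concurrent subscribers share the same in-flight request
        return responses.computeIfAbsent(param, p ->
                fetchResponse(p)
                        .onErrorMap(Exception.class, (error) -> new CustomException())
                        .cache(
                                r -> Duration.ofSeconds(r.expiresIn()),
                                error -> Duration.ZERO,
                                () -> Duration.ZERO));
    }
}
Note that entries are never removed from the map in this sketch; a real implementation would need an eviction strategy.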
You could use CacheMono from io.projectreactor.addons:reactor-extra to wrap a non-reactive cache implementation such as Guava Cache or a simple ConcurrentHashMap. It doesn't provide an "exactly-once" guarantee, and parallel requests could result in cache misses, but in many scenarios that should not be an issue.
Here is an example with Guava Cache:
public class GlobalSettingsCache {
private final GlobalSettingsClient globalSettingsClient;
private final Cache<String, GlobalSettings> cache;
public GlobalSettingsCache(GlobalSettingsClient globalSettingsClient, Duration cacheTtl) {
this.globalSettingsClient = globalSettingsClient;
this.cache = CacheBuilder.newBuilder()
.expireAfterWrite(cacheTtl)
.build();
}
public Mono<GlobalSettings> get(String tenant) {
return CacheMono.lookup(key -> Mono.justOrEmpty(cache.getIfPresent(key)).map(Signal::next), tenant)
.onCacheMissResume(() -> fetchGlobalSettings(tenant))
.andWriteWith((key, signal) -> Mono.fromRunnable(() ->
Optional.ofNullable(signal.get())
.ifPresent(value -> cache.put(key, value))));
}
private Mono<GlobalSettings> fetchGlobalSettings(String tenant) {
return globalSettingsClient.getGlobalSettings(tenant);
}
}
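Usage is then straightforward; the injected client instance and the 10-minute TTL below are assumptions for illustration:
// Wire the cache once (for example as a Spring bean) and reuse it
GlobalSettingsCache settingsCache = new GlobalSettingsCache(globalSettingsClient, Duration.ofMinutes(10));

Mono<GlobalSettings> settings = settingsCache.get("tenant-a");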
I have created a Mono with .fromCallable() in Spring Reactor (Java). I thought it would run the lambda I provided asynchronously and use Mono.empty() as the return value, so the execution of the entire stream would start off on a different thread.
I have 2 questions:
What are the execution order and the number of threads involved if I add .subscribeOn() to the chain of operations?
Is the approach I follow in the code below a good way to check whether the response has the correct state?
private final Scheduler myScheduler = Schedulers
.newParallel("reactive-pricefetcher", 10, true);
...
...
...
final Mono<Mono<Object>> callableMono = Mono
.fromCallable(() -> {
myHandler.updateCacheResponse(mutableObjList,
dealsRequest.getDealParameters()
);
return Mono.empty();
})
.subscribeOn(myScheduler);
callableMono.subscribe();
boolean stillInProgress = mutableObjList.stream()
.anyMatch(obj -> obj.getStatus() != DONE);
return DealsResponse.builder()
.complete(!stillInProgress)
.itemDeals(mutableObjList)
.build();
PS: I already know that using .subscribeOn() will move the entire stream chain onto a different thread when subscribe() is invoked.
I have the following code:
public Flux<Offer> getAllFilteredOffers(Map<String, String> searchParams) {
Flux<ProductProperties> productProperties = productPropertiesService.findProductPropertiesBySearchCriteria(searchParams);
Flux<Product> products = productService.findProductsByPropertyId(productProperties);
Flux<Product> productsByAvailability = productService.getAllProductsByAvailability(products, searchParams);
Flux<Offer> offers = offerRepository.findByPropertiesIds(productsByAvailability);
return offers;
}
This method:
productService.getAllProductsByAvailability(products, searchParams);
looks like:
public Flux<Product> getAllProductsByAvailability(Flux<Product> products,
Map<String, String> searchParams) {
How do I pass a List<Product> to getAllProductsByAvailability while keeping the operations non-blocking?
I've read that map is blocking and should be avoided.
Maybe something like this?
Flux
.just(productPropertiesService.findProductPropertiesBySearchCriteria(searchParams))
.flatMap(productProperties -> productService.findProductsByPropertyId(productProperties))
.flatMap(products -> productService.getAllProductsByAvailability(Flux.create(products)?????????, searchParams))
???
I'm not an expert in WebFlux; currently I'm trying to figure out how to handle problems like this: I have a Flux, but in a second step I need to pull some data out of the previous Flux<> object while keeping the stream non-blocking.
Thank you!
I don't know where you read that about map, but if you look at the official documentation for the WebFlux map operator, there is nothing about blocking; it simply applies a synchronous function to each item.
Use this code:
productPropertiesService.findProductPropertiesBySearchCriteria(searchParams)
.flatMap(productProperties -> productService.findProductsByPropertyId(productProperties))
.collectList() (1)
.flatMapMany(products -> productService.getAllProductsByAvailability(Flux.fromIterable(products), searchParams)) (2)
1) collect all elements into a List and convert the Flux<Product> into a Mono<List<Product>>
2) create a Flux from the List and provide it as the parameter; flatMapMany transforms the Mono back into a Flux
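Putting it together with the final repository call from the question, the whole method could look roughly like this (a sketch only, reusing the method names from the question):
public Flux<Offer> getAllFilteredOffers(Map<String, String> searchParams) {
    Flux<Product> productsByAvailability =
            productPropertiesService.findProductPropertiesBySearchCriteria(searchParams)
                    .flatMap(productProperties -> productService.findProductsByPropertyId(productProperties))
                    // Mono<List<Product>> once all products have been emitted
                    .collectList()
                    // back to Flux, handing the collected products over as a new Flux
                    .flatMapMany(products ->
                            productService.getAllProductsByAvailability(Flux.fromIterable(products), searchParams));

    return offerRepository.findByPropertiesIds(productsByAvailability);
}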
This works:
public Long getMaxSalary(List<CompletableFuture<EmployeeData>> futures) {
CompletableFuture<Void> allDoneFuture = CompletableFuture.allOf(futures.toArray(new CompletableFuture[futures.size()]));
CompletableFuture<List<EmployeeData>> employeeDataList = allDoneFuture.thenApply(v ->
futures.stream()
.map(f -> f.join())
.collect(Collectors.toList()));
List<EmployeeData> rc = employeeDataList.get();
OptionalLong op = rc.stream().mapToLong(r -> r.salary()).max();
return op.getAsLong();
}
Trying to make this more concise throws compiler errors in the IDE, and I cannot figure out what the error is. I am trying to combine it into one stream.
public Long getMaxSalary(List<CompletableFuture<EmployeeData>> futures) {
CompletableFuture<Void> allDoneFuture = CompletableFuture.allOf(futures.toArray(new CompletableFuture[futures.size()]));
return allDoneFuture.thenApply(v ->
futures.stream()
.map(f -> f.join())
.mapToLong(r -> r.salary())
.max()).getAsLong();
}
There is no point in using allOf and thenApply if you are going to block the current thread anyway with an immediate .get():
return futures.stream()
.map(CompletableFuture::join)
.mapToLong(EmployeeData::salary)
.max()
.getAsLong(); // or throw if futures is empty
The allOf approach would be useful if you wanted to return a CompletableFuture<Long> and let the clients of your method decide when, and on which thread, to await completion.
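For example, a non-blocking variant could look like this (a sketch; like getAsLong() above, it fails if the list of futures is empty):
public CompletableFuture<Long> getMaxSalaryAsync(List<CompletableFuture<EmployeeData>> futures) {
    return CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
            // runs only after every future has completed, so join() never blocks here
            .thenApply(v -> futures.stream()
                    .map(CompletableFuture::join)
                    .mapToLong(EmployeeData::salary)
                    .max()
                    .getAsLong());
}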
Try this out,
return allDoneFuture.thenApply(v -> futures.stream().map(f -> f.join())).get()
.mapToLong(empData -> empData.salary()).max().getAsLong();
Well, this sounds counter-intuitive to what reactive programming is about, but I am unable to work out a way to handle nulls/exceptions.
private static class Data {
public Mono<String> first() {
return Mono.just("first");
}
public Mono<String> second() {
return Mono.just("second");
}
public Mono<String> empty() {
return Mono.empty();
}
}
I understand that, fundamentally, unless a publisher publishes an event a subscriber will not act, so code like this works:
Data data = new Data();
data.first()
.subscribe(string -> Assertions.assertThat(string).isEqualTo("first"));
And if the first call returns empty, I can do this.
Data data = new Data();
data.empty()
.switchIfEmpty(data.second())
.subscribe(string -> Assertions.assertThat(string).isEqualTo("second"));
But how do I handle the case when both calls return empty (typically an exception scenario that needs to be propagated to the user)?
Data data = new Data();
data.empty()
.switchIfEmpty(data.empty())
.handle((string, sink) -> Objects.requireNonNull(string))
.block();
The handle callback is not called in the above example, since no event was published.
As JB Nizet pointed out, you can chain in a second switchIfEmpty with a Mono.error.
Or, if you're fine with a NoSuchElementException, you could chain in single(). It enforces a strong contract of exactly one element, otherwise propagating that standard exception.
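Both options in code, reusing the Data class from the question (a small sketch; the IllegalStateException is just an example):
Data data = new Data();

// Option 1: explicit error when every source is empty
data.empty()
    .switchIfEmpty(data.empty())
    .switchIfEmpty(Mono.error(new IllegalStateException("no value produced")))
    .block();

// Option 2: single() enforces exactly one element and otherwise
// propagates a NoSuchElementException
data.empty()
    .switchIfEmpty(data.empty())
    .single()
    .block();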