Is it possible to rewrite such code in Java 8 streams? - java-8

I was trying to adapt the following code to Java 8 streams:
public boolean isProcessionRestricted(CommonMessage message) {
    if (message.getClass() == BonusMessage.class) {
        log.debug("Starting validating BonusMessage: '{}'", message);
        BonusMessage bonusMessage = (BonusMessage) message;
        Optional<BonusTriggerConfig> config = bonusTriggerConfigRepository.getCached();
        if (config.isPresent()) {
            BonusTriggerConfig bonusTriggerConfig = config.get();
            List<BonusRewardConfig> rewardConfigs = bonusTriggerConfig.getRewardConfigs();
            if (!rewardConfigs.isEmpty()) {
                return rewardConfigs.stream()
                        .map(BonusRewardConfig::getBonusTypeId)
                        .noneMatch(bonusTypeId -> bonusTypeId == bonusMessage.getBonusTypeId());
            } else {
                return false;
            }
        } else {
            return false;
        }
    }
    return false;
}
but I ran into a problem checking whether the collection is empty inside the stream pipeline. The "streamiest" version I could come up with looks like this:
@Override
public boolean isProcessionRestricted(CommonMessage message) {
    if (message.getClass() == BonusMessage.class) {
        log.debug("Starting validating BonusMessage: '{}'", message);
        BonusMessage bonusMessage = (BonusMessage) message;
        return bonusTriggerConfigRepository.getCached()
                .map(bonusTriggerConfig -> {
                    List<BonusRewardConfig> rewardConfigs = bonusTriggerConfig.getRewardConfigs();
                    return !rewardConfigs.isEmpty() && rewardConfigs.stream()
                            .map(BonusRewardConfig::getBonusTypeId)
                            .noneMatch(bonusTypeId -> bonusTypeId == bonusMessage.getBonusTypeId());
                }).orElse(false);
    }
    return false;
}
but I still don't like it.

You can use Optional#filter to filter out the empty collection instead. The emptiness check matters because noneMatch on an empty stream vacuously returns true, which would report the message as restricted when there are no reward configs at all; the filter preserves the original "empty means not restricted" behavior. For example:
return bonusTriggerConfigRepository.getCached()
        .map(bonusTriggerConfig -> bonusTriggerConfig.getRewardConfigs())
        // v--- filter the empty configs out
        .filter(rewardConfigs -> !rewardConfigs.isEmpty())
        .map(rewardConfigs -> rewardConfigs.stream()
                .map(BonusRewardConfig::getBonusTypeId)
                .noneMatch(bonusTypeId -> bonusTypeId == bonusMessage.getBonusTypeId())
        )
        .orElse(false);

Regardless of @Joe C's comments, I'm not sure whether it would be better to move this question to Code Review. But I learned something from the OP about how to write concise code with the Java 8 Stream API. First, here is my first try with StreamEx (I didn't try the native stream API because it's too boring to me...):
public boolean isProcessionRestricted(CommonMessage message) {
    return StreamEx.of(message)
            .select(BonusMessage.class)
            .peek(m -> log.debug("Starting validating BonusMessage: '{}'", m))
            .anyMatch(m -> bonusTriggerConfigRepository.getCached()
                    .map(btc -> StreamEx.of(btc.getRewardConfigs())
                            .noneMatch(brc -> brc.getBonusTypeId() == m.getBonusTypeId()))
                    .orElse(false));
}
(If there are any compile errors, please help me update my answer.)
But the logic looks too complicated to me. Here is the code I might write if I were the programmer:
public boolean isProcessionRestricted(CommonMessage message) {
    if (!(message instanceof BonusMessage && bonusTriggerConfigRepository.getCached().isPresent())) {
        return false;
    }
    log.debug("Starting validating BonusMessage: '{}'", message);
    int restrictedBonusTypeId = ((BonusMessage) message).getBonusTypeId();
    List<BonusRewardConfig> rewardConfigs = bonusTriggerConfigRepository.getCached().get().getRewardConfigs();
    return rewardConfigs.size() > 0 && rewardConfigs.stream()
            .noneMatch(brc -> brc.getBonusTypeId() == restrictedBonusTypeId);
}
What I learned or suggest:
Forget the Stream API. It just looks cool, but it's not that cool. Writing concise code, with or without the Stream API, is what's really cool.
Although I like lambdas and the Stream API, writing concise code with the Stream API is much more challenging to me than with for-loops/if/while. It's better to go over the Stream API again and again, and do more practice, before you start to use it in a real product.
Always prefer StreamEx. Sometimes it's really boring to write code with the native Stream API; StreamEx provides a lot of shorter and more convenient ways to accomplish your tasks, as the sketch below shows.
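For instance, a minimal sketch of the kind of shortcut StreamEx offers, using the classes from the question (the bonus type id 42 is an arbitrary example value): select() replaces the usual filter-plus-cast pair of the native API.

// Native Stream API: filter by type, then cast.
boolean nativeWay = Stream.of(message)
        .filter(BonusMessage.class::isInstance)
        .map(BonusMessage.class::cast)
        .anyMatch(m -> m.getBonusTypeId() == 42);

// StreamEx: select() does the filter and the cast in one step.
boolean streamExWay = StreamEx.of(message)
        .select(BonusMessage.class)
        .anyMatch(m -> m.getBonusTypeId() == 42);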

Related

Webflux Reactor - Checking if all items in the original Flux were successful

I currently have this Reactor code where I'm not sure I'm doing it the idiomatic way.
My requirements are that, for a list of accountIds, I make 2 requests which are done one after the other: one to delete the account data, the other to trigger an event afterwards. The second request is only made if the first one succeeds.
At the end, I would like to know whether all of the sets of requests were successful. I have achieved this with the code below.
Flux.fromIterable(List.of("accountId", "someOtherAccountId"))
        .flatMap(accountId -> someWebclient.deleteAccountData(accountId)
                .doOnSuccess(response -> log.info("Delete account data success"))
                .onErrorResume(e -> {
                    log.info("Delete account data failure");
                    return Mono.empty();
                })
                .flatMap(deleteAccountDataResponse -> {
                    return eventServiceClient.triggerEvent("deleteAccountEvent")
                            .doOnSuccess(response -> log.info("Delete account event success"))
                            .onErrorResume(e -> {
                                log.info("Delete account event failure");
                                return Mono.empty();
                            });
                }))
        .count()
        .subscribe(items -> {
            if (items.intValue() == accountIdsToForget.size()) {
                log.info("All accountIds deleted and events triggered successfully");
            } else {
                log.info("Not all accountIds deleted and events triggered successfully");
            }
        });
Is there a better way to achieve this?
As the webclients can return errors for 4xx and 5xx, I am having to swallow those up with onErrorResume in order to prevent the error from bubbling up. Similarly, the only way I have been able to capture whether all of the accountIds have been processed is by checking the size of the Flux against the size of the List it was started with.
Disclaimer: it is a little subjective how to provide a better solution. In this answer, I will provide my personal choice of error handling, which, in my opinion, provides the best extensibility and readability.
I would model a result/report object (kind of like Either in the functional paradigm), so that each success or error is sent as a "next signal" downstream.
It requires a little more code/boilerplate, but the benefit is that we end up with a flow of successes and failures produced on the fly. It allows errors to be detected early, and eases both error recovery and pipeline extensibility (for example, it is then very easy to switch between fail-fast and error-silencing strategies, or to build complex reports from upstream results, etc.).
Let's try to apply this to your example. For simplicity, I will mock the deletion and notification services with two methods that return an empty result on success:
static Mono<Void> delete(String account) {
    if (account.isBlank()) return Mono.error(new IllegalArgumentException("EMPTY ACCOUNT !"));
    else return Mono.empty();
}

static Mono<Void> notify(String event) {
    if (event.isBlank()) return Mono.error(new IllegalArgumentException("UNKNOWN EVENT !"));
    return Mono.empty();
}
I would take these steps:
Create result model:
sealed interface Result { String accountId(); }
sealed interface Error extends Result { Throwable cause(); }
record DeletionError(String accountId, Throwable cause) implements Error {}
record NotifyError(String accountId, Throwable cause) implements Error {}
record Success(String accountId) implements Result {}
Then, we can prepare our pipeline that will wrap our delete and notify operations to make them produce result objects:
static Flux<Result> deleteAndNotify(Flux<String> accounts) {
    Function<String, Mono<Result>> safeDelete = account ->
            delete(account)
                    .<Result>thenReturn(new Success(account))
                    .onErrorResume(err -> Mono.just(new DeletionError(account, err)));

    Function<Result, Mono<Result>> safeNotify = deletionResult -> deletionResult instanceof Success
            ? notify("deleteAccountEvent")
                    .thenReturn(deletionResult)
                    .onErrorResume(err -> Mono.just(new NotifyError(deletionResult.accountId(), err)))
            : Mono.just(deletionResult);

    return accounts.flatMap(safeDelete)
                   .flatMap(safeNotify);
}
With the code above, you can already receive errors as they arrive. A simple program:
var results = deleteAndNotify(Flux.just("a1", "a2", " ", "a3"));
results.subscribe(System.out::println);
prints:
Success[accountId=a1]
Success[accountId=a2]
DeletionError[accountId= , cause=java.lang.IllegalArgumentException: EMPTY ACCOUNT !]
Success[accountId=a3]
Now, it becomes very simple to adapt your flow of control:
if we want to keep track of errors only, we just have to chain a simple filter: results.filter(it -> it instanceof Error)
To fail-fast, just map error result to a real error: results.flatMap(result -> result instanceof Error err ? Mono.error(err.cause()) : Mono.just(result))
You want to get an idea of the flow throughput? Just time it: results.timed()
etc.
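Put together, a brief sketch of the first two adaptations from the list above (using the Result types defined earlier):

// Keep only the failures.
Flux<Result> failuresOnly = results.filter(it -> it instanceof Error);

// Fail fast: surface the first error as a real onError signal.
Flux<Result> failFast = results.flatMap(result -> result instanceof Error err
        ? Mono.error(err.cause())
        : Mono.just(result));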
And if you want to count, you can now directly count errors and successes on the fly. It provides a few advantages:
You are not forced to know the number of accounts to delete in advance to verify if any error happened
You can have a live monitoring of the failed/succeeded operations
We can program the counting like this:
record Count(long success, long deleteFailed, long notifyFailed) {
    Count() { this(0, 0, 0); }
    Count newSuccess() { return new Count(success + 1, deleteFailed, notifyFailed); }
    Count newDeletionFailure() { return new Count(success, deleteFailed + 1, notifyFailed); }
    Count newNotifyFailure() { return new Count(success, deleteFailed, notifyFailed + 1); }
}
var counting = results.scanWith(Count::new, (count, result) -> switch (result) {
    case Success s -> count.newSuccess();
    case DeletionError de -> count.newDeletionFailure();
    case NotifyError ne -> count.newNotifyFailure();
});
Subscribing to this counting flow using the same input accounts as above would produce this kind of output:
Count[success=0, deleteFailed=0, notifyFailed=0]
Count[success=1, deleteFailed=0, notifyFailed=0]
Count[success=2, deleteFailed=0, notifyFailed=0]
Count[success=2, deleteFailed=1, notifyFailed=0]
Count[success=3, deleteFailed=1, notifyFailed=0]
If you want only a total count, then either use counting.last() or replace scanWith with the reduceWith operator, as sketched below.
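A minimal sketch of the reduceWith variant, reusing the same accumulator as the scanWith version above:

// Emits a single Count with the final totals instead of one Count per step.
Mono<Count> totals = results.reduceWith(Count::new, (count, result) -> switch (result) {
    case Success s -> count.newSuccess();
    case DeletionError de -> count.newDeletionFailure();
    case NotifyError ne -> count.newNotifyFailure();
});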
I hope this answer helps you better model pipelines/DAGs/flows of operations.

Spring webflux with multiple sequential API call and convert to flux object without subscribe and block

I am working on Spring Reactive and need to make multiple sequential calls to another REST API using WebClient. The issue is that I am able to make the calls to the other REST API, but I am not able to read the response without subscribe or block. I can't use subscribe or block due to non-blocking reactive programming. Is there any way I can merge the responses while reading them and send the result as a Flux?
Below is the piece of code where I am stuck.
private Flux<SeasonsDto> getSeasonsInfo(List<HuntsSeasonsMapping> l2, String seasonsUrl) {
    for (HuntsSeasonsMapping s : l2) {
        List<SeasonsJsonDto> list = huntsSeasonsProcessor.appendSeaosonToJson(s.getSeasonsRef());
        for (SeasonsJsonDto sjdto : list) {
            Mono<SeasonsDto> mono = new SeasonsAdapter("http://localhost:8087/").callToSeasonsAPI(sjdto.getSeasonsRef());
            // Not able to read the stream without subscribe and return it as a Flux
        }
    }
}

public Mono<SeasonsDto> callToSeasonsAPI(Long long1) {
    LOGGER.debug("Seasons API call");
    return this.webClient.get().uri("hunts/seasonsInfo/" + long1)
            .header("X-GoHunt-LoggedIn-User", "a4d4b427-c716-458b-9bb5-9917b6aa30ff")
            .retrieve().bodyToMono(SeasonsDto.class);
}
Please help to resolve this.
You need to combine the reactive streams using operators such as map, flatMap and concatMap.
private Flux<SeasonsDto> getSeasonsInfo(List<HuntsSeasonsMapping> l2, String seasonsUrl) {
    List<Mono<SeasonsDto>> monos = new ArrayList<>();
    for (HuntsSeasonsMapping s : l2) {
        List<SeasonsJsonDto> list = huntsSeasonsProcessor.appendSeaosonToJson(s.getSeasonsRef());
        for (SeasonsJsonDto sjdto : list) {
            Mono<SeasonsDto> mono = new SeasonsAdapter("http://localhost:8087/").callToSeasonsAPI(sjdto.getSeasonsRef());
            monos.add(mono);
        }
    }
    return Flux.fromIterable(monos).concatMap(mono -> mono);
}
This can further be improved using the Stream API, which I suggest you look into, but I didn't want to change too much of your existing code. A sketch of what that could look like is below.
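For illustration, a hedged sketch of the same method with the nested for-loops replaced by a stream pipeline (assuming the adapter can be created once and reused, and java.util.stream.Collectors is imported):

private Flux<SeasonsDto> getSeasonsInfo(List<HuntsSeasonsMapping> l2, String seasonsUrl) {
    SeasonsAdapter adapter = new SeasonsAdapter("http://localhost:8087/");
    // Flatten mappings -> season DTOs -> Monos, then concatenate them sequentially.
    List<Mono<SeasonsDto>> monos = l2.stream()
            .flatMap(s -> huntsSeasonsProcessor.appendSeaosonToJson(s.getSeasonsRef()).stream())
            .map(sjdto -> adapter.callToSeasonsAPI(sjdto.getSeasonsRef()))
            .collect(Collectors.toList());
    return Flux.fromIterable(monos).concatMap(mono -> mono);
}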
I have figured out how to do this. I have completely rewritten the code and made it reactive, which means all the for-loops have been removed. Below is the code, which may help others.
public Flux<SeasonsDto> getAllSeasonDetails(String uuid) {
    return hunterRepository.findByUuidAndIsPrimaryAndDeleted(uuid, true, false).next().flatMapMany(h1 -> {
        return huntsMappingRepository.findByHunterIdAndDeleted(h1.getId(), false).flatMap(k -> {
            return huntsMappingRepository.findByHuntReferrenceIdAndDeleted(k.getHuntReferrenceId(), false)
                    .flatMap(l2 -> {
                        return huntsSeasonsProcessor.appendSeaosonToJsonFlux(l2.getSeasonsDtl()).flatMap(fs -> {
                            // inner parameter renamed from k to sd: a lambda parameter may not shadow the outer k
                            return seasonsAdapter.callSeasonsAPI(fs.getSeasonsRef(), h1.getId(), uuid).map(sd -> {
                                return sd;
                            });
                        });
                    });
        });
    });
}

Download and save file from ClientRequest using ExchangeFunction in Project Reactor

I have a problem with correctly saving a file after its download completes in Project Reactor.
class HttpImageClientDownloader implements ImageClientDownloader {

    private final ExchangeFunction exchangeFunction;

    HttpImageClientDownloader() {
        this.exchangeFunction = ExchangeFunctions.create(new ReactorClientHttpConnector());
    }

    @Override
    public Mono<File> downloadImage(String url, Path destination) {
        ClientRequest clientRequest = ClientRequest.create(HttpMethod.GET, URI.create(url)).build();
        return exchangeFunction.exchange(clientRequest)
                .map(clientResponse -> clientResponse.body(BodyExtractors.toDataBuffers()))
                //.flatMapMany(clientResponse -> clientResponse.body(BodyExtractors.toDataBuffers()))
                .flatMap(dataBuffer -> {
                    AsynchronousFileChannel fileChannel = createFile(destination);
                    return DataBufferUtils
                            .write(dataBuffer, fileChannel, 0)
                            .publishOn(Schedulers.elastic())
                            .doOnNext(DataBufferUtils::release)
                            .then(Mono.just(destination.toFile()));
                });
    }

    private AsynchronousFileChannel createFile(Path path) {
        try {
            return AsynchronousFileChannel.open(path, StandardOpenOption.CREATE);
        } catch (Exception e) {
            throw new ImageDownloadException("Error while creating file: " + path, e);
        }
    }
}
So my questions are:
Is DataBufferUtils.write(dataBuffer, fileChannel, 0) blocking?
What about when the disk is slow?
And a second question about what happens when an ImageDownloadException occurs:
in doOnNext I want to release the given data buffer; is that a good place for this kind of operation?
I think this line could also be blocking:
.map(clientResponse -> clientResponse.body(BodyExtractors.toDataBuffers()))
Here's another (shorter) way to achieve that:
Flux<DataBuffer> data = this.webClient.get()
        .uri("/greeting")
        .retrieve()
        .bodyToFlux(DataBuffer.class);

Path file = Files.createTempFile("spring", null);
WritableByteChannel channel = Files.newByteChannel(file, StandardOpenOption.WRITE);

Mono<File> result = DataBufferUtils.write(data, channel)
        .map(DataBufferUtils::release)
        .then(Mono.just(file));
Now the DataBufferUtils::write operations are not blocking because they use non-blocking IO with channels. Writing to such channels means it'll write whatever it can to the output buffer (i.e. it may write all of the DataBuffer or just part of it).
Using Flux::map or Flux::doOnNext is the right place to do that. But you're right: if an error occurs, you're still responsible for releasing the current buffer (and all the remaining ones). There might be something we can improve here in Spring Framework; please keep an eye on SPR-16782.
I don't see how your last sample shows anything blocking: all methods return reactive types and none are doing blocking I/O.

Retry Logic in case of failure - Spring Reactor

How do I unit test retryWhen?
public Mono<List<Transaction>> get(String id) {
    return client // 'class' in the original; renamed here because class is a reserved word
            .get(id).log()
            .retryWhen(throwableFlux -> throwableFlux
                    .zipWith(Flux.range(min, max + 1), (error, retry) -> new RetryException(error, retry))
                    .flatMap(retryException -> {
                        if (retryException.getRetries() == max + 1) {
                            throw Exceptions.propagate(retryException.getThrowable());
                        } else if (isClientException(retryException.getThrowable())) {
                            return Flux.empty();
                        }
                        return Mono.delay(Duration.ofMinutes(new Double(multiplier * retryException.getRetries()).longValue()));
                    }));
}
How do I use StepVerifier to test this method?
Another way to implement the retry logic:
throwableFlux.takeWhile(throwable -> !isClientException(throwable))
        .flatMap(e -> {
            if (count.get() >= max + 1) {
                throw Exceptions.propagate(e);
            }
            LOG.info("Retrying in..");
            return Mono.delay(Duration.ofMinutes(new Double(multiplier * count.getAndAdd(1)).longValue()));
        });
Do you mean testing the RetryHelper applied through retryWhen?
You can certainly use StepVerifier to test such a retryWhen-containing sequence, yes. You can also check the number of (re)subscriptions by using an AtomicLong coupled to a doOnSubscribe just before the retryWhen (it will help assert the number of subscriptions made to the source being retried). A sketch of that idea follows.
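A minimal sketch of that approach, assuming a 'source' publisher and a 'retryFunction' standing in for the upstream call and the retryWhen function from the method under test (both names are illustrative, not from the original code); withVirtualTime makes the minute-long retry delays elapse instantly:

AtomicLong subscriptions = new AtomicLong();

StepVerifier.withVirtualTime(() ->
                source
                        // counts every (re)subscription to the retried source
                        .doOnSubscribe(s -> subscriptions.incrementAndGet())
                        .retryWhen(retryFunction))
        .expectSubscription()
        .thenAwait(Duration.ofMinutes(10)) // virtual time: skips the retry delays
        .verifyError();

// expect one initial subscription plus one per retry, e.g. max + 1 in total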
Note that we just added such a builder utility for retryWhen and repeatWhen, but in the reactor-extra project (currently in 3.1.0.BUILD-SNAPSHOT).
This is how I was able to test this code:
FirstStep.expectSubscription()
        .expectNoEvent(java.time.Duration.ofMinutes(1))
        .expectNoEvent(Duration.ofMinutes(3))
        .verifyError();
We could have used thenAwait(Duration.ofDays(1)) above, but expectNoEvent has the benefit of guaranteeing that nothing happened earlier than it should have.
http://projectreactor.io/docs/core/snapshot/reference/docs/index.html#error.handling

Spring Integration and returning schema validation errors

We are using Spring Integration to process a JSON payload passed into a RESTful endpoint. As part of this flow we are using a filter to validate the JSON:
.filter(schemaValidationFilter, s -> s
        .discardFlow(f -> f
                .handle(message -> {
                    throw new SchemaValidationException(message);
                }))
)
This works great. However, if the validation fails we want to capture the parsing error and return that to the user so they can act on the error. Here is the overridden accept method in the SchemaValidationFilter class:
@Override
public boolean accept(Message<?> message) {
    Assert.notNull(message);
    Assert.isTrue(message.getHeaders().containsKey(TYPE_NAME));
    String historyType = (String) message.getHeaders().get(TYPE_NAME);
    JSONObject payload = (JSONObject) message.getPayload();
    String jsonString = payload.toJSONString();
    try {
        ProcessingReport report = schemaValidator.validate(historyType, payload);
        return report.isSuccess();
    } catch (IOException | ProcessingException e) {
        throw new MessagingException(message, e);
    }
}
What we have done is, in the catch block, throw a MessagingException, which seems to solve the problem. However, this seems to break what a filter should do (simply return true or false).
Is there a best practice for passing the error details from the filter to the client? Is the filter the right solution for this use case?
Thanks for your help!
John
I'd say you are going the correct way. Please refer to the XmlValidatingMessageSelector; your JsonValidatingMessageSelector should be similar and must follow the same design.
Since we have a throwExceptionOnRejection option, we can always be sure that throwing an Exception instead of just returning true/false is correct behavior.
What Gary says is good, too, but according to the existing logic in that MessageSelector impl we can go ahead with the same approach and continue to use .filter(), though of course without .discardFlow(), because we won't send the invalid message to the discardChannel. A sketch of what that wiring could look like follows.
When your JsonValidatingMessageSelector is ready, feel free to contribute it back to the Framework!
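For illustration, a minimal sketch of that wiring in the Java DSL. JsonValidatingMessageSelector is hypothetical here (it doesn't exist in the Framework yet, which is the point of this answer); throwExceptionOnRejection is the existing option on the filter endpoint spec:

@Bean
public IntegrationFlow validatingFlow(JsonValidatingMessageSelector jsonValidator) {
    return f -> f
            // the filter itself raises the validation error, so no discardFlow is needed
            .filter(jsonValidator, e -> e.throwExceptionOnRejection(true))
            .channel("validatedInput");
}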
It's probably more correct to do the validation in a <service-activator/>...
public Message<?> validate(Message<?> message) {
    ...
    try {
        ProcessingReport report = schemaValidator.validate(historyType, payload);
        return message;
    }
    catch (IOException | ProcessingException e) {
        throw new MessagingException(message, e);
    }
}
...since you're never really filtering.
