How to segregate data from Flux without the underlying call again? - spring-boot

I am working with the reactive spring-boot framework and I have a flow like this in my application (pseudocode):
Flux<String> keys; // getting flux of keys from some part of code
Flux<RedisData> response = getDataFromRedisForKeys(keys);
Flux<RedisData> responseTypeA = filterATypeResponseFromRedisDataFlux(response);
Flux<RedisData> responseTypeB = filterBTypeResponseFromRedisDataFlux(response);
Flux<RedisData> responseTypeC = filterCTypeResponseFromRedisDataFlux(response);
Flux<RedisData> responseTypeD = filterDTypeResponseFromRedisDataFlux(response);
Now, when I run flatMap operations on the four fluxes after filtering, I see that the data is fetched from Redis four times. What I want is to fetch it once, reactively, and segregate it without blocking.

getDataFromRedisForAllKeys returns the Redis data corresponding to each incoming key; the segregation is up to you.
You just need to fetch the data from Redis once for the keys you need, and then filter or subscribe.
You can try using the subscribe method and process the data inside it:
Flux<RedisData> result = getDataFromRedisForAllKeys(keys);
result.subscribe(data -> {
    switch (data.key) {
        case "A":
            // do something
            break;
        case "B":
            // do something
            break;
        case "C":
            // do something
            break;
        case "D":
            // do something
            break;
    }
});
Or use the Flux filter method
Flux<RedisData> responseTypeA = result.filter(redisData -> "A".equals(redisData.key));
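One caveat worth adding: each filter creates a separate subscription to result, so a cold Flux would still hit Redis once per filtered stream. Adding cache() (or share()) before filtering makes all downstream consumers replay a single upstream fetch. A minimal runnable sketch, using a hard-coded Flux of strings as a stand-in for the Redis call:

```java
import java.util.concurrent.atomic.AtomicInteger;

import reactor.core.publisher.Flux;

public class CacheDemo {
    public static void main(String[] args) {
        AtomicInteger subscriptions = new AtomicInteger();
        // Stand-in for getDataFromRedisForAllKeys(keys); the counter
        // tracks how many times the upstream source is subscribed.
        Flux<String> response = Flux.just("A:1", "B:2", "A:3", "C:4")
                .doOnSubscribe(s -> subscriptions.incrementAndGet())
                .cache(); // replay the same data to every downstream subscriber

        Flux<String> typeA = response.filter(d -> d.startsWith("A"));
        Flux<String> typeB = response.filter(d -> d.startsWith("B"));

        System.out.println(typeA.collectList().block()); // [A:1, A:3]
        System.out.println(typeB.collectList().block()); // [B:2]
        System.out.println(subscriptions.get());         // 1 -> fetched once
    }
}
```

Note that cache() buffers everything it sees, which is fine for a bounded result set per request; for unbounded streams, publish().refCount() or groupBy may be a better fit.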

Related

gRPC endpoint that sends initial data and afterwards stream of data

I want to define a gRPC endpoint, that when called, returns some initial data and afterwards a stream of data. For example in a game it could create a lobby and return some initial data about the lobby to the creator and afterwards stream an event every time a player joins.
I want to achieve something like this:
message LobbyData {
  string id = 1;
}
message PlayerJoin {
  string id = 2;
}
service LobbyService {
  rpc OpenLobbyAndListenToPlayerJoins(Empty) returns (LobbyData, stream PlayerJoin);
}
Unfortunately this is not possible, so I have 2 options:
Option 1 (not what I want)
Create two separate RPCs and call them sequentially from the client:
service LobbyService {
  rpc OpenLobby(Empty) returns (LobbyData);
  rpc ListenToPlayerJoins(LobbyData) returns (stream PlayerJoin);
}
This however creates the problem that players can possibly join the lobby before the second RPC from the client (ListenToPlayerJoins) reaches the server. So on the server we would need additional logic to open the lobby only after the ListenToPlayerJoins RPC from the creator has arrived.
Option 2 (also not what I want)
Use a single RPC with a sum type:
message LobbyDataOrPlayerJoin {
  oneof type {
    LobbyData lobby_data = 1;
    PlayerJoin player_join = 2;
  }
}
service LobbyService {
  rpc OpenLobbyAndListenToPlayerJoins(Empty) returns (stream LobbyDataOrPlayerJoin);
}
This would allow for just one RPC, where the first element of the stream is a LobbyData object and all subsequent elements are PlayerJoins. What is not nice about this is that all streamed events after the first one are PlayerJoins, but the client still receives them as the sum type LobbyDataOrPlayerJoin. Which is not clean.
Both options seem to me like workarounds. Is there a real solution to this problem?

using forkJoin multiple times

I am working on a project where our client generates almost 500 requests simultaneously. I am using forkJoin to get all the responses as an array.
But after 40-50 requests the server blocks the requests or returns only errors. I have to split these 500 requests into chunks of 10, loop over the chunks array, call forkJoin for each chunk, and convert the observable to a Promise.
Is there any way to get rid of this for loop over the chunks?
If I understand your question right, I think you are in a situation similar to this:
const clientRequestParams = [params1, params2, ..., params500]
const requestAsObservables = clientRequestParams.map(params => {
  return myRequest(params)
})
forkJoin(requestAsObservables).subscribe(
  responses => {
    // do something with the array of responses
  }
)
and the problem is probably that the server cannot handle so many requests in parallel.
If my understanding is right and, as you write, there is a limit of 10 concurrent requests, you could try the mergeMap operator, specifying also its concurrent parameter.
A solution could therefore be the following
const clientRequestParams = [params1, params2, ..., params500]
// use the from function from rxjs to create a stream of params
from(clientRequestParams).pipe(
  mergeMap(params => {
    return myRequest(params)
  }, 10) // 10 is the concurrent parameter, which limits the number of
         // concurrent requests in flight to 10
).subscribe(
  responseNotification => {
    // do something with the response that you get from one invocation
    // of the service on the server
  }
)
If you adopt this strategy, you limit the concurrency but you are not guaranteed the order in the sequence of the responses. In other words, the second request can return before the first one has returned. So you need to find some mechanism to link the response to the request. One simple way would be to return not only the response from the server, but also the params which you used to invoke that specific request. In this case the code would look like this
const clientRequestParams = [params1, params2, ..., params500]
// use the from function from rxjs to create a stream of params
from(clientRequestParams).pipe(
  mergeMap(params => {
    return myRequest(params).pipe(
      map(resp => {
        return {resp, params}
      })
    )
  }, 10)
).subscribe(
  responseNotification => {
    // do something with the response that you get from one invocation
    // of the service on the server
  }
)
With this implementation you would create a stream which notifies both the response received from the server and the params used in that specific invocation.
You can also adopt other strategies, e.g. returning the response together with a sequence number identifying it, or something else.

Spring Webflux: Extract value from Mono

I am new to spring webflux and am trying to perform some arithmetic on the values of two monos. I have a product service that retrieves account information by calling an account service via webClient. I want to determine if the current balance of the account is greater than or equal to the price of the product.
Mono<Account> account = webClientBuilder.build().get().uri("http://account-service/user/accounts/{userId}/",userId)
.retrieve().bodyToMono(Account.class);
//productId is a path variable on method
Mono<Product> product =this.productService.findById(productId);
When I try to block the stream I get an error
block()/blockFirst()/blockLast() are blocking, which is not supported in thread reactor-http-nio-2
// Causes error
Double accountBalance = account.map(a -> a.getBalance()).block();
Double productPrice = product.map(p -> p.getPrice()).block();
// Find difference, send response accordingly...
Is this the correct approach, or is there another, better way to achieve this? I was also thinking of something along the lines of:
Mono<Double> accountBalance = account.map(a -> a.getBalance());
Mono<Double> productPrice = product.map(p -> p.getPrice());
Mono<Double> res = accountBalance.zipWith(productPrice, (b, p) -> b - p);
//Something after this.....
You can't call the block method on a main reactor thread; it is forbidden. block may work when the Mono is published on some other thread, but that's not the case here.
Basically your approach with zipping two monos is correct. You can create some helper method to do calculation on them. In your case it may look like:
public boolean isAccountBalanceGreater(Account acc, Product prd) {
    return acc.getBalance() >= prd.getPrice();
}
And then in your Mono stream you can pass method reference and make it more readable.
Mono<Boolean> result = account.zipWith(product, this::isAccountBalanceGreater);
The question is what you want to do with that information later. If you just want to return true or false to your controller, that's fine. Otherwise you may need some other mappings, zippings, etc.
Update
return account.zipWith(product, this::createResponse);
...
ResponseEntity<Product> createResponse(Account acc, Product prd) {
    int responseCode = isAccountBalanceGreater(acc, prd) ? 200 : 500;
    return ResponseEntity.status(responseCode).body(prd);
}
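The zipWith combination can be tried in isolation. A minimal runnable sketch with hard-coded values standing in for the account-service and product-service calls (block() is acceptable here only because this is a plain main method, not a reactive request thread):

```java
import reactor.core.publisher.Mono;

public class ZipDemo {
    static boolean isAccountBalanceGreater(double balance, double price) {
        return balance >= price;
    }

    public static void main(String[] args) {
        // Stand-ins for the two remote lookups from the question.
        Mono<Double> accountBalance = Mono.just(150.0);
        Mono<Double> productPrice = Mono.just(99.99);

        // Combine both values once each Mono resolves; nothing
        // executes until the terminal block() call below.
        Mono<Boolean> result = accountBalance.zipWith(productPrice,
                ZipDemo::isAccountBalanceGreater);

        System.out.println(result.block()); // true
    }
}
```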

Reactive Redis does not continually publish changes to the Flux

I am trying to get live updates on my redis ordered list without success.
It seems like it fetches all the items and just ends on the last item.
I would like the client to keep getting updates when a new order arrives in my ordered list.
What am I missing?
This is my code:
@RestController
class LiveOrderController {
    @Autowired
    lateinit var redisOperations: ReactiveRedisOperations<String, LiveOrder>

    @GetMapping(produces = [MediaType.TEXT_EVENT_STREAM_VALUE], value = "/orders")
    fun getLiveOrders(): Flux<LiveOrder> {
        val zops = redisOperations.opsForZSet()
        return zops.rangeByScore("orders", Range.unbounded())
    }
}
There is no such feature in Redis. Retrieving a sorted set, even through the reactive API, just takes a snapshot: the Flux completes once the current members have been emitted. To keep receiving updates you need a subscription instead.
If you opt in for keyspace notifications like this (K - enable keyspace notifications, z - include zset commands):
config set notify-keyspace-events Kz
And subscribe to them in your service like this:
ReactiveRedisMessageListenerContainer reactiveRedisMessages;
// ...
reactiveRedisMessages.receive(new PatternTopic("__keyspace@0__:orders"))
    .map(m -> {
        System.out.println(m);
        return m;
    })
    <further processing>
You would see messages like this: PatternMessage{channel=__keyspace@0__:orders, pattern=__keyspace@0__:orders, message=zadd}. It will notify you that something has been added, and you can react to it somehow: get the full set again, or only some part (head/tail). You might even remember the previous set, get the new one and send the diff.
But what I would really suggest is rearchitecting the flow to use Redis Pub/Sub functionality directly. For example: the publisher service, instead of calling zadd directly, will call eval, which issues 2 commands: zadd orders 1 x and publish orders "1:x" (any custom message you want, maybe JSON).
Then in your code you will subscribe to your custom topic like this:
return reactiveRedisMessages.receive(new PatternTopic("orders"))
    .map(LiveOrder::fromNotification);

Enrich each existing value in a cache with the data from another cache in an Ignite cluster

What is the best way to update a field of each existing value in an Ignite cache with data from another cache in the same cluster, in the most performant way (tens of millions of records, about a kilobyte each)?
Pseudo code:
try (mappings = getCache("mappings")) {
    try (entities = getCache("entities")) {
        entities.foreach((key, entity) -> entity.setInternalId(mappings.getValue(entity.getExternalId())));
    }
}
I would advise using compute to send a closure to all the nodes in the cache topology. Then, on each node, you would iterate through the local primary set and do the updates. Even with this approach you would still be better off batching the updates and issuing them with a putAll call (or maybe using IgniteDataStreamer).
NOTE: for the example below, it is important that keys in "mappings" and "entities" caches are either identical or colocated. More information on collocation is here:
https://apacheignite.readme.io/docs/affinity-collocation
The pseudo code would look something like this:
ClusterGroup cacheNodes = ignite.cluster().forCache("mappings");
IgniteCompute compute = ignite.compute(cacheNodes);
compute.broadcast(() -> {
    IgniteCache<K, V1> mappings = getCache("mappings");
    IgniteCache<K, V2> entities = getCache("entities");
    // Iterate over local primary entries.
    entities.localEntries(CachePeekMode.PRIMARY).forEach(entry -> {
        V1 mappingVal = mappings.get(entry.getKey());
        V2 entityVal = entry.getValue();
        V2 newEntityVal = /* do enrichment */;
        // It would be better to create a batch and then call putAll(...);
        // using a simple put call for simplicity.
        entities.put(entry.getKey(), newEntityVal);
    });
});
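The batching mentioned in the comments can be sketched independently of a running Ignite cluster. Here a plain HashMap stands in for the IgniteCache, and BATCH_SIZE is an arbitrary illustrative value; the point is simply to trade many put calls for a few putAll calls:

```java
import java.util.HashMap;
import java.util.Map;

public class BatchPutDemo {
    static final int BATCH_SIZE = 3; // in practice, hundreds or thousands

    public static void main(String[] args) {
        Map<Integer, String> cache = new HashMap<>(); // stand-in for IgniteCache
        Map<Integer, String> batch = new HashMap<>();
        int flushes = 0;

        for (int key = 0; key < 10; key++) {
            batch.put(key, "enriched-" + key);
            if (batch.size() >= BATCH_SIZE) {
                cache.putAll(batch); // one round-trip instead of BATCH_SIZE
                batch.clear();
                flushes++;
            }
        }
        if (!batch.isEmpty()) { // flush the remainder
            cache.putAll(batch);
            flushes++;
        }
        System.out.println(cache.size() + " entries in " + flushes + " flushes");
    }
}
```

With a real IgniteCache the structure is identical: accumulate enriched values per partition of the local primary iteration, then issue putAll per batch.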
