Kotlin coroutine observer - spring-boot

I am developing a gRPC server with Spring, Kotlin, and coroutines. My RPC service looks like this:
override fun authToken(request: AuthTokenRequest): Flow<AuthTokenResponse> {
    return flow<AuthTokenResponse> {
        while (true) {
            delay(1000)
            emit(AuthTokenResponse.newBuilder().setToken("Hello").build())
        }
    }
}
I want to wait until somebody else changes values in the DB, connects to the server, etc., and in that case emit a new response while clients stay subscribed to the stream. What concept or design pattern should I use? Thanks for any replies.
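One common pattern for this (a minimal sketch, not from the original post) is to back the stream with a hot MutableSharedFlow that other components emit into whenever the DB changes; every connected client collects the same flow and suspends until a new response is published. TokenEvents and its publish hook are hypothetical names:

import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.MutableSharedFlow
import kotlinx.coroutines.flow.asSharedFlow

// Hypothetical event bus; wire your DB-change listener to call publish().
object TokenEvents {
    // replay = 0: subscribers only see events emitted after they connect
    private val events = MutableSharedFlow<AuthTokenResponse>(extraBufferCapacity = 64)

    suspend fun publish(response: AuthTokenResponse) = events.emit(response)

    fun stream(): Flow<AuthTokenResponse> = events.asSharedFlow()
}

// In the gRPC service implementation:
override fun authToken(request: AuthTokenRequest): Flow<AuthTokenResponse> =
    TokenEvents.stream() // each client suspends here until something is published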

Related

Can Reactive Kafka Receiver work with non-reactive Elasticsearch client?

Below is sample code that uses reactor-kafka to read data from a topic (with retry logic) whose records are published by a non-reactive producer. Inside my doOnNext() consumer I am using the non-reactive Elasticsearch client, which indexes the record. So I have a few questions that I am still unclear about:
I know that consumers and producers are independent, decoupled systems, but is it recommended that the producer also be reactive when its consumers are reactive?
If I am using something non-reactive, in this case the Elasticsearch client org.elasticsearch.client.RestClient, does the "reactiveness" of the code still work? Either way, how do I test it? (By "reactiveness" I mean the non-blocking IO part: if I spawn three reactive consumers and one is latent for some reason, the thread should be released and used for another reactive consumer.)
In general: if I wrap some API with reactive clients, should that API be reactive as well?
public Disposable consumeRecords() {
    long maxAttempts = 3, duration = 10;
    RetryBackoffSpec retrySpec = Retry.backoff(maxAttempts, Duration.ofSeconds(duration)).transientErrors(true);
    Consumer<ReceiverRecord<K, V>> doOnNextConsumer = x -> {
        // use the non-reactive Elasticsearch client and index record x
    };
    return KafkaReceiver.create(receiverOptions)
            .receive()
            .doOnNext(record -> {
                try {
                    // calling the non-reactive consumer
                    doOnNextConsumer.accept(record);
                } catch (Exception e) {
                    throw new ReceiverRecordException(record, e);
                }
                record.receiverOffset().acknowledge();
            })
            .doOnError(t -> log.error("Error occurred: ", t))
            .retryWhen(retrySpec)
            .onErrorContinue((e, record) -> {
                ReceiverRecordException receiverRecordException = (ReceiverRecordException) e;
                log.error("Retries exhausted for: " + receiverRecordException);
                receiverRecordException.getRecord().receiverOffset().acknowledge();
            })
            .repeat()
            .subscribe();
}
I have since gained some understanding of this:
The reactive KafkaReceiver will internally call some API; if that API is blocking, then even though KafkaReceiver is "reactive", the non-blocking IO will not work and the receiver thread will be blocked, because you are calling a blocking / non-reactive API.
You can test this by creating a simple server that blocks calls for some time (e.g. sleeps) and calling that server from this receiver.
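One way to keep the receive thread unblocked (a sketch in Kotlin, under the assumption that the same non-reactive client is used; indexRecord is a hypothetical stand-in for the blocking Elasticsearch call) is to offload each blocking call to Reactor's boundedElastic scheduler:

import reactor.core.publisher.Mono
import reactor.core.scheduler.Schedulers
import reactor.kafka.receiver.KafkaReceiver
import reactor.kafka.receiver.ReceiverRecord

// Hypothetical blocking call into the non-reactive Elasticsearch client.
fun <K, V> indexRecord(record: ReceiverRecord<K, V>) { /* blocking IO */ }

fun <K, V> consumeRecords(receiver: KafkaReceiver<K, V>) =
    receiver.receive()
        .flatMap { record ->
            Mono.fromCallable {
                indexRecord(record)                     // blocking IO runs here
                record.receiverOffset().acknowledge()
            }.subscribeOn(Schedulers.boundedElastic())  // off the Kafka receive thread
        }
        .subscribe()

With this shape, a latent Elasticsearch call only ties up a boundedElastic worker, not the thread delivering Kafka records.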

Can we use server-sent events in NestJS without using an interval?

I'm creating a few microservices using NestJS.
For instance, I have x, y & z services all interconnected by gRPC, but I want service x to send updates to a webapp on a particular entity change, so I have considered server-sent events [open to any other better solution].
Following the NestJS documentation, they have a function running at an interval for the SSE route, which seems resource-intensive. Is there a way to actually send events only when there is an update?
Let's say I have another API call in the same service that is triggered by a button click on another webapp; how do I make the event fire only when the button is clicked, rather than continuously sending events? Also, an idiomatic way to achieve this without getting hacky would be appreciated; I want hacks to be a last resort.
[BONUS Question]
I also considered MQTT to send events, but I get the feeling that it isn't possible for a single service to use both MQTT and gRPC. I'm also skeptical of MQTT because of its latency and how it would affect internal message passing. If I could limit it to external clients it would be great (i.e., service x uses gRPC for internal connections and MQTT for the webapp; just one route needs to be exposed via MQTT).
(PS: I'm new to microservices, so please be comprehensive in your solutions :p)
Thanks in advance for reading till the end!
You can. The important thing is that in NestJS, SSE is implemented with Observables, so as long as you have an observable you can add to, you can use it to send back SSE events. The easiest way to work with this is with Subjects. I used to have an example of this somewhere, but generally it would look something like this:
import { Controller, Injectable, Sse } from '@nestjs/common';
import { Subject } from 'rxjs';

@Controller()
export class SseController {
  constructor(private readonly sseService: SseService) {}

  @Sse()
  doTheSse() {
    return this.sseService.sendEvents();
  }
}

@Injectable()
export class SseService {
  private events = new Subject();

  addEvent(event) {
    this.events.next(event);
  }

  sendEvents() {
    return this.events.asObservable();
  }
}

@Injectable()
export class ButtonTriggeredService {
  constructor(private readonly sseService: SseService) {}

  buttonClickedOrSomething() {
    this.sseService.addEvent(buttonClickedEvent);
  }
}
Pardon the pseudo-code nature of the above, but in general it does show how you can use Subjects to create observables for SSE events. So long as the @Sse() endpoint returns an observable with the proper shape, you're golden.
There is a better way to handle events with SSE in NestJS.
Please see this repo with a code example:
https://github.com/ningacoding/nest-sse-bug/tree/main/src
where basically you have a service:
import { Injectable } from '@nestjs/common';
import { fromEvent } from 'rxjs';
import { EventEmitter } from 'events';

@Injectable()
export class EventsService {
  private readonly emitter = new EventEmitter();

  subscribe(channel: string) {
    return fromEvent(this.emitter, channel);
  }

  emit(channel: string, data?: object) {
    this.emitter.emit(channel, { data });
  }
}
Obviously, channel can be any string; as a recommendation, use a path style.
For example, with "events/for/<user_id>", users subscribed to that channel will receive only the events for that channel, and only when they are fired ;) - fully compatible with @UseGuards, etc. :)
Additional note: Don't inject any service inside EventsService, because of a known bug.
@Sse('sse-endpoint')
sse(): Observable<any> {
  // data to stream
  const arr = ['d1', 'd2', 'd3'];
  return new Observable((subscriber) => {
    while (arr.length) {
      subscriber.next(arr.pop()); // emit one chunk per next() call
    }
    subscriber.complete(); // complete the subscription once the data is drained
  });
}
Yes, this is possible: instead of using an interval, we can use an event emitter.
Whenever the event is emitted, we can send the response back to the client.

How to tell RSocket to read a data stream via a Java 8 Stream backed by a blocking queue

I have the following scenario whereby my program uses a blocking queue to process messages asynchronously. There are multiple RSocket clients who wish to receive these messages. My design is such that when a message arrives in the blocking queue, the stream that binds to the Flux will emit. I have tried to implement this requirement as below, but the client doesn't receive any response. However, I can see the Stream supplier getting triggered correctly.
Can someone please help?
@MessageMapping("addListenerHook")
public Flux<QueryResult> addListenerHook(String clientName) {
    System.out.println("Adding Listener: " + clientName);
    BlockingQueue<QueryResult> listenerQ = new LinkedBlockingQueue<>();
    Datalistener.register(clientName, listenerQ);
    return Flux.fromStream(() -> Stream.generate(() -> streamValue(listenerQ)))
            .map(q -> {
                System.out.println("I got an event : " + q.getResult());
                return q;
            });
}

private QueryResult streamValue(BlockingQueue<QueryResult> inStream) {
    try {
        return inStream.take();
    } catch (Exception e) {
        return null;
    }
}
This is tough to solve simply and cleanly because of the blocking API. I think this is why there aren't simple bridge APIs here to help you implement this. You should come up with a clean solution to turn the BlockingQueue into a Flux first. Then the Spring Boot part becomes a non-event.
This is why the correct solution probably involves a custom BlockingQueue implementation like the ObservableQueue in https://www.nurkiewicz.com/2015/07/consuming-javautilconcurrentblockingque.html
An alternative approach is in How can I create reactor Flux from a blocking queue?
If you need to retain the LinkedBlockingQueue, a starting solution might be something like the following.
val f = flux<QueryResult> {
    val listenerQ = LinkedBlockingQueue<QueryResult>()
    Datalistener.register(clientName, listenerQ)
    while (true) {
        send(listenerQ.take())
    }
}.subscribeOn(Schedulers.elastic())
With an API like flux you should definitely avoid any side effects before the subscribe, so don't register your listener until inside the body of the method. But you will need to improve this example to handle cancellation, or however you cancel the listener and interrupt the thread doing the take; one possible shape is sketched below.
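One way to handle that cancellation (a sketch under assumptions: Datalistener.unregister is a hypothetical cleanup hook, and QueryResult / clientName come from the question) is to wrap the blocking take() in runInterruptible, so cancelling the subscription interrupts the waiting thread, and to unregister in a finally block:

import java.util.concurrent.LinkedBlockingQueue
import kotlinx.coroutines.reactor.flux
import kotlinx.coroutines.runInterruptible
import reactor.core.scheduler.Schedulers

val f = flux<QueryResult> {
    val listenerQ = LinkedBlockingQueue<QueryResult>()
    Datalistener.register(clientName, listenerQ)
    try {
        while (true) {
            // cancelling the subscription interrupts the blocked take()
            send(runInterruptible { listenerQ.take() })
        }
    } finally {
        Datalistener.unregister(clientName) // hypothetical cleanup hook
    }
}.subscribeOn(Schedulers.boundedElastic())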

Mono returned by ServerRequest.bodyToMono() method not extracting the body if I return ServerResponse immediately

I am using web reactive in Spring WebFlux. I have implemented a handler function for a POST request. I want the server to return immediately, so I have implemented the handler as below:
public class Sample implements HandlerFunction<ServerResponse> {
    public Mono<ServerResponse> handle(ServerRequest request) {
        Mono<String> bodyMono = request.bodyToMono(String.class);
        bodyMono.map(str -> {
            System.out.println("body got is " + str);
            return str;
        }).subscribe();
        return ServerResponse.status(HttpStatus.CREATED).build();
    }
}
But the print statement inside the map function is not getting called, which means the body is not being extracted.
If I do not return the response immediately and instead use
return bodyMono.then(ServerResponse.status(HttpStatus.CREATED).build())
then the map function is called.
So, how can I do processing on my request body in the background?
Please help.
EDIT
I tried using flux.share() like below:
Flux<String> bodyFlux = request.bodyToMono(String.class).flux().share();
Flux<String> processFlux = bodyFlux.map(str -> {
    System.out.println("body got is");
    try {
        Thread.sleep(1000);
    } catch (Exception ex) {
    }
    return str;
});
processFlux.subscribeOn(Schedulers.elastic()).subscribe();
return bodyFlux.then(ServerResponse.status(HttpStatus.CREATED).build());
In the above code, sometimes the map function is getting called and sometimes not.
As you've found, you can't just arbitrarily subscribe() to the Mono returned by bodyToMono(), since in that case the body simply doesn't get passed into the Mono for processing. (You can verify this by putting a single() call in that Mono; it'll throw an exception since no element will be emitted.)
So, how can I do processing on my request body in the background?
If you really still want to just use reactor to do a long task in the background while returning immediately, you can do something like:
return request.bodyToMono(String.class).doOnNext(str -> {
    Mono.just(str).publishOn(Schedulers.elastic()).subscribe(s -> {
        System.out.println("proc start!");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("proc end!");
    });
}).then(ServerResponse.status(HttpStatus.CREATED).build());
This approach immediately publishes the emitted element to a new Mono, set to publish on an elastic scheduler, which is then subscribed in the background. However, it's kind of ugly, and it's not really what reactor is designed to do. You may be misunderstanding the idea behind reactor / reactive programming here:
It's not written with the idea of "returning a quick result and then doing stuff in the background" - that's generally the purpose of a work queue, often implemented with something like RabbitMQ or Kafka. Its raison d'être is instead to be non-blocking, so a single thread is never idly blocked waiting for something else to complete.
The map() method isn't designed for side effects; it's designed to transform each object into another. For side effects, you want doOnNext() instead.
Reactor uses a single thread by default, so the "additional processing" in your map() method would still block that thread.
If your application is for anything more than quick demo purposes, and/or you need to make heavy use of this pattern, then I'd seriously consider setting up a proper work queue instead.
This is not possible.
Web servers (including Reactor Netty, Tomcat, etc.) clean up and recycle resources when request processing is done. This means that when your controller handler is done, the HTTP resources, the request itself, reusable buffers, etc. are recycled or closed. At that point, you cannot read from the request body anymore.
In your case, you need to read and buffer the whole request body first, then return a response and kick off a task for processing that request in a separate execution.
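A minimal sketch of that buffering approach (Kotlin with Reactor; processInBackground is a hypothetical stand-in for the long-running work):

import org.springframework.http.HttpStatus
import org.springframework.web.reactive.function.server.ServerRequest
import org.springframework.web.reactive.function.server.ServerResponse
import reactor.core.publisher.Mono
import reactor.core.scheduler.Schedulers

// Hypothetical stand-in for the real processing work.
fun processInBackground(body: String) { Thread.sleep(1000) }

fun handle(request: ServerRequest): Mono<ServerResponse> =
    request.bodyToMono(String::class.java)          // buffer the whole body first
        .doOnNext { body ->
            // kick off the task on a separate scheduler, detached from the request
            Mono.fromRunnable<Unit> { processInBackground(body) }
                .subscribeOn(Schedulers.boundedElastic())
                .subscribe()
        }
        .then(ServerResponse.status(HttpStatus.CREATED).build())

The response is only sent once the body has been fully read, but the request thread is not tied up by the processing itself.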

How to convert a vert.x ReactiveReadStream<Document> to ReactiveWriteStream<Buffer>

I have a straightforward use case: make a REST call, query Mongo, and then return an arbitrarily large stream of data back to the client, all with reactive-streams-style back-pressure management.
This was quite easy to achieve using Spring WebFlux and Reactor. I am now trying to achieve the same goal using vert.x, as a comparison of ease of implementation.
Having found the vert.x Mongo client to lack any support for managing back pressure, I am now attempting to use the WebFlux Mongo client and then pump the data back through the vert.x HttpResponse, as shown in the following code:
public class MyMongoVerticle extends AbstractVerticle {

    ReactiveMongoOperations operations;

    public void start() throws Exception {
        final Router router = Router.router(vertx);
        router.route().handler(BodyHandler.create());
        router.get("/myUrl").handler(ctx -> {
            // WebFlux mongo operations returns a Reactive Streams compatible entity
            Flux<Document> mongoStream = operations.findAll(Document.class, "myCollection");
            // rrs is a Reactive Streams subscriber
            ReactiveReadStream rrs = ReactiveReadStream.readStream();
            mongoStream.subscribe(rrs);
            // Pump pumps the rrs (ReactiveReadStream) to the HttpServerResponse (ReactiveWriteStream)
            Pump pump = Pump.pump(rrs, ctx.response());
            pump.start();
        });
        vertx.createHttpServer().requestHandler(router::accept).listen(8777);
    }
}
The issue I have encountered is that HttpServerResponse implements ReactiveWriteStream<Buffer>, so it expects a Buffer rather than a stream of Documents. The result is a ClassCastException.
The question I have is: how can I convert this stream of Documents into a ReactiveWriteStream<Buffer>? There may be another, better way to do this, so I'm open to other suggestions on how to achieve this.
Pump won't work for you, as it doesn't currently support transformations. You'll have to implement the pump yourself. Luckily, this shouldn't be too hard:
Flux<Document> mongoStream = operations.findAll(Document.class, "myCollection");
ReactiveReadStream<Document> rrs = ReactiveReadStream.readStream();
mongoStream.subscribe(rrs);
HttpServerResponse outStream = ctx.response();
// Changes start here
rrs.handler(d -> {
    outStream.write(d.toJson());
    // apply back pressure: pause the source while the write queue is full
    if (outStream.writeQueueFull()) {
        rrs.pause();
        outStream.drainHandler(v -> rrs.resume());
    }
}).endHandler(h -> {
    outStream.end();
});
Note that I wouldn't expect this to be more efficient than a "native" WebFlux implementation.
Also, the JSON in this example will be mangled, as I don't wrap it in a proper JSON array; a sketch of doing that follows.
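A sketch of wrapping the output in a proper JSON array (Kotlin flavour of the same handler; the names mirror the Java snippet above and are otherwise assumptions):

// Emit "[", then comma-separated documents, then "]".
var first = true
outStream.write("[")
rrs.handler { d ->
    val prefix = if (first) "" else ","
    first = false
    outStream.write(prefix + d.toJson())
    if (outStream.writeQueueFull()) {
        rrs.pause()
        outStream.drainHandler { rrs.resume() }
    }
}.endHandler {
    outStream.write("]")
    outStream.end()
}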
