Spring Cloud Stream access to raw Stream

Here is my use case: users subscribe to my stream over a WebSocket (GraphQL subscription), and I need to return an instance of org.reactivestreams.Publisher (which should be my Kafka topic subscription), filtering messages by user ID.
To illustrate, something like this:
/**
 * I don't know how to get an instance of Publisher<Balance>.
 * It should be a consumer of a Kafka topic.
 */
fun balance(myStream: Publisher<Balance>, userId: String): Publisher<Balance> {
    return myStream.filter { it.userId == userId }
}

Maybe you need to write a Spring Cloud Stream consumer and then publish to the WebSocket programmatically. Something along the lines of:
@Bean
public Consumer<Flux<Balance>> myStream() {
    return flux -> {
        // filter here and then publish to the WebSocket
    };
}
Here is an example of a WebSocket sink implementation that you can potentially use as a guide, but this is not reactive.
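A minimal sketch of that idea, assuming a Balance type with a getUserId() accessor: the Spring Cloud Stream consumer pushes records into a shared Reactor sink, and the GraphQL subscription resolver filters the resulting Flux by user ID. The class and method names here are illustrative, not from the question, and the class would live in a @Configuration class.
import org.springframework.context.annotation.Bean;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;
import java.util.function.Consumer;

public class BalanceStreamConfig {

    // Shared sink that every GraphQL subscriber taps into.
    private final Sinks.Many<Balance> sink =
            Sinks.many().multicast().onBackpressureBuffer();

    // Spring Cloud Stream binds this to the Kafka topic via configuration
    // (e.g. spring.cloud.stream.bindings.myStream-in-0.destination=balances).
    @Bean
    public Consumer<Flux<Balance>> myStream() {
        return flux -> flux.subscribe(sink::tryEmitNext);
    }

    // Resolver for the GraphQL subscription: one filtered view per user.
    public Flux<Balance> balance(String userId) {
        return sink.asFlux().filter(b -> b.getUserId().equals(userId));
    }
}
Since Flux implements org.reactivestreams.Publisher, the balance method satisfies the signature the question asks for.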

Related

Can Reactive Kafka Receiver work with non-reactive Elasticsearch client?

Below is sample code which uses reactor-kafka to read data (with retry logic) from a topic whose records are published via a non-reactive producer. Inside my doOnNext() consumer I use the non-reactive Elasticsearch client, which indexes the record. I have a few questions that I am still unclear about:
1. I know that consumers and producers are independent, decoupled systems, but is it recommended that the producer also be reactive when its consumers are reactive?
2. If I am using something non-reactive, in this case the Elasticsearch client org.elasticsearch.client.RestClient, does the "reactiveness" of the code still work? Whether it does or not, how do I test it? (By "reactiveness" I mean the non-blocking IO part, i.e. if I spawn three reactive consumers and one is latent for some reason, the thread should be unblocked and available to the other reactive consumers.)
3. In general: if I wrap some API with reactive clients, should that API be reactive as well?
public Disposable consumeRecords() {
    long maxAttempts = 3, duration = 10;
    RetryBackoffSpec retrySpec =
            Retry.backoff(maxAttempts, Duration.ofSeconds(duration)).transientErrors(true);
    Consumer<ReceiverRecord<K, V>> doOnNextConsumer = x -> {
        // use the non-reactive Elasticsearch client and index record x
    };
    return KafkaReceiver.create(receiverOptions)
            .receive()
            .doOnNext(record -> {
                try {
                    // calling the non-reactive consumer
                    doOnNextConsumer.accept(record);
                } catch (Exception e) {
                    throw new ReceiverRecordException(record, e);
                }
                record.receiverOffset().acknowledge();
            })
            .doOnError(t -> log.error("Error occurred: ", t))
            .retryWhen(retrySpec)
            .onErrorContinue((e, record) -> {
                ReceiverRecordException receiverRecordException = (ReceiverRecordException) e;
                log.error("Retries exhausted for: " + receiverRecordException);
                receiverRecordException.getRecord().receiverOffset().acknowledge();
            })
            .repeat()
            .subscribe();
}
I got some understanding around it.
The reactive KafkaReceiver internally calls some API; if that API is blocking, then even though the KafkaReceiver is "reactive", the non-blocking IO will not work and the receiver thread will be blocked, because you are calling a blocking / non-reactive API.
You can test this out by creating a simple server that blocks calls for some time (sleeps) and calling that server from this receiver.
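One common way to keep the receiver thread free, sketched under the assumption that Reactor's boundedElastic scheduler is acceptable for your indexing latency (indexRecord and receiverOptions stand in for your own indexing call and receiver configuration):
import reactor.core.Disposable;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import reactor.kafka.receiver.KafkaReceiver;

public Disposable consumeRecordsOffloaded() {
    return KafkaReceiver.create(receiverOptions)
            .receive()
            .flatMap(record ->
                    Mono.fromCallable(() -> {
                        indexRecord(record);   // blocking, non-reactive Elasticsearch call
                        return record;
                    })
                    // run the blocking work on a scheduler meant for it,
                    // keeping the Kafka receive thread free to poll
                    .subscribeOn(Schedulers.boundedElastic())
                    .doOnNext(r -> r.receiverOffset().acknowledge()))
            .subscribe();
}
Note that flatMap runs these calls concurrently and may reorder acknowledgements; use concatMap instead if you need strict ordering.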

How to tell RSocket to read a data stream via a Java 8 Stream backed by a blocking queue

I have the following scenario, whereby my program uses a blocking queue to process messages asynchronously. There are multiple RSocket clients who wish to receive these messages. My design is such that when a message arrives in the blocking queue, the stream bound to the Flux will emit. I have tried to implement this requirement as below, but the client doesn't receive any response, even though I can see the Stream supplier getting triggered correctly.
Can someone please help?
@MessageMapping("addListenerHook")
public Flux<QueryResult> addListenerHook(String clientName) {
    System.out.println("Adding Listener: " + clientName);
    BlockingQueue<QueryResult> listenerQ = new LinkedBlockingQueue<>();
    Datalistener.register(clientName, listenerQ);
    return Flux.fromStream(() -> Stream.generate(() -> streamValue(listenerQ)))
            .map(q -> {
                System.out.println("I got an event: " + q.getResult());
                return q;
            });
}

private QueryResult streamValue(BlockingQueue<QueryResult> inStream) {
    try {
        return inStream.take();
    } catch (Exception e) {
        return null;
    }
}
This is tough to solve simply and cleanly because of the blocking API. I think this is why there aren't simple bridge APIs here to help you implement this. You should come up with a clean solution to turn the BlockingQueue into a Flux first; then the Spring Boot part becomes a non-event.
This is why the correct solution probably involves a custom BlockingQueue implementation like ObservableQueue in https://www.nurkiewicz.com/2015/07/consuming-javautilconcurrentblockingque.html
An alternative approach is in How can I create reactor Flux from a blocking queue?
If you need to retain the LinkedBlockingQueue, a starting solution might be something like the following.
val f = flux<QueryResult> {
    val listenerQ = LinkedBlockingQueue<QueryResult>()
    Datalistener.register(clientName, listenerQ)
    while (true) {
        send(listenerQ.take())
    }
}.subscribeOn(Schedulers.boundedElastic())
With an API like flux you should definitely avoid any side effects before the subscribe, so don't register your listener until inside the body of the builder. You will also need to improve this example to handle cancellation (or however you cancel the listener) and to interrupt the thread doing the take.
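For comparison, a plain-Reactor sketch of the same bridge that also observes cancellation. QueryResult and Datalistener come from the question; the poll timeout and the unregister call are assumptions for illustration:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public Flux<QueryResult> addListenerHook(String clientName) {
    return Flux.<QueryResult>create(sink -> {
        BlockingQueue<QueryResult> listenerQ = new LinkedBlockingQueue<>();
        // Side effect deferred until a subscriber actually arrives.
        Datalistener.register(clientName, listenerQ);
        try {
            // Poll with a timeout so the loop notices cancellation promptly
            // instead of blocking forever in take().
            while (!sink.isCancelled()) {
                QueryResult q = listenerQ.poll(1, TimeUnit.SECONDS);
                if (q != null) {
                    sink.next(q);
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        Datalistener.unregister(clientName, listenerQ); // hypothetical cleanup hook
    }).subscribeOn(Schedulers.boundedElastic());
}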

Kotlin coroutine observer

I am developing a gRPC server with Spring, Kotlin, and coroutines. My RPC service looks like this:
override fun authToken(request: AuthTokenRequest): Flow<AuthTokenResponse> {
    return flow<AuthTokenResponse> {
        while (true) {
            delay(1000)
            emit(AuthTokenResponse.newBuilder().setToken("Hello").build())
        }
    }
}
I want to wait until somebody else changes values in the DB, connects to the server, etc., and in that case emit a new response, while clients still hang on the stream. What concept or design pattern should I use? Thanks for any replies.
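The usual concept here is a hot, shared stream that the DB-changing code publishes into while every open call simply subscribes to it; in Kotlin coroutines the direct analogue is a MutableSharedFlow. A minimal sketch of the idea in Reactor terms (the TokenEvents class and its methods are illustrative assumptions, not an established API):
import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

// Hypothetical shared event hub: whoever changes the DB calls publish(),
// and every open stream observes the new value.
public class TokenEvents {

    private final Sinks.Many<String> tokens =
            Sinks.many().multicast().onBackpressureBuffer();

    // Called from the code path that changes values in the DB.
    public void publish(String token) {
        tokens.tryEmitNext(token);
    }

    // Each client stream subscribes to the same hot source.
    public Flux<String> stream() {
        return tokens.asFlux();
    }
}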

Spring Cloud Contract with Spring AMQP

So I've been trying to use Spring Cloud Contract to test a RabbitListener.
So far I have found that by defining "triggeredBy" in the contract, the generated test will call the method named there, so we need to provide the actual implementation of what that method does in the test base, as sketched below.
Another thing is "outputMessage", where we can verify whether the preceding method call correctly resulted in some message body being sent to a certain exchange.
Source: documentation and sample
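For context, a sketch of how that triggeredBy setup typically looks; the class, method, exchange, and payload names are illustrative assumptions rather than taken from the sample:
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Autowired;

// Hypothetical test base for a contract declaring triggeredBy("sendBookMessage()").
// The generated test calls sendBookMessage(); we supply the implementation.
public abstract class AmqpContractTestBase {

    @Autowired
    protected RabbitTemplate rabbitTemplate;

    // Publishes the input message that the listener under test should receive.
    protected void sendBookMessage() {
        rabbitTemplate.convertAndSend("input.exchange", "book.route",
                "{\"bookData\": \"someData\"}");
    }
}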
My question is: is there any way to produce the input message from the contract, instead of triggering my own custom method?
Perhaps something similar to the Spring Integration or Spring Cloud Stream example in the documentation:
Contract.make {
    name("Book Success")
    label("book_success")
    input {
        messageFrom 'input.exchange.and.maybe.route'
        messageHeaders {
            header('contentType': 'application/json')
            header('otherMessageHeader': '1')
        }
        messageBody([
            bookData: someData
        ])
    }
    outputMessage {
        sentTo 'output.exchange.and.maybe.route'
        headers {
            header('contentType': 'application/json')
            header('otherMessageHeader': '2')
        }
        body([
            bookResult: true
        ])
    }
}
I couldn't find any examples in their sample project that show how to do this.
Having used Spring Cloud Contract to document and test REST API services, I would like, if possible, to stay consistent by defining both the input and the expected output in contract files for event-based services.
Never mind, it's actually already supported.
For some unknown reason, the "Stub Runner Spring AMQP" documentation does not list this scenario like the previous samples.
Here is how I made it work:
Contract.make {
    name("Amqp Contract")
    label("amqp_contract")
    input {
        messageFrom 'my.exchange'
        messageHeaders {
            header('contentType': 'text/plain')
            header('amqp_receivedRoutingKey': 'my.routing.key')
        }
        messageBody(file('request.json'))
    }
    outputMessage {
        sentTo 'your.exchange'
        headers {
            header('contentType': 'text/plain')
            header('amqp_receivedRoutingKey': 'your.routing.key')
        }
        body(file('response.json'))
    }
}
This will create a test that calls your listener with a message based on "my.exchange" and "my.routing.key", triggering the handler method.
It will then capture the message and routing key from your RabbitTemplate call to "your.exchange":
verify(this.rabbitTemplate, atLeastOnce()).send(eq(destination), routingKeyCaptor.capture(),
messageCaptor.capture(), any(CorrelationData.class));
Both the message and the routing key will then be asserted.

How to convert a vert.x ReactiveReadStream<Document> to ReactiveWriteStream<Buffer>

I have a straightforward use case: make a REST call, query Mongo, and then return an arbitrarily large stream of data back to the client, all with reactive-streams-style back pressure management.
This was quite easy to achieve using Spring WebFlux and Reactor. I am now trying to achieve the same goal using Vert.x, as a comparison of ease of implementation.
Having found the Vert.x Mongo client to be lacking any support for managing back pressure, I am now attempting to use the WebFlux Mongo client and then pump the data back through the Vert.x HttpServerResponse, as shown in the following code:
public class MyMongoVerticle extends AbstractVerticle {

    ReactiveMongoOperations operations;

    public void start() throws Exception {
        final Router router = Router.router(vertx);
        router.route().handler(BodyHandler.create());
        router.get("/myUrl").handler(ctx -> {
            // WebFlux mongo operations returns a Reactive Streams compatible entity
            Flux<Document> mongoStream = operations.findAll(Document.class, "myCollection");
            // rrs is a Reactive Streams subscriber
            ReactiveReadStream<Document> rrs = ReactiveReadStream.readStream();
            mongoStream.subscribe(rrs);
            // Pump pumps the rrs (ReactiveReadStream) to the HttpServerResponse (ReactiveWriteStream)
            Pump pump = Pump.pump(rrs, ctx.response());
            pump.start();
        });
        vertx.createHttpServer().requestHandler(router::accept).listen(8777);
    }
}
The issue I have encountered is that HttpServerResponse implements ReactiveWriteStream<Buffer>, so it expects a Buffer rather than a stream of Documents. The result is a ClassCastException.
My question is: how can I convert this stream of Documents into a ReactiveWriteStream<Buffer>? There may be a better way to do this, so I'm open to other suggestions on how to achieve it.
Pump won't work for you, as it doesn't currently support transformations. You'll have to implement the pump yourself. Luckily, this shouldn't be too hard:
Flux<Document> mongoStream = operations.findAll(Document.class, "myCollection");
ReactiveReadStream<Document> rrs = ReactiveReadStream.readStream();
mongoStream.subscribe(rrs);
HttpServerResponse outStream = ctx.response();
// Changes start here
rrs.handler(d -> {
    outStream.write(d.toJson());
    // Apply back pressure: pause the source while the write queue is full,
    // and resume once it drains. (Writing first avoids dropping documents.)
    if (outStream.writeQueueFull()) {
        rrs.pause();
        outStream.drainHandler(v -> rrs.resume());
    }
}).endHandler(h -> outStream.end());
Note that I wouldn't expect this to be more efficient than the "native" WebFlux implementation.
Also, the JSON in this example will be mangled, since I don't wrap it in a proper JSON array.
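A sketch of the array wrapping the answer leaves out, assuming the same rrs and outStream as above: open the array before the first element, prefix a comma before every subsequent element, and close it on end.
import java.util.concurrent.atomic.AtomicBoolean;

AtomicBoolean first = new AtomicBoolean(true);
outStream.write("[");                      // open the JSON array
rrs.handler(d -> {
    String prefix = first.getAndSet(false) ? "" : ",";
    outStream.write(prefix + d.toJson());  // comma-separate elements
    if (outStream.writeQueueFull()) {      // same back pressure handling as above
        rrs.pause();
        outStream.drainHandler(v -> rrs.resume());
    }
}).endHandler(h -> outStream.end("]"));    // close the array and the response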
