I have the following code.
@Incoming("my-topic")
void process(String someEvent) {
    String someResponse = assuminglyRealFastReactiveClientCall();
}
The above code throws a blocking-thread exception, which is fixed by adding @Blocking:
@Incoming("my-topic")
@Blocking
void process(String someEvent) {
    String someResponse = assuminglyRealFastReactiveClientCall();
}
If I switch String assuminglyRealFastReactiveClientCall() to Uni<String> assuminglyRealFastReactiveClientCall(), I'm guessing the consumer method has to switch to a manual acknowledgment strategy and the message needs to be acked/nacked based on the result of the subscription, like so?
@Incoming("my-topic")
void process(Message<String> someEvent) {
    assuminglyRealFastReactiveClientCall()
            .subscribe().with(s -> {
                System.out.println("Response: " + s);
                someEvent.ack();
            }, t -> someEvent.nack(t));
}
@Incoming("my-topic")
Uni<Void> process(Message<String> someEvent) {
    return assuminglyRealFastReactiveClientCall()
            .invoke(this::handleResponse)
            .chain(response -> Uni.createFrom().completionStage(someEvent.ack()));
}
private void handleResponse(String response) {
    // Do something with the response
}
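As a simpler alternative, a minimal sketch (assuming the reactive call never blocks the caller thread): when the method consumes the plain payload and returns a Uni, SmallRye Reactive Messaging acknowledges the message once the returned Uni completes, so no manual ack/nack is needed.

@Incoming("my-topic")
Uni<Void> process(String someEvent) {
    // The message is acked automatically when this Uni completes,
    // and nacked if it fails.
    return assuminglyRealFastReactiveClientCall()
            .invoke(this::handleResponse)
            .replaceWithVoid();
}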
The Consuming Messages section of the SmallRye Reactive Messaging documentation has many more examples.
I have two endpoints:
@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("/waitForEvent")
public Uni<Object> waitForEvent() {
    return Uni.createFrom().emitter(em -> {
        // wait for event from eventBus
        // eventBus.consumer("test", msg -> {
        //     System.out.printf("receive event: %s\n", msg.body());
        //     em.complete(msg);
        // });
    }).ifNoItem().after(Duration.ofSeconds(5)).failWith(new RuntimeException("timeout"));
}

@GET
@Path("/send")
public void test() {
    System.out.println("send event");
    eventBus.send("test", "send test event");
}
The waitForEvent() method should only complete once it receives the event from the event bus. How can I achieve this using Vert.x and Mutiny?
In general, we avoid that kind of pattern and use the request/reply mechanism from the event bus:
@GET
@Path("/send")
public Uni<String> test() {
    // 'name' is whatever payload you want to send with the request
    return bus.<String>request("test", name)
            .onItem().transform(Message::body)
            .ifNoItem().after(Duration.ofSeconds(5)).failWith(new RuntimeException("timeout"));
}
When implementing it with two endpoints (as in the question), it can become a bit more complicated: if you have multiple calls to the /waitForEvent endpoint, you need to be sure that every "consumer" gets the message.
It is still possible, but you will need something like this:
@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("/waitForEvent")
public Uni<String> waitForEvent() {
    return Uni.createFrom().<String>emitter(emitter -> {
        MessageConsumer<String> consumer = bus.consumer("test");
        consumer.handler(m -> {
            emitter.complete(m.body());
            consumer.unregisterAndForget();
        });
    })
    .ifNoItem().after(Duration.ofSeconds(5)).failWith(new RuntimeException("timeout"));
}

@GET
@Path("/send")
public void test() {
    bus.publish("test", "send test event");
}
Be sure to use the io.vertx.mutiny.core.eventbus.EventBus variant of the event bus.
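For example, a minimal injection sketch (the class name is illustrative):

import io.vertx.mutiny.core.eventbus.EventBus;
import jakarta.enterprise.context.ApplicationScoped; // javax.* on older Quarkus versions
import jakarta.inject.Inject;

@ApplicationScoped
public class BusHolder {

    // Inject the Mutiny variant, not io.vertx.core.eventbus.EventBus.
    @Inject
    EventBus bus;
}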
I would like to use the java.util.function.Function approach to reply to a request sent via RabbitTemplate.convertSendAndReceive. It works fine with @RabbitListener, but I cannot get it working with the functional approach.
Client (working)
class Client(private val template: RabbitTemplate) {
    fun send() = template.convertSendAndReceive(
        "rpc-exchange",
        "rpc-routing-key",
        "payload message"
    )
}
Server (approach 1, working)
class Server {
    @RabbitListener(queues = ["rpc-queue"])
    fun receiveRequest(message: String) = "Response Message"

    @Bean
    fun queue(): Queue {
        return Queue("rpc-queue")
    }

    @Bean
    fun exchange(): DirectExchange {
        return DirectExchange("rpc-exchange")
    }

    @Bean
    fun binding(exchange: DirectExchange, queue: Queue): Binding {
        return BindingBuilder.bind(queue).to(exchange).with("rpc-routing-key")
    }
}
Server (approach 2, not working) --> goal
class Server {
    @Bean
    fun receiveRequest(): Function<String, String> {
        return Function { value: String ->
            "Response Message"
        }
    }
}
With the config (approach 2)
spring.cloud.function.definition: receiveRequest
spring.cloud.stream.binding.receiveRequest-in-0.destination: rpc-exchange
spring.cloud.stream.binding.receiveRequest-in-0.group: rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey: rpc-routing-key
With approach 2 the server receives the message; unfortunately, the response is lost. Does anybody know how to use the RPC pattern with the functional approach? I don't want to use @RabbitListener.
See documentation/tutorial.
Spring Cloud Stream is not really designed for RPC on the server side, so it won't handle this automatically like @RabbitListener does.
You can, however, achieve it by adding an output binding that routes the reply to the default exchange, using the replyTo header as the routing key:
spring.cloud.function.definition=receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination=rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group=rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey=rpc-routing-key
spring.cloud.stream.bindings.receiveRequest-out-0.destination=
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression=headers['amqp_replyTo']
#logging.level.org.springframework.amqp=debug
@SpringBootApplication
public class So66586230Application {

    public static void main(String[] args) {
        SpringApplication.run(So66586230Application.class, args);
    }

    @Bean
    Function<String, String> receiveRequest() {
        return str -> {
            return str.toUpperCase();
        };
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template) {
        return args -> {
            System.out.println(new String((byte[]) template.convertSendAndReceive(
                    "rpc-exchange",
                    "rpc-routing-key",
                    "payload message")));
        };
    }
}
PAYLOAD MESSAGE
Note that the reply will come as a byte[]; you can use a custom message converter on the template to convert to String.
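If you just want a String without touching the template configuration, a small hypothetical helper like this is enough (class and method names are illustrative):

import java.nio.charset.StandardCharsets;

final class ReplyUtils {

    private ReplyUtils() {
    }

    // Normalizes the RPC reply: Spring Cloud Stream replies typically arrive as byte[].
    static String asString(Object reply) {
        if (reply instanceof byte[]) {
            return new String((byte[]) reply, StandardCharsets.UTF_8);
        }
        return (String) reply;
    }
}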
EDIT
In reply to the third comment below.
The RabbitTemplate uses direct reply-to by default, so the reply address is not a real queue; it is a pseudo-queue created by the broker and associated with a consumer in the template.
You can also configure the template to use temporary reply queues, but those are also routed to via the default exchange ("").
You can, however, configure an external reply container, with the template as the listener.
You can then route back using whatever exchange and routing key you want.
Putting it all together:
spring.cloud.function.definition=receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination=rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group=rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey=rpc-routing-key
spring.cloud.stream.bindings.receiveRequest-out-0.destination=reply-exchange
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression='reply-routing-key'
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.declare-exchange=false
spring.rabbitmq.template.reply-timeout=10000
#logging.level.org.springframework.amqp=debug
@SpringBootApplication
public class So66586230Application {

    public static void main(String[] args) {
        SpringApplication.run(So66586230Application.class, args);
    }

    @Bean
    Function<String, String> receiveRequest() {
        return str -> {
            return str.toUpperCase();
        };
    }

    @Bean
    SimpleMessageListenerContainer replyContainer(SimpleRabbitListenerContainerFactory factory,
            RabbitTemplate template) {
        template.setReplyAddress("reply-queue");
        SimpleMessageListenerContainer container = factory.createListenerContainer();
        container.setQueueNames("reply-queue");
        container.setMessageListener(template);
        return container;
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template, SimpleMessageListenerContainer replyContainer) {
        return args -> {
            System.out.println(new String((byte[]) template.convertSendAndReceive(
                    "rpc-exchange",
                    "rpc-routing-key",
                    "payload message")));
        };
    }
}
IMPORTANT: if you have multiple instances of the client side, each needs its own reply queue.
In that case, the routing key must be the queue name and you should revert to the previous example to set the routing key expression (to get the queue name from the header).
@Bean
public WebSocketHandler webSocketHandler() {
    TopicProcessor<String> messageProcessor = this.messageProcessor();
    Flux<String> messages = messageProcessor.replay(0).autoConnect();
    Flux<String> outputMessages = Flux.from(messages);
    return (session) -> {
        System.out.println(session);
        session.receive().map(WebSocketMessage::getPayloadAsText).subscribe(messageProcessor::onNext, (e) -> {
            e.printStackTrace();
        });
        return session.getHandshakeInfo().getPrincipal().flatMap((p) -> {
            session.getAttributes().put("username", p.getName());
            return session.send(outputMessages.filter((payload) -> this.filterUser(session, payload))
                    .map((payload) -> this.generateMessage(session, payload)));
        }).switchIfEmpty(Mono.defer(() -> {
            return Mono.error(new BadCredentialsException("Bad Credentials."));
        })).then();
    };
}
I am trying to build an online chat system with WebFlux and found an example on GitHub. As a beginner in Reactor development, I am confused about how this code sends a message to a single user.
This is how I would think of it in Spring MVC:
put all the active WebSocket sessions into a map;
for every message, check whether the username field in the message equals the username stored in a session, and if it does, use that session to send the message.
private static Map<String, WebSocket> clients = new ConcurrentHashMap<>();

public void sendMessageTo(String message, String toUserName) throws IOException {
    for (WebSocket item : clients.values()) {
        if (item.username.equals(toUserName)) {
            item.session.sendText(message);
            break;
        }
    }
}
Can you explain how the WebFlux code above works?
I know all the messages are published through outputMessages and subscribed to.
When a new message is emitted, how does it find the correct session?
My guess is that WebSocketHandler is an interface containing only one abstract method, handle(WebSocketSession),
which in turn, I believe, makes it a functional interface that can be used as a lambda:
(session) -> { ... }
So when a session is established with a client and the client sends a WebSocket event, the server looks up the WebSocketHandler and invokes it with the session of the client that sent the event.
If you find this confusing, you can just implement the interface instead.
class ExampleHandler implements WebSocketHandler {
    @Override
    public Mono<Void> handle(WebSocketSession session) {
        Mono<Void> input = session.receive()
                .doOnNext(message -> {
                    // ...
                })
                .concatMap(message -> {
                    // ...
                })
                .then();

        Flux<String> source = ... ;
        Mono<Void> output = session.send(source.map(session::textMessage));

        return Mono.zip(input, output).then();
    }
}

@Bean
public WebSocketHandler webSocketHandler() {
    return new ExampleHandler();
}
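As for the original question of how a message reaches a single user: every session subscribes to the same shared stream and filters out messages that are not addressed to the username stored in its attributes, which is what the filterUser call in the question's code is doing. Below is a minimal sketch of that idea, not the author's exact code; the "username:text" message format and the class name are assumptions for illustration.

import org.springframework.web.reactive.socket.WebSocketHandler;
import org.springframework.web.reactive.socket.WebSocketSession;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.publisher.Sinks;

class DirectMessageHandler implements WebSocketHandler {

    // Shared sink: every connected session subscribes to the same stream of messages.
    private final Sinks.Many<String> sink = Sinks.many().multicast().onBackpressureBuffer();

    @Override
    public Mono<Void> handle(WebSocketSession session) {
        String username = (String) session.getAttributes().getOrDefault("username", "anonymous");

        // Push every incoming frame into the shared sink.
        Mono<Void> input = session.receive()
                .map(msg -> msg.getPayloadAsText())
                .doOnNext(sink::tryEmitNext)
                .then();

        // This session only forwards messages addressed to its own user.
        Flux<String> addressedToMe = sink.asFlux()
                .filter(payload -> payload.startsWith(username + ":"));

        Mono<Void> output = session.send(addressedToMe.map(session::textMessage));

        return Mono.zip(input, output).then();
    }
}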
I need to access a websocket service which closes an open websocket connection after 24h. How do I implement the reconnect with Spring Boot 2 and WebFlux?
This is what I have so far (taken from https://github.com/artembilan/webflux-websocket-demo):
@GetMapping(path = "/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<String> getStreaming() throws URISyntaxException {
    ReactorNettyWebSocketClient client = new ReactorNettyWebSocketClient();
    EmitterProcessor<String> output = EmitterProcessor.create();
    Mono<Void> sessionMono = client.execute(new URI("ws://localhost:8080/echo"),
            session -> session.receive()
                    .timeout(Duration.ofSeconds(3))
                    .map(WebSocketMessage::getPayloadAsText)
                    .subscribeWith(output)
                    .then());
    return output.doOnSubscribe(s -> sessionMono.subscribe());
}
As soon as the connection is lost (no input for 3 seconds), a TimeoutException is thrown. But how can I reconnect the socket?
There is no out-of-the-box solution; a reconnection mechanism is not part of JSR 356 (the Java API for WebSocket). But you can implement it yourself. A simple example with Spring events:
Step 1 - Create an event class.
public class ReconnectEvent extends ApplicationEvent {

    private String url;

    public ReconnectEvent(String url) {
        super(url);
        this.url = url;
    }

    public String getUrl() {
        return url;
    }
}
Step 2 - Provide a method for the WebSocket connection. An example:
...
@Autowired
private ApplicationEventPublisher publisher;
...

public void connect(String url) {
    ReactorNettyWebSocketClient client = new ReactorNettyWebSocketClient();
    EmitterProcessor<String> output = EmitterProcessor.create();
    Mono<Void> sessionMono = client.execute(URI.create(url),
            session -> session.receive()
                    .map(WebSocketMessage::getPayloadAsText)
                    .log()
                    .subscribeWith(output)
                    .doOnTerminate(() -> publisher.publishEvent(new ReconnectEvent(url)))
                    .then());

    output
            .doOnSubscribe(s -> sessionMono.subscribe())
            .subscribe();
}
Check the doOnTerminate() method: when the Flux terminates, either by completing successfully or with an error, it publishes a ReconnectEvent. If necessary, you can emit the reconnection event from other Flux callbacks instead (for example, only in doOnError()).
Step 3 - Provide a listener that connects again to the given URL when a reconnection event occurs.
@EventListener(ReconnectEvent.class)
public void onApplicationEvent(ReconnectEvent event) {
    connect(event.getUrl());
}
I did something similar using Reactor's UnicastProcessor.
...
public abstract class AbstractWsReconnectClient {

    private Logger ...

    protected UnicastProcessor<AbstractWsReconnectClient> reconnectProcessor = UnicastProcessor.create();

    protected AbstractWsReconnectClient(Duration reconnectDuration) {
        reconnect(reconnectDuration);
    }

    public abstract Mono<Void> connect();

    private void reconnect(Duration duration) {
        reconnectProcessor.publish()
                .autoConnect()
                .delayElements(duration)
                .flatMap(AbstractWsReconnectClient::connect)
                .onErrorContinue(throwable -> true,
                        (throwable, o) -> {
                            if (throwable instanceof ConnectException) {
                                logger.warn(throwable.getMessage());
                            } else {
                                logger.error("unexpected error occurred during websocket reconnect");
                                logger.error(throwable.getMessage());
                            }
                        })
                .doOnTerminate(() -> logger.error("websocket reconnect processor terminated"))
                .subscribe();
    }
}
When the WebSocketClient terminates, invoke UnicastProcessor.onNext:
public Mono<Void> connect() {
    WebSocketClient client = new ReactorNettyWebSocketClient();
    logger.info("trying to connect to sso server {}", uri.toString());
    return client.execute(uri, headers, ssoClientHandler)
            .doOnTerminate(() -> {
                logger.warn("sso server {} disconnect", uri.toString());
                super.reconnectProcessor.onNext(this);
            });
}
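A leaner alternative, as a hedged sketch (the class name and URL are illustrative): instead of republishing events, the connection Mono can simply be retried with Reactor's built-in Retry support, assuming you want to reconnect on any termination.

import java.net.URI;
import java.time.Duration;

import org.springframework.web.reactive.socket.WebSocketMessage;
import org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

class RetryingWsClient {

    // Re-executes the WebSocket session whenever it ends, backing off between attempts.
    Mono<Void> connectWithRetry() {
        ReactorNettyWebSocketClient client = new ReactorNettyWebSocketClient();
        return client.execute(URI.create("ws://localhost:8080/echo"),
                        session -> session.receive()
                                .map(WebSocketMessage::getPayloadAsText)
                                .then())
                // Treat normal completion as a reason to reconnect as well.
                .then(Mono.<Void>error(new IllegalStateException("connection closed")))
                .retryWhen(Retry.backoff(Long.MAX_VALUE, Duration.ofSeconds(2)));
    }
}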
I am new to the Spring Integration Java DSL. Currently, I am trying to add a delay between the message channels "ordersChannel" and "bookItemsChannel", but the flow continues as though there is no delay.
Any help appreciated.
Here is the code:
@Bean
public IntegrationFlow ordersFlow() {
    return IntegrationFlows.from("ordersChannel")
            .split(new AbstractMessageSplitter() {
                @Override
                protected Object splitMessage(Message<?> message) {
                    return ((Order) message.getPayload()).getOrderItems();
                }
            })
            .delay("normalMessage", new Consumer<DelayerEndpointSpec>() {
                public void accept(DelayerEndpointSpec spec) {
                    spec.id("delayChannel");
                    spec.defaultDelay(50000000);
                    System.out.println("Going to delay");
                }
            })
            .channel("bookItemsChannel")
            .get();
}
It seems to me that you have mixed up the init phase, when you see that System.out.println("Going to delay"), with the real runtime, when the delay happens for each incoming message.
We have some delay test cases in the DSL project, but I've just written this one to prove that defaultDelay works well:
@Bean
public IntegrationFlow ordersFlow() {
    return f -> f
            .split()
            .delay("normalMessage", (DelayerEndpointSpec e) -> e.defaultDelay(5000))
            .channel(c -> c.queue("bookItemsChannel"));
}

...

@Autowired
@Qualifier("ordersFlow.input")
private MessageChannel ordersFlowInput;

@Autowired
@Qualifier("bookItemsChannel")
private PollableChannel bookItemsChannel;

@Test
public void ordersDelayTests() {
    this.ordersFlowInput.send(new GenericMessage<>(new String[] {"foo", "bar", "baz"}));

    StopWatch stopWatch = new StopWatch();
    stopWatch.start();

    Message<?> receive = this.bookItemsChannel.receive(10000);
    assertNotNull(receive);

    receive = this.bookItemsChannel.receive(10000);
    assertNotNull(receive);

    receive = this.bookItemsChannel.receive(10000);
    assertNotNull(receive);

    stopWatch.stop();
    assertThat(stopWatch.getTotalTimeMillis(), greaterThanOrEqualTo(5000L));
}
As you can see, it is very close to your config, but it doesn't show that anything is wrong around .delay().
So it would be better if you provided something similar to confirm the unexpected problem.
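As a side note on the init-vs-runtime point above, a hedged sketch (the configuration class name is illustrative): the println inside the Consumer runs once while the flow bean is built, whereas a .log() step runs for every message, after the delay, which makes the delay observable at runtime.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.DelayerEndpointSpec;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Configuration
public class OrdersFlowConfig {

    // The println in the original Consumer runs once, while this bean is being built.
    // The .log() step, by contrast, runs for every message, after the delay.
    @Bean
    public IntegrationFlow ordersFlow() {
        return IntegrationFlows.from("ordersChannel")
                .split()
                .delay("normalMessage", (DelayerEndpointSpec e) -> e.defaultDelay(5000))
                .log()
                .channel("bookItemsChannel")
                .get();
    }
}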