I have two endpoints:
@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("/waitForEvent")
public Uni<Object> waitForEvent() {
return Uni.createFrom().emitter(em -> {
//wait for event from eventBus
// eventBus.consumer("test", msg -> {
// System.out.printf("receive event: %s\n", msg.body());
// em.complete(msg);
// });
}).ifNoItem().after(Duration.ofSeconds(5)).failWith(new RuntimeException("timeout"));
}
@GET
@Path("/send")
public void test() {
System.out.println("send event");
eventBus.send("test", "send test event");
}
waitForEvent() should only complete once it receives the event from the eventBus. How can I achieve this with Vert.x and Mutiny?
In general, we avoid that kind of pattern and use the request/reply mechanism from the event bus:
@GET
@Path("/send")
public Uni<String> test() {
    return bus.<String>request("test", "send test event")
        .onItem().transform(Message::body)
        .ifNoItem().after(Duration.ofSeconds(5)).failWith(new RuntimeException("timeout"));
}
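On the receiving side of this request/reply pattern, the consumer simply replies to the message. A minimal sketch (the registration point, e.g. a method observing Quarkus' StartupEvent, is an assumption and not part of the original answer):

// Register once at startup; "bus" is the injected io.vertx.mutiny.core.eventbus.EventBus.
void init(@Observes StartupEvent ev) {
    bus.consumer("test").handler(msg -> msg.reply("hello " + msg.body()));
}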
When implementing this with two endpoints (as in the question), it becomes a bit more complicated: if there are multiple concurrent calls to the /waitForEvent endpoint, you need to make sure that every "consumer" gets the message.
It is still possible, but you will need something like this:
@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("/waitForEvent")
public Uni<String> waitForEvent() {
    return Uni.createFrom().<String>emitter(emitter -> {
        MessageConsumer<String> consumer = bus.consumer("test");
        consumer.handler(m -> {
            emitter.complete(m.body());
            consumer.unregisterAndForget();
        });
    })
    .ifNoItem().after(Duration.ofSeconds(5)).failWith(new RuntimeException("timeout"));
}
@GET
@Path("/send")
public void test() {
bus.publish("test", "send test event");
}
Be sure to use the io.vertx.mutiny.core.eventbus.EventBus variant of the event bus.
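For example, a minimal sketch of injecting it, assuming a Quarkus/CDI application (the framework is an assumption based on the question):

// The Mutiny (reactive) event bus, not io.vertx.core.eventbus.EventBus
@Inject
io.vertx.mutiny.core.eventbus.EventBus bus;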
The standard Java Aerospike client's methods have overloads that allow providing an EventLoop as an argument. When running in Vert.x, the client is not aware of the context-bound event loop and simply falls back to if (eventLoop == null) { eventLoop = cluster.eventLoops.next(); }, which could (and likely does) cause context switching and extra concurrency, which in turn affects performance (still a theory, but I want to prove it), because there is no guarantee that Aerospike requests will run on the same event loop as the incoming HTTP request, per the Vert.x Multi-Reactor pattern. Open-source Aerospike clients like vertx-aerospike-client have the same disadvantage. Using Vert.x there is no way (at least none I'm aware of) to retrieve the context-bound event loop and pass it to the Aerospike client.
Vert.x has a method to retrieve the current Context (Vertx.currentContext()), but retrieving its EventLoop is not available.
Any ideas?
In the end I built this:
public class ContextEventLoop {

    private final NettyEventLoops eventLoops;

    public ContextEventLoop(final NettyEventLoops eventLoops) {
        this.eventLoops = Objects.requireNonNull(eventLoops, "eventLoops");
    }

    // Returns the Aerospike event loop wrapping the Netty event loop of the current
    // Vert.x context, or falls back to round-robin when not on an event-loop context.
    public EventLoop resolve() {
        final ContextInternal ctx = ContextInternal.current();
        final EventLoop eventLoop;
        if (ctx != null
                && ctx.isEventLoopContext()
                && (eventLoop = eventLoops.get(ctx.nettyEventLoop())) != null) {
            return eventLoop;
        }
        return eventLoops.next();
    }
}
@NotNull
public EventLoops wrap(final EventLoops fallback,
                       final Supplier<@NotNull EventLoop> next) {
    return new EventLoops() {
        @Override
        public EventLoop[] getArray() {
            return fallback.getArray();
        }

        @Override
        public int getSize() {
            return fallback.getSize();
        }

        @Override
        public EventLoop get(int index) {
            return fallback.get(index);
        }

        @Override
        public EventLoop next() {
            return next.get();
        }

        @Override
        public void close() {
            fallback.close();
        }
    };
}
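For illustration, a rough usage sketch (the client, listener, and key are assumptions, not part of the original code): the resolved event loop is handed to the Aerospike client's async overloads so the request completes on the same event loop as the current Vert.x context.

// Run the Aerospike call on the event loop bound to the current Vert.x context.
EventLoop eventLoop = contextEventLoop.resolve();
client.get(eventLoop, new RecordListener() {
    @Override
    public void onSuccess(Key key, Record record) {
        // handle the record, still on the caller's event loop
    }

    @Override
    public void onFailure(AerospikeException e) {
        // handle the error
    }
}, null, new Key("test", "demo", "some-id"));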
I have the following code.
@Incoming("my-topic")
void process(String someEvent) {
String someResponse = assuminglyRealFastReactiveClientCall();
}
The above code throws a blocking-thread exception, which is corrected with @Blocking:
@Incoming("my-topic")
@Blocking
void process(String someEvent) {
String someResponse = assuminglyRealFastReactiveClientCall();
}
If I switch String assuminglyRealFastReactiveClientCall() to Uni<String> assuminglyRealFastReactiveClientCall(),
I'm guessing the consumer method has to switch to the manual acknowledgement strategy, and the message needs to be acked/nacked based on the result of the subscription, like so?
@Incoming("my-topic")
void process(Message<String> someEvent) {
    assuminglyRealFastReactiveClientCall()
        .subscribe().with(s -> {
            System.out.println("Response: " + s);
            someEvent.ack();
        }, t -> someEvent.nack(t));
}
@Incoming("my-topic")
Uni<Void> process(Message<String> someEvent) {
    return assuminglyRealFastReactiveClientCall()
        .invoke(this::handleResponse)
        .chain(response -> Uni.createFrom().completionStage(someEvent.ack()));
}

private void handleResponse(String response) {
    // Do something with the response
}
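If manual acknowledgement is not actually needed, a simpler variant is to consume the payload and return a Uni<Void>; this is a sketch relying on SmallRye's default post-processing acknowledgement for payload-consuming methods:

@Incoming("my-topic")
Uni<Void> process(String someEvent) {
    // The message is acked when the returned Uni completes, and nacked if it fails.
    return assuminglyRealFastReactiveClientCall()
        .invoke(this::handleResponse)
        .replaceWithVoid();
}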
The Consuming Messages section of the SmallRye Reactive Messaging documentation has many more examples.
I would like to use the java.util.function.Function approach to reply to a request sent via RabbitTemplate.convertSendAndReceive. It works fine with the RabbitListener, but I cannot get it working with the functional approach.
Client (working)
class Client(private val template: RabbitTemplate) {
fun send() = template.convertSendAndReceive(
"rpc-exchange",
"rpc-routing-key",
"payload message"
)
}
Server (approach 1, working)
class Server {

    @RabbitListener(queues = ["rpc-queue"])
    fun receiveRequest(message: String) = "Response Message"

    @Bean
    fun queue(): Queue {
        return Queue("rpc-queue")
    }

    @Bean
    fun exchange(): DirectExchange {
        return DirectExchange("rpc-exchange")
    }

    @Bean
    fun binding(exchange: DirectExchange, queue: Queue): Binding {
        return BindingBuilder.bind(queue).to(exchange).with("rpc-routing-key")
    }
}
Server (approach 2, not working) --> goal
class Server {

    @Bean
    fun receiveRequest(): Function<String, String> {
        return Function { value: String ->
            "Response Message"
        }
    }
}
With the config (approach 2)
spring.cloud.function.definition: receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination: rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group: rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey: rpc-routing-key
With approach 2 the server receives the request, but unfortunately the response is lost. Does anybody know how to use the RPC pattern with the functional approach? I don't want to use the RabbitListener.
See documentation/tutorial.
Spring Cloud Stream is not really designed for RPC on the server side, so it won't handle this automatically like #RabbitListener does.
You can, however, achieve it by adding an output binding to route the reply to the default exchange and the replyTo header:
spring.cloud.function.definition: receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination: rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group: rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey: rpc-routing-key
spring.cloud.stream.bindings.receiveRequest-out-0.destination=
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression=headers['amqp_replyTo']
#logging.level.org.springframework.amqp=debug
@SpringBootApplication
public class So66586230Application {

    public static void main(String[] args) {
        SpringApplication.run(So66586230Application.class, args);
    }

    @Bean
    Function<String, String> receiveRequest() {
        return str -> {
            return str.toUpperCase();
        };
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template) {
        return args -> {
            System.out.println(new String((byte[]) template.convertSendAndReceive(
                    "rpc-exchange",
                    "rpc-routing-key",
                    "payload message")));
        };
    }
}
PAYLOAD MESSAGE
Note that the reply will come back as a byte[]; you can use a custom message converter on the template to convert it to a String.
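For example, a minimal sketch of such a converter (an assumption, not part of the original answer):

// Convert byte[] reply payloads to String; defer everything else to the default behaviour.
template.setMessageConverter(new SimpleMessageConverter() {
    @Override
    public Object fromMessage(org.springframework.amqp.core.Message message) {
        Object payload = super.fromMessage(message);
        return payload instanceof byte[]
                ? new String((byte[]) payload, java.nio.charset.StandardCharsets.UTF_8)
                : payload;
    }
});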
EDIT
In reply to the third comment below.
The RabbitTemplate uses direct reply-to by default, so the reply address is not a real queue; it is a pseudo queue created by the broker and associated with a consumer in the template.
You can also configure the template to use temporary reply queues, but they are also routed to by the default exchange "".
You can, however, configure an external reply container, with the template as the listener.
You can then route back using whatever exchange and routing key you want.
Putting it all together:
spring.cloud.function.definition: receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination: rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group: rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey: rpc-routing-key
spring.cloud.stream.bindings.receiveRequest-out-0.destination=reply-exchange
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression='reply-routing-key'
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.declare-exchange=false
spring.rabbitmq.template.reply-timeout=10000
#logging.level.org.springframework.amqp=debug
@SpringBootApplication
public class So66586230Application {

    public static void main(String[] args) {
        SpringApplication.run(So66586230Application.class, args);
    }

    @Bean
    Function<String, String> receiveRequest() {
        return str -> {
            return str.toUpperCase();
        };
    }

    @Bean
    SimpleMessageListenerContainer replyContainer(SimpleRabbitListenerContainerFactory factory,
            RabbitTemplate template) {
        template.setReplyAddress("reply-queue");
        SimpleMessageListenerContainer container = factory.createListenerContainer();
        container.setQueueNames("reply-queue");
        container.setMessageListener(template);
        return container;
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template, SimpleMessageListenerContainer replyContainer) {
        return args -> {
            System.out.println(new String((byte[]) template.convertSendAndReceive(
                    "rpc-exchange",
                    "rpc-routing-key",
                    "payload message")));
        };
    }
}
IMPORTANT: if you have multiple instances of the client side, each needs its own reply queue.
In that case, the routing key must be the queue name and you should revert to the previous example to set the routing key expression (to get the queue name from the header).
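As a rough sketch (the exact property values are assumptions), that means reverting to header-based routing via the default exchange, with each client instance setting its own reply address, e.g. template.setReplyAddress("reply-queue-instance-1"):

spring.cloud.stream.bindings.receiveRequest-out-0.destination=
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression=headers['amqp_replyTo']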
Thanks ahead of time for reading. In my main configuration I have a PublishSubscribeChannel:
@Bean(name = "feeSchedule")
public SubscribableChannel getMessageChannel() {
    return new PublishSubscribeChannel();
}
In a service that runs a long process and creates a fee schedule, I inject the channel:
@Service
public class FeeScheduleCompareServiceImpl implements FeeScheduleCompareService {

    @Autowired
    MessageChannel outChannel;

    public List<FeeScheduleUpdate> compareFeeSchedules(String oldStudyId) {
        List<FeeScheduleUpdate> sortedResultList = longMethod(oldStudyId);
        outChannel.send(MessageBuilder.withPayload(sortedResultList).build());
        return sortedResultList;
    }
}
Now this is the part I'm struggling with. I want to use a CompletableFuture and get the payload of the event in future A, in another Spring bean. I need future A to return the payload from the message. I think I want to create a ServiceActivator to be the message endpoint, but, as I said, I need it to return the payload for future A.
@org.springframework.stereotype.Service
public class SFCCCompareServiceImpl implements SFCCCompareService {

    @Autowired
    private SubscribableChannel outChannel;

    @Override
    public List<SFCCCompareDTO> compareSFCC(String state, int service) {
        ArrayList<SFCCCompareDTO> returnList = new ArrayList<SFCCCompareDTO>();

        CompletableFuture<List<FeeScheduleUpdate>> fa = CompletableFuture.supplyAsync(() -> {
            // block A WHAT GOES HERE?!?!
            outChannel.subscribe()
        });

        CompletableFuture<List<StateFeeCodeClassification>> fb = CompletableFuture.supplyAsync(() -> {
            return this.stateFeeCodeClassificationRepository.findAll();
        });

        CompletableFuture<List<SFCCCompareDTO>> fc = fa.thenCombine(fb, (a, b) -> {
            // block C
            // gets here when both A & B are complete
            b.stream().forEach(new Consumer<StateFeeCodeClassification>() {
                @Override
                public void accept(StateFeeCodeClassification stateFeeCodeClassification) {
                    a.stream().forEach(new Consumer<FeeScheduleUpdate>() {
                        @Override
                        public void accept(FeeScheduleUpdate feeScheduleUpdate) {
                            returnList.add(new SFCCCompareDTO());
                        }
                    });
                }
            });
            return returnList;
        });

        fc.join();
        return returnList;
    }
}
I was thinking there would be a service activator like:
@MessageEndpoint
public class UpdatesHandler implements MessageHandler {

    @ServiceActivator(requiresReply = "true")
    public List<FeeScheduleUpdate> getUpdates(Message m) {
        return (List<FeeScheduleUpdate>) m.getPayload();
    }
}
Your question isn't clear, but I'll try to help you with some info.
Spring Integration doesn't provide CompletableFuture support, but it does provide async handling and replies.
See Asynchronous Gateway for more information. And also see Asynchronous Service Activator.
By the way, outChannel.subscribe() must be given a MessageHandler callback.
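For illustration, a rough sketch (not from the original answer) of what "block A" could look like: subscribe a MessageHandler that completes a CompletableFuture with the payload, then unsubscribe once it is done.

// Complete a future from the channel instead of using supplyAsync.
CompletableFuture<List<FeeScheduleUpdate>> fa = new CompletableFuture<>();
MessageHandler handler = message -> {
    @SuppressWarnings("unchecked")
    List<FeeScheduleUpdate> payload = (List<FeeScheduleUpdate>) message.getPayload();
    fa.complete(payload);
};
outChannel.subscribe(handler);
// Stop listening once the payload (or an error) has arrived.
fa.whenComplete((result, error) -> outChannel.unsubscribe(handler));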
Background Story:
I am developing a GWT application, using the standard MVP design pattern, and also using RPC to get data from my custom data-handling servlet (it does a lot behind the scenes). Anyway, my goal is to create a very simple custom caching mechanism that stores the data returned from the RPC callback in a static cache POJO. (The callback also sends a custom event using the SimpleEventBus to all registered handlers.) Then when I request the data again, I check the cache before doing the RPC server call again (and also send a custom event using the EventBus).
The Problem:
When I send the event from the RPC callback, everything works fine. The problem is when I send the event outside the RPC callback, i.e. when I just send the cached object. For some reason this event doesn't make it to my registered handler. Here is some code:
public void callServer(final Object source)
{
    if (cachedResponse != null)
    {
        System.err.println("Getting Response from Cache for: " + source.getClass().getName());
        // Does this actually fire the event?
        eventBus.fireEventFromSource(new ResponseEvent(cachedResponse), source);
    }
    else
    {
        System.err.println("Getting Response from Server for: " + source.getClass().getName());
        service.callServer(new AsyncCallback<String>() {
            @Override
            public void onFailure(Throwable caught) {
                System.err.println("RPC Call Failed.");
            }

            @Override
            public void onSuccess(String result) {
                cachedResponse = result;
                eventBus.fireEventFromSource(new ResponseEvent(cachedResponse), source);
            }
        });
    }
}
Now I have two Activities, HelloActivity and GoodbyeActivity (taken from: GWT MVP code)
They also print messages when their handlers are called. Anyway, this is the output I get in the logs (not correct):
Getting Response from Cache for: com.hellomvp.client.activity.HelloActivity
Response in GoodbyeActivity from: com.hellomvp.client.activity.HelloActivity
Getting Response from Cache for: com.hellomvp.client.activity.GoodbyeActivity
Response in HelloActivity from: com.hellomvp.client.activity.GoodbyeActivity
What I expect to get is this:
Getting Response from Cache for: com.hellomvp.client.activity.HelloActivity
Response in HelloActivity from: com.hellomvp.client.activity.HelloActivity
Getting Response from Cache for: com.hellomvp.client.activity.GoodbyeActivity
Response in GoodbyeActivity from: com.hellomvp.client.activity.GoodbyeActivity
And I do get the expected output if I change the above code to the following (this is the entire file this time):
package com.hellomvp.client;

import com.google.gwt.core.client.GWT;
import com.google.gwt.event.shared.EventBus;
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.hellomvp.events.ResponseEvent;

public class RequestManager {

    private EventBus eventBus;
    private String cachedResponse;
    private HelloServiceAsync service = GWT.create(HelloService.class);

    public RequestManager(EventBus eventBus)
    {
        this.eventBus = eventBus;
    }

    public void callServer(final Object source)
    {
        if (cachedResponse != null)
        {
            System.err.println("Getting Response from Cache for: " + source.getClass().getName());
            service.doNothing(new AsyncCallback<Void>() {
                @Override
                public void onFailure(Throwable caught) {
                    System.err.println("RPC Call Failed.");
                }

                @Override
                public void onSuccess(Void result) {
                    eventBus.fireEventFromSource(new ResponseEvent(cachedResponse), source);
                }
            });
        }
        else
        {
            System.err.println("Getting Response from Server for: " + source.getClass().getName());
            service.callServer(new AsyncCallback<String>() {
                @Override
                public void onFailure(Throwable caught) {
                    System.err.println("RPC Call Failed.");
                }

                @Override
                public void onSuccess(String result) {
                    cachedResponse = result;
                    eventBus.fireEventFromSource(new ResponseEvent(cachedResponse), source);
                }
            });
        }
    }
}
So to point it out: the only change is that I created a new RPC call that does nothing and sent the event from its callback (with the cached data instead), and that makes the application work as expected.
So the Question:
What am I doing wrong? I don't understand why eventBus.fireEvent(...) needs to be in an RPC callback to work properly. I'm thinking this is a threading issue, but I have searched Google in vain for anything that would help.
I have an entire Eclipse project that showcases this issue that I'm having, it can be found at: Eclipse Problem Project Example
Edit: Please note that eventBus.fireEventFromSource(...) is only being used for debugging purposes, since in my actual GWT application I have more than one registered handler for the events. So how do you use the EventBus properly?
If I understand your problem correctly you are expecting calls to SimpleEventBus#fireEventFromSource to be routed only to the source object. This is not the case - the event bus will always fire events to all registered handlers. In general the goal of using an EventBus is to decouple the sources of events from their handlers - basing functionality on the source of an event runs counter to this goal.
To get the behavior you want pass an AsyncCallback to your caching RPC client instead of trying to use the EventBus concept in a way other than intended. This has the added benefit of alerting the Activity in question when the RPC call fails:
public class RequestManager {

    private String cachedResponse = null;
    private HelloServiceAsync service = GWT.create(HelloService.class);

    public void callServer(final AsyncCallback<String> callback) {
        if (cachedResponse != null) {
            callback.onSuccess(cachedResponse);
        } else {
            service.callServer(new AsyncCallback<String>() {
                @Override
                public void onFailure(Throwable caught) {
                    callback.onFailure(caught);
                }

                @Override
                public void onSuccess(String result) {
                    cachedResponse = result;
                    callback.onSuccess(cachedResponse);
                }
            });
        }
    }
}
And in the Activity:
clientFactory.getRequestManager().callServer(new AsyncCallback<String>() {
    @Override
    public void onFailure(Throwable caught) {
        // Handle failure.
    }

    @Override
    public void onSuccess(String result) {
        helloView.showResponse(result);
    }
});