Using WebSockets in Spring, how do I send multiple updating messages?

In the example given here https://spring.io/guides/gs/messaging-stomp-websocket/ , one receives data and then, a second later, returns another data structure. How would I amend this to send multiple data elements while (for example) the server is processing data?
@MessageMapping("/hello")
@SendTo("/topic/greetings")
public Greeting greeting(HelloMessage message) throws Exception {
    // I would like to send initial data here
    Thread.sleep(1000); // simulated delay
    return new Greeting("Hello, " + HtmlUtils.htmlEscape(message.getName()) + "!");
    // I would like to send more data here (after a bit more server-side processing)
}

(In Kotlin.) Argh, this does not work: it sends the updates to all clients (one possible direction is sketched after this snippet):
@Controller
public class GreetingController() {
    var template: SimpMessagingTemplate? = null

    @Autowired
    constructor(_template: SimpMessagingTemplate) : this() {
        this.template = _template
    }

    @MessageMapping("/hello")
    fun greet(greeting: String) {
        repeat(5) { i ->
            this.template!!.convertAndSend("/topic/greetings", Greeting("Iteration $i"))
            Thread.sleep(1000)
        }
    }
}
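As a hedged sketch of one possible direction (not from the original thread): if the interim updates should go only to the requesting client rather than to every subscriber of /topic/greetings, Spring's user destinations can be used. The sketch below assumes the STOMP session is authenticated so a Principal is available, that the broker is configured with enableSimpleBroker("/topic", "/queue"), and that the client subscribes to /user/queue/greetings; Greeting and HelloMessage are the types from the linked guide, and the class name is illustrative only.
import java.security.Principal;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Controller;
import org.springframework.web.util.HtmlUtils;

@Controller
public class StreamingGreetingController {

    private final SimpMessagingTemplate template;

    @Autowired
    public StreamingGreetingController(SimpMessagingTemplate template) {
        this.template = template;
    }

    @MessageMapping("/hello")
    public void greet(HelloMessage message, Principal principal) throws Exception {
        // Push several interim updates, each routed only to the requesting user's
        // /user/queue/greetings subscription instead of the shared /topic.
        for (int i = 0; i < 5; i++) {
            template.convertAndSendToUser(principal.getName(), "/queue/greetings",
                    new Greeting("Iteration " + i + " for " + HtmlUtils.htmlEscape(message.getName())));
            Thread.sleep(1000); // simulated server-side processing between updates
        }
    }
}
If blocking the inbound message thread with Thread.sleep is a concern, the same template calls can be made from a background thread or TaskScheduler; the key point is that SimpMessagingTemplate can be called as many times as needed and is not tied to a single return value.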

Related

Use Function to replyTo RPC request

I would like to use the java.util.function.Function approach to reply to a request sent via RabbitTemplate.convertSendAndReceive. It works fine with @RabbitListener, but I cannot get it working with the functional approach.
Client (working)
class Client(private val template: RabbitTemplate) {
    fun send() = template.convertSendAndReceive(
        "rpc-exchange",
        "rpc-routing-key",
        "payload message"
    )
}
Server (approach 1, working)
class Server {

    @RabbitListener(queues = ["rpc-queue"])
    fun receiveRequest(message: String) = "Response Message"

    @Bean
    fun queue(): Queue {
        return Queue("rpc-queue")
    }

    @Bean
    fun exchange(): DirectExchange {
        return DirectExchange("rpc-exchange")
    }

    @Bean
    fun binding(exchange: DirectExchange, queue: Queue): Binding {
        return BindingBuilder.bind(queue).to(exchange).with("rpc-routing-key")
    }
}
Server (approach 2, not working) --> goal
class Server {

    @Bean
    fun receiveRequest(): Function<String, String> {
        return Function { value: String ->
            "Response Message"
        }
    }
}
With the config (approach 2)
spring.cloud.function.definition: receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination: rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group: rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey: rpc-routing-key
With approach 2 the server receives the request. Unfortunately, the response is lost. Does anybody know how to use the RPC pattern with the functional approach? I don't want to use @RabbitListener.
See documentation/tutorial.
Spring Cloud Stream is not really designed for RPC on the server side, so it won't handle this automatically like @RabbitListener does.
You can, however, achieve it by adding an output binding that routes the reply to the default exchange, using the replyTo header as the routing key:
spring.cloud.function.definition: receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination: rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group: rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey: rpc-routing-key
spring.cloud.stream.bindings.receiveRequest-out-0.destination=
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression=headers['amqp_replyTo']
#logging.level.org.springframework.amqp=debug
@SpringBootApplication
public class So66586230Application {

    public static void main(String[] args) {
        SpringApplication.run(So66586230Application.class, args);
    }

    @Bean
    Function<String, String> receiveRequest() {
        return str -> {
            return str.toUpperCase();
        };
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template) {
        return args -> {
            System.out.println(new String((byte[]) template.convertSendAndReceive(
                    "rpc-exchange",
                    "rpc-routing-key",
                    "payload message")));
        };
    }
}
PAYLOAD MESSAGE
Note that the reply will come as a byte[]; you can use a custom message converter on the template to convert it to a String.
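As a hedged sketch of that last point (not part of the original answer): a converter installed on the client-side RabbitTemplate can turn the byte[] reply into a String, assuming the reply body is UTF-8 text. The StringReplyConfig class below is illustrative only and stands in for the runner bean shown above.
import java.nio.charset.StandardCharsets;

import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageProperties;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.amqp.support.converter.MessageConverter;
import org.springframework.boot.ApplicationRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StringReplyConfig {

    // Variant of the runner bean above: install a converter on the client-side
    // RabbitTemplate so replies come back as String instead of byte[].
    @Bean
    public ApplicationRunner runner(RabbitTemplate template) {
        template.setMessageConverter(new MessageConverter() {

            @Override
            public Message toMessage(Object object, MessageProperties messageProperties) {
                // Outbound payloads are plain text in this example.
                return new Message(object.toString().getBytes(StandardCharsets.UTF_8), messageProperties);
            }

            @Override
            public Object fromMessage(Message message) {
                // Assume the reply body is UTF-8 text, whatever its content type says.
                return new String(message.getBody(), StandardCharsets.UTF_8);
            }
        });
        return args -> System.out.println(template.convertSendAndReceive(
                "rpc-exchange", "rpc-routing-key", "payload message"));
    }
}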
EDIT
In reply to the third comment below.
The RabbitTemplate uses direct reply-to by default, so the reply address is not a real queue; it is a pseudo-queue created by the broker and associated with a consumer in the template.
You can also configure the template to use temporary reply queues, but those are also routed to via the default exchange ("").
You can, however, configure an external reply container, with the template as the listener.
You can then route back using whatever exchange and routing key you want.
Putting it all together:
spring.cloud.function.definition: receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination: rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group: rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey: rpc-routing-key
spring.cloud.stream.bindings.receiveRequest-out-0.destination=reply-exchange
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression='reply-routing-key'
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.declare-exchange=false
spring.rabbitmq.template.reply-timeout=10000
#logging.level.org.springframework.amqp=debug
@SpringBootApplication
public class So66586230Application {

    public static void main(String[] args) {
        SpringApplication.run(So66586230Application.class, args);
    }

    @Bean
    Function<String, String> receiveRequest() {
        return str -> {
            return str.toUpperCase();
        };
    }

    @Bean
    SimpleMessageListenerContainer replyContainer(SimpleRabbitListenerContainerFactory factory,
            RabbitTemplate template) {

        template.setReplyAddress("reply-queue");
        SimpleMessageListenerContainer container = factory.createListenerContainer();
        container.setQueueNames("reply-queue");
        container.setMessageListener(template);
        return container;
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template, SimpleMessageListenerContainer replyContainer) {
        return args -> {
            System.out.println(new String((byte[]) template.convertSendAndReceive(
                    "rpc-exchange",
                    "rpc-routing-key",
                    "payload message")));
        };
    }
}
IMPORTANT: if you have multiple instances of the client side, each needs its own reply queue.
In that case, the routing key must be the queue name and you should revert to the previous example to set the routing key expression (to get the queue name from the header).

Spring Cloud Stream - Consume Data on Demand manually?

Using Spring Cloud Stream, how can I start reading from a queue and stop reading on demand?
I want to do something like this:
@EnableBinding(Sink.class)
public class SomeConsumer {

    @StreamListener(target = Sink.INPUT)
    public void receiveMsg(Message<String> message) {
        logger.info(" received new message [" + message.toString() + "] ");
    }

    public static void startReceiving() {
        // How to implement this logic?
    }

    public static void stopReceiving() {
        // How to implement this logic?
    }
}
It can't be done in a static method; autowire the BindingsEndpoint and use the changeState() method.
See my answer to this question.
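A minimal sketch of what that can look like, assuming spring-boot-starter-actuator is on the classpath (which is what exposes the BindingsEndpoint bean) and that the binding is named "input" to match Sink.INPUT. The ConsumerControl class is illustrative only, and in some Spring Cloud Stream versions the State enum lives on BindingsLifecycleController rather than nested in BindingsEndpoint.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.endpoint.BindingsEndpoint;
import org.springframework.stereotype.Component;

@Component
public class ConsumerControl {

    @Autowired
    private BindingsEndpoint bindingsEndpoint;

    // Instance methods rather than static ones, because the endpoint
    // has to be autowired from the application context.
    public void startReceiving() {
        bindingsEndpoint.changeState("input", BindingsEndpoint.State.STARTED);
    }

    public void stopReceiving() {
        bindingsEndpoint.changeState("input", BindingsEndpoint.State.STOPPED);
    }
}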

Why isn't my socket's on-subscribe event getting used?

I am using Java Spring Boot with Maven in order to get the spring-boot-starter-websocket package. My clients are using Angular with StompJS and sockjs-client.
I am trying to set up a simple web socket application that allows for multiple rooms based on a roomId. When a client joins a room they should receive the last five messages sent in that room.
My Spring Boot app has three classes: the basic Application.java that I use to run the app, a WebSocket config class, and a WebSocket controller:
@Controller
public class WebSocketController {

    private final SimpMessagingTemplate template;

    @Autowired
    WebSocketController(SimpMessagingTemplate template) {
        this.template = template;
    }

    @MessageMapping("/meeting/{roomId}")
    private void sendMessageToPrivateRoom(
            String message,
            @DestinationVariable String roomId
    ) throws IOException {
        System.out.println("message sent to: " + roomId);
        this.template.convertAndSend("/meeting/" + roomId, message);
        addToHistory(roomId, message);
    }

    @SubscribeMapping("/meeting/{roomId}")
    public String chatInit(@DestinationVariable String roomId) {
        System.out.println("Someone joined room: " + roomId);
        return getLastFiveMessages(roomId);
    }
}
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfiguration
        extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/socket")
                .setAllowedOrigins("*")
                .withSockJS();
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.setApplicationDestinationPrefixes("/app")
                .enableSimpleBroker("/meeting");
    }
}
My clients are subscribing to the socket like so:
stompClient.subscribe(`app/meeting/${roomId}`, (message) => {
    if (message.body) {
        console.log(message.body);
        messages += '<br>' + message.body;
    }
});
and sending messages like so:
this.stompClient.send(`/app/meeting/${this.roomId}` , {}, message);
The message sending and handling is working great: when I set up three clients, two in room one and one in room two, the room two messages are not seen in room one, and the room one messages are seen by both of room one's clients.
However, the on-subscribe event is not firing no matter which room I join. It is essential that when a client joins a room, they receive some sort of history for that room. Any advice as to why my @SubscribeMapping method is not being triggered when a client subscribes to a room?
The /meeting part will be implicitly added to the URL you provide when subscribing. So your mapping will look like this:
@SubscribeMapping("/${roomId}")
public String chatInit(@DestinationVariable String roomId) {
    System.out.println("Someone joined room: " + roomId);
    return getLastFiveMessages(roomId);
}
Source: https://docs.spring.io/spring/docs/5.0.0.BUILD-SNAPSHOT/spring-framework-reference/html/websocket.html

Sending a Message with Spring Cloud Stream and RabbitMq changes ID

I'm using Spring Cloud Stream and RabbitMq to exchange Messages between different microservices.
That's my setup to publish a message:
public interface OutputChannels {

    static final String OUTPUT_CHANNEL = "outputChannel";

    @Output
    MessageChannel outputChannel();
}
.
@EnableBinding(OutputChannels.class)
@Log4j
public class OutputProducer {

    @Autowired
    private OutputChannels outputChannels;

    public void createMessage(MyContent myContent) {
        Message<MyContent> message = MessageBuilder
                .withPayload(myContent)
                .build();
        outputChannels.outputChannel().send(message);
        log.info("Sent message: " + message.getHeaders().getId() + myContent);
    }
}
And the setup to receive the message:
public interface InputChannels {

    String INPUT_CHANNEL = "inputChannel";

    @Input
    SubscribableChannel inputChannel();
}
.
@EnableBinding(InputChannels.class)
@Log
public class InputConsumer {

    @StreamListener(InputChannels.INPUT_CHANNEL)
    public void receive(Message<MyContent> message) {
        MyContent myContent = message.getPayload();
        log.info("Received message: " + message.getHeaders().getId() + ", " + myContent);
    }
}
I am able to successfully exchange messages with this setup. I would expect that the IDs of the sent message and the received message are equal, but they are always different UUIDs.
Is there a way that the message keeps the same ID all the way from the producer, through the RabbitMq, to the consumer?
Spring Messaging messages are immutable; they get a new ID each time they are mutated.
You can use a custom header or IntegrationMessageHeaderAccessor.CORRELATION_ID to convey a constant value; in most use cases, the correlation id header is set by the application to the ID header at the start of a message's journey.
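As a hedged illustration of that suggestion (not from the original answer), the producer from the question could stamp its own correlation id into the headers before sending; OutputChannels and MyContent are the question's own types, the CorrelatedProducer class name is illustrative, and the header value should survive the trip since the Rabbit binder maps user headers by default.
import java.util.UUID;

import org.springframework.integration.IntegrationMessageHeaderAccessor;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public class CorrelatedProducer {

    private final OutputChannels outputChannels;

    public CorrelatedProducer(OutputChannels outputChannels) {
        this.outputChannels = outputChannels;
    }

    public void createMessage(MyContent myContent) {
        // Stamp a correlation id once, at the start of the message's journey.
        // Unlike the auto-generated ID header, this value is not regenerated
        // when the message is mutated along the way.
        String correlationId = UUID.randomUUID().toString();
        Message<MyContent> message = MessageBuilder
                .withPayload(myContent)
                .setHeader(IntegrationMessageHeaderAccessor.CORRELATION_ID, correlationId)
                .build();
        outputChannels.outputChannel().send(message);
    }
}
On the consuming side, message.getHeaders().get(IntegrationMessageHeaderAccessor.CORRELATION_ID) then yields the same value for the sent and received message.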

Logging process in Spring Integration without using xml files

My goal here is to log the duration of a process without using XML files for configuration. By reading other posts, I came up with enriching headers in the integration flow. This kind of works, but not as intended: for every newly started process it gives me a startTime from when the application was launched (i.e. a constant). See below:
@Bean
public IntegrationFlow processFileFlow() {
    return IntegrationFlows
            .from(FILE_CHANNEL_PROCESSING)
            .transform(fileToStringTransformer())
            .enrichHeaders(h -> h.header("startTime", String.valueOf(System.currentTimeMillis())))
            .handle(FILE_PROCESSOR, "processFile").get();
}
My goal is to properly log the process without using XML files, like I said above, but I can't manage to do this. I found an example and tried a solution with a ChannelInterceptorAdapter like this:
@Component(value = "integrationLoggingInterceptor")
public class IntegrationLoggingInterceptor extends ChannelInterceptorAdapter {

    private static final Logger log = LoggerFactory.getLogger(IntegrationLoggingInterceptor.class);

    @Override
    public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
        log.debug("Post Send - Channel " + channel.getClass());
        log.debug("Post Send - Headers: " + message.getHeaders() + " Payload: " + message.getPayload() + " Message sent?: " + sent);
    }

    @Override
    public Message<?> postReceive(Message<?> message, MessageChannel channel) {
        try {
            log.debug("Post Receive - Channel " + channel.getClass());
            log.debug("Post Receive - Headers: " + message.getHeaders() + " Payload: " + message.getPayload());
        } catch (Exception ex) {
            log.error("Error in post receive : ", ex);
        }
        return message;
    }
}
But I receive no logs at all. Any ideas?
The .enrichHeaders(h -> h.header("startTime", String.valueOf(System.currentTimeMillis()))) comes down to this:
public <V> HeaderEnricherSpec header(String name, V value, Boolean overwrite) {
    AbstractHeaderValueMessageProcessor<V> headerValueMessageProcessor =
            new StaticHeaderValueMessageProcessor<>(value);
    headerValueMessageProcessor.setOverwrite(overwrite);
    return header(name, headerValueMessageProcessor);
}
Pay attention to the StaticHeaderValueMessageProcessor. So, what you show is really a constant.
If you need the value calculated for each message being processed, you should consider using the Function-based variant:
.enrichHeaders(h ->
        h.headerFunction("startTime",
                m -> String.valueOf(System.currentTimeMillis())))
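Put back into the flow from the question, that could look like the sketch below. FILE_CHANNEL_PROCESSING, FILE_PROCESSOR and fileToStringTransformer() are the question's own placeholders; the extra handle step at the end is only an illustration of reading the header back, and assumes processFile returns a payload that continues down the flow.
@Bean
public IntegrationFlow processFileFlow() {
    return IntegrationFlows
            .from(FILE_CHANNEL_PROCESSING)
            .transform(fileToStringTransformer())
            // headerFunction is evaluated per message, so each file gets its own start time
            .enrichHeaders(h -> h.headerFunction("startTime",
                    m -> String.valueOf(System.currentTimeMillis())))
            .handle(FILE_PROCESSOR, "processFile")
            // read the per-message header back and log the elapsed time
            .handle((payload, headers) -> {
                long startTime = Long.parseLong((String) headers.get("startTime"));
                LoggerFactory.getLogger("processTiming")
                        .info("Processing took " + (System.currentTimeMillis() - startTime) + " ms");
                return payload;
            })
            .get();
}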
