Is there a concept of reverse controller for WebSockets?

I am implementing a WebSocket server in a small microservice, and I wonder if conceptually there is such a thing as a Reverse Controller?
I am using Spring WebSocket, where you can inject a SimpMessagingTemplate to push anything to the message broker. However, it seems messy that people can just inject a SimpMessagingTemplate and push data from anywhere in the code.
So to create some order, I am thinking of defining some "Reverse Controllers" that contain nothing but the calls that push data to the broker. I hope this will provide well-defined exit points for outgoing data.
As always, anything you think is an original thought most likely is not, so I wonder whether there are established patterns for managing this case.

OK, I think I found the answer. In the Spring documentation, outgoing WebSocket messages to a broker are defined inside normal @Controller or @RestController annotated classes.
So methods annotated with @SendTo or @SendToUser, or containing lines like:
this.simpMessagingTemplate.convertAndSend("/topic/mytopica", data);
are defined inside those classes. So it seems there is no separate concept of a reverse controller for outgoing messages to a broker; you just use regular controllers.
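For illustration, here is a minimal sketch showing both styles in one @Controller (the class name and the /hello destination are hypothetical):

import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Controller;

@Controller
public class OutboundMessageController {

    private final SimpMessagingTemplate simpMessagingTemplate;

    public OutboundMessageController(SimpMessagingTemplate simpMessagingTemplate) {
        this.simpMessagingTemplate = simpMessagingTemplate;
    }

    // Style 1: the return value is routed to the @SendTo destination.
    @MessageMapping("/hello")
    @SendTo("/topic/mytopica")
    public String handleHello(String message) {
        return "Echo: " + message;
    }

    // Style 2: push programmatically, but only from this well-defined place.
    public void pushUpdate(Object data) {
        this.simpMessagingTemplate.convertAndSend("/topic/mytopica", data);
    }
}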

Related

What’s the difference between AbstractMessageSource and MessageProducerSupport in Spring Integration?

When developing inbound channel adapters, I couldn't find anything that explains the differences between AbstractMessageSource and MessageProducerSupport in Spring Integration. I'm asking in the context of reactive streams, so I'm actually looking at AbstractReactiveMessageSource, but I guess that doesn't matter for my question. I also wonder whether MessageProducerSupport supports Project Reactor, since it doesn't seem to have an equivalent of AbstractReactiveMessageSource.
There is some documentation about these types of components: https://docs.spring.io/spring-integration/docs/current/reference/html/overview.html#finding-class-names-for-java-and-dsl-configuration
The inbound message flow side has its own components, which are divided into polling and listening behaviors.
So, the MessageProducerSupport is for those protocols which provide a listening callback for us. We can hook into that callback, build a message, and produce it into the channel provided by the MessageProducer. It is a self-contained, event-driven component that has everything it needs to listen to the source system and produce messages from the callback. Channel adapters of this type are called event-driven; examples include JMS, AMQP, HTTP, IMAP, Kinesis, etc.
Given that, it is wrong to try to compare MessageProducerSupport with AbstractMessageSource, because they are not comparable. The one you should look at is the SourcePollingChannelAdapter. That is the kind of flow-starting endpoint which is similar to MessageProducerSupport, except that it is based on a periodically scheduled task that requests messages from the provided MessageSource. This type of component is for protocols that don't provide a listening callback, e.g. the local file system, (S)FTP, JDBC, MongoDB, POP3, S3, etc.
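For contrast, a minimal sketch of the polling side (the channel name, delay, and helper method are hypothetical): a MessageSource whose receive() is invoked on a schedule by the SourcePollingChannelAdapter that @InboundChannelAdapter registers.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.support.MessageBuilder;

@Configuration
@EnableIntegration
public class PollingAdapterConfig {

    @Bean
    @InboundChannelAdapter(channel = "inputChannel", poller = @Poller(fixedDelay = "5000"))
    public MessageSource<String> sourceSystemMessageSource() {
        // receive() is called every 5 seconds; returning null would mean
        // "nothing to emit on this poll".
        return () -> MessageBuilder.withPayload(requestFromSourceSystem()).build();
    }

    private String requestFromSourceSystem() {
        return "data"; // placeholder for a real request to the source system
    }
}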
You would probably expect something similar to MessageSource at the MessageProducer level, but there is no such layer, because every event-driven protocol has its own specifics; we cannot extract a common abstraction the way we can for polling protocols.
If your source system provides you with a reactive Publisher, you don't need to look at SourcePollingChannelAdapter and MessageSource at all. You just need a MessageProducerSupport and a call to its subscribeToPublisher(Publisher<? extends Message<?>> publisher) from the start() implementation.
There is no need for a reactive implementation on the polling side, since a Publisher is not pollable by itself; it is event-driven. It does have its own back-pressure specifics, but those are out of MessageProducerSupport's scope.
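For illustration, a minimal sketch of that reactive case (the class name and event source are hypothetical; doStart() is the lifecycle hook invoked from start()):

import org.reactivestreams.Publisher;
import org.springframework.integration.endpoint.MessageProducerSupport;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;
import reactor.core.publisher.Flux;

public class ReactiveSourceMessageProducer extends MessageProducerSupport {

    private final Flux<String> sourceEvents; // events pushed by the source system

    public ReactiveSourceMessageProducer(Flux<String> sourceEvents) {
        this.sourceEvents = sourceEvents;
    }

    @Override
    protected void doStart() {
        // Wrap each source event in a Message and hand the Publisher to the
        // framework; it subscribes and sends each message to the output channel.
        Publisher<Message<String>> messages =
                this.sourceEvents.map(event -> MessageBuilder.withPayload(event).build());
        subscribeToPublisher(messages);
    }
}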
There is also some explanation in this section of the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/reactive-streams.html#source-polling-channel-adapter. See also the next couple of paragraphs there.

Get the stomp client used internally by Spring Broker Relay

I'm trying to set up a broker relay in Spring with RabbitMQ as the broker. Things work as intended when all events originate from my browser; however, sometimes I have events generated dynamically on the server side. I want to send these to RabbitMQ too, to take advantage of things like durable topics or TTL for messages. As far as I understand, SimpMessagingTemplate.convertAndSend() and convertAndSendToUser() both end up sending the event to the browser instead of the broker.
As of now, I'm trying to create a new STOMP client to RabbitMQ and send events through that, but I can't help feeling it's a bit hacky. Is there a way to get hold of the STOMP client used by Spring and forward my messages easily? Or am I missing something here?
Any help is appreciated, thanks!
It took a while, but it turns out you don't need to get hold of the internal STOMP client (it's actually an internal TcpClient from Reactor Netty) or anything like that. These are the steps to follow when you want a little bit of customization:
Spring uses @EnableWebSocketMessageBroker to configure the broker, or you can extend DelegatingWebSocketMessageBrokerConfiguration. I ended up extending it, though it makes little difference.
In configureMessageBroker(MessageBrokerRegistry registry), use the registry to enable the STOMP relay. Then the important part: add ChannelInterceptors (for example via configureClientInboundChannel). In an interceptor you can get the STOMP command and process the message as required; the idea is identical to Spring's servlet interceptors. Add the headers you need there:
final StompHeaderAccessor headerAccessor = StompHeaderAccessor.wrap(message);
StompCommand command = headerAccessor.getCommand();
Then, finally, recreate the message for sending:
MessageBuilder.createMessage(new byte[0], headerAccessor.getMessageHeaders());
Lastly, you can verify things are working by watching the RabbitMQ management console to confirm that messages are actually being sent.
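Putting those steps together, a minimal sketch of the configuration (the endpoint path, destinations, and custom header are hypothetical; the relay port assumes RabbitMQ's STOMP plugin default):

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.simp.config.ChannelRegistration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.messaging.simp.stomp.StompCommand;
import org.springframework.messaging.simp.stomp.StompHeaderAccessor;
import org.springframework.messaging.support.ChannelInterceptor;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class BrokerRelayConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws");
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("localhost")
                .setRelayPort(61613); // RabbitMQ STOMP plugin default port
    }

    @Override
    public void configureClientInboundChannel(ChannelRegistration registration) {
        registration.interceptors(new ChannelInterceptor() {
            @Override
            public Message<?> preSend(Message<?> message, MessageChannel channel) {
                StompHeaderAccessor headerAccessor = StompHeaderAccessor.wrap(message);
                StompCommand command = headerAccessor.getCommand();
                if (StompCommand.SEND.equals(command)) {
                    // Add whatever headers the broker side needs, then rebuild.
                    headerAccessor.setNativeHeader("x-custom-header", "value");
                    return MessageBuilder.createMessage(message.getPayload(),
                            headerAccessor.getMessageHeaders());
                }
                return message;
            }
        });
    }
}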

Development compromises in using Spring Cloud Stream

The case for event-driven microservices such as Spring Cloud Stream is their asynchronous nature, which I agree makes them more scalable.
But I have an issue with how to code it in a way where I don't lose certain key features that I have access to in synchronous services.
In a servlet-based microservice, I make full use of servlet context variables and servlet-based Spring autowiring functions.
For example, I rely heavily on HTTP headers to carry metadata between microservices without having to touch the payload. But in Spring Cloud Stream using Kafka, Kafka doesn't support message headers of any kind! I lose that immediately if I use SCS. Putting the metadata into the payload causes all sorts of changes in my model classes if I define the attributes explicitly. Yes, I can use a simple HashMap to simulate the HTTP header object, but that really seems like reinventing the wheel.
On the autowiring side: I maintain one audit log record per request, which I implement by declaring a request-scoped HashMap bean and autowiring it into any method in the servlet's call stack that needs to append data to the audit log. Basically it's just a global variable holding some data within a single request. But in SCS, again, I lose that, because bean scopes that rely on servlets are not available.
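For reference, the servlet-side pattern being described might be implemented along these lines (names are hypothetical; a small wrapper class is used instead of a raw HashMap so type-based autowiring stays unambiguous):

import java.util.HashMap;
import java.util.Map;
import org.springframework.stereotype.Component;
import org.springframework.web.context.annotation.RequestScope;

// One instance per HTTP request; @RequestScope creates a scoped proxy, so the
// bean can be autowired into singleton services anywhere in the call stack.
@Component
@RequestScope
public class AuditLog {

    private final Map<String, Object> entries = new HashMap<>();

    public void append(String key, Object value) {
        this.entries.put(key, value);
    }

    public Map<String, Object> entries() {
        return this.entries;
    }
}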
So far, there seem to be a lot of trade-offs that I have to make just to get Spring Cloud Stream to work for me.
I thought about an alternative approach where I use SCS just to create an entry point, but the Source method would just receive the event, use a Processor to construct an HTTP request, and send the request along to an HTTP endpoint. But then, why go through all that trouble?
Hoping that some more experienced devs would be able to shed some light on how they leverage SCS.
@feicipet Thanks for the detailed question. Let me try to address some of your concerns in the order you listed them:
+1
+1
I am not sure why you are referring to it as servlet-based instead of Spring-based; those are features provided by Spring. But read on...
Spring Cloud Stream doesn't use Kafka; the end user does, while Spring Cloud Stream provides the Kafka binder that allows Spring Cloud Stream to integrate with Kafka. Furthermore, while Kafka indeed did not support headers prior to version 0.11, Spring Cloud Stream has always supported headers, even with pre-0.11 Kafka, by embedding them in the message on the producer side and extracting them into proper Message headers on the consumer side, completely transparently to the end user. In other words, by simply using Spring Cloud Stream one could assume that Kafka supported headers. With Kafka 0.11+, headers are supported natively, and we have adjusted to that with the same level of transparency.
So you don't need to put anything in the payload. Just create an appropriate Message<payload, headers> and SCSt will take care of the rest, regardless of the broker (Kafka, Rabbit, Foo, etc.).
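For illustration, a minimal sketch (the class, payload, and header names are hypothetical; it assumes a Source binding enabled via @EnableBinding(Source.class)):

import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public class OrderEventPublisher {

    private final Source source; // bound output channel provided by SCSt

    public OrderEventPublisher(Source source) {
        this.source = source;
    }

    public void publish(String orderJson, String correlationId) {
        // Metadata rides in headers; the payload and model classes stay untouched.
        Message<String> message = MessageBuilder.withPayload(orderJson)
                .setHeader("x-correlation-id", correlationId)
                .build();
        this.source.output().send(message); // the binder maps headers to the broker
    }
}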
Yes, you do, simply because, as you alluded to earlier, SCSt promotes an asynchronous and stateless architecture. However, I do not agree that what you are trying to accomplish is un-accomplishable; it just isn't accomplishable the way you are describing. There are other ways to maintain context, and I would be more than glad to discuss that as a separate topic.
I would not call them trade-offs, rather differences in architecture, which has its benefits, but it is not a one-size-fits-all architecture, and therefore its viability should be discussed within the context of a concrete use case.
+1. You don't have to separate it into a Source and a Processor. You can simply create a custom Source app with an exposed REST endpoint and custom processing logic. However, we are currently working on enhancements in the framework to ensure that you can do the same with the existing starter apps.
Obviously we have touched on many points here, and some of them would probably need to be debated further, but I hope this clears up some of your concerns.
Cheers

Spring JPA with AMQP

We are thinking about using a microservice architecture in our business application, with communication happening via AMQP. During the research I did on this, a question came up: how do we standardize the communication while programming?
For example: for database requests you can use spring-data-jpa, which generates the code that sends those requests for you.
Isn't there a way to use something like that for AMQP requests when you need an object from another service, something like AMQPRepositories, to have a standardized approach?
Does anyone have other ideas or experience with this?
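To illustrate the kind of abstraction being asked about, a hypothetical "AMQP repository" built on Spring AMQP's RabbitTemplate might look like this (all names are made up):

import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class CustomerAmqpRepository {

    private final RabbitTemplate rabbitTemplate;

    public CustomerAmqpRepository(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Blocking request/reply over AMQP: publishes to the exchange/routing key
    // and waits for the serving microservice to reply.
    public Object findById(String customerId) {
        return this.rabbitTemplate.convertSendAndReceive(
                "customer.exchange", "customer.findById", customerId);
    }
}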

How do we configure multiple pollers to send messages to one transformer

How do I get multiple pollers of the same type to route messages to a single transformer? I don't mind having the messages queued before the transformer.
Well, I think you just want to reuse one message flow for all your <int-mail:imap-idle-channel-adapter>s. It's enough to configure them all to send to the same channel.
Actually, there is no difference from classical OOP design, where you inject the same service into different actions, like MVC controllers or JMS listeners.
Here we do exactly the same, but inject a MessageChannel and send the results of those entry points to it, without thinking about what's going on underneath. A sketch of the idea follows.
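The same idea expressed with the Spring Integration Java DSL (mailbox URIs and channel names are hypothetical): two IMAP IDLE adapters produce into one shared channel, and a single transformer consumes from it.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.mail.dsl.Mail;

@Configuration
public class SharedTransformerConfig {

    @Bean
    public IntegrationFlow mailbox1Flow() {
        return IntegrationFlows
                .from(Mail.imapIdleAdapter("imaps://user1:pass1@imap.example.com/INBOX"))
                .channel("incomingMail") // both adapters feed the same channel
                .get();
    }

    @Bean
    public IntegrationFlow mailbox2Flow() {
        return IntegrationFlows
                .from(Mail.imapIdleAdapter("imaps://user2:pass2@imap.example.com/INBOX"))
                .channel("incomingMail")
                .get();
    }

    @Bean
    public IntegrationFlow transformFlow() {
        return IntegrationFlows.from("incomingMail")
                .<Object, String>transform(Object::toString) // placeholder transformation
                .channel("processedMail")
                .get();
    }
}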
Please read more about Enterprise Integration Patterns.
