I have a Spring Boot application where I am using STOMP over WebSockets with RabbitMQ as the external message broker. Our application uses Spring Security as well. The same authenticated user can be logged in from different browsers. A user can subscribe to a destination to fetch live data feeds using the SockJS JS client. On the backend we use the SimpMessagingTemplate.convertAndSendToUser method to push live feeds to a particular authenticated user.
The issue we are facing is this: when the user logs in from browser A and subscribes to a destination, a unique queue such as data-feeds-user123 is created in RabbitMQ with an auto-delete policy; the same user then logs in from a different browser B but does not subscribe to the destination. Now, when the backend publishes a live data feed using convertAndSendToUser, strange behaviour occurs: a new queue data-feeds-321 is created and messages are published to both queues, even though data-feeds-321 has no consumer, and it is not deleted when the user disconnects or closes the session. Such queues seem useless in this scenario and are not purged automatically.
Can anyone help me out with this? I don't want to broadcast the messages unless the user has subscribed from the JS client.
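What I am considering (not sure if it is the right approach) is to check the user's active subscriptions before publishing, so that convertAndSendToUser is only called when a session has actually subscribed. A minimal sketch, assuming the SimpUserRegistry bean sees the STOMP subscriptions on this server (with an external broker relay and multiple instances this may need a MultiServerUserRegistry), and with "/queue/data-feeds" as a placeholder destination:

import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.messaging.simp.user.SimpUser;
import org.springframework.messaging.simp.user.SimpUserRegistry;
import org.springframework.stereotype.Service;

@Service
public class FeedPublisher {

    private final SimpMessagingTemplate messagingTemplate;
    private final SimpUserRegistry userRegistry;

    public FeedPublisher(SimpMessagingTemplate messagingTemplate, SimpUserRegistry userRegistry) {
        this.messagingTemplate = messagingTemplate;
        this.userRegistry = userRegistry;
    }

    public void sendFeed(String username, Object payload) {
        SimpUser user = userRegistry.getUser(username);
        // Publish only if at least one of the user's sessions has subscribed to the
        // feeds destination; otherwise skip, so no queue is created for idle sessions.
        boolean subscribed = user != null && user.getSessions().stream()
                .flatMap(session -> session.getSubscriptions().stream())
                .anyMatch(sub -> sub.getDestination().endsWith("/queue/data-feeds"));
        if (subscribed) {
            messagingTemplate.convertAndSendToUser(username, "/queue/data-feeds", payload);
        }
    }
}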
Related
I am implementing a Spring Boot STOMP message broker socket to interact with a web client. I need to send a message to a specific user, identified by username, at some point in the application, meaning the message will be triggered from the server to the client. The client will subscribe to a topic/queue. I have heard that @SendToUser sends a message to a particular user, but in my case the user is only subscribing to a topic, and from the backend I need to send messages from time to time to that specific user. The user will not send any messages to the server.
It is just push-based messaging.
messagingTemplate.convertAndSendToUser(sessionId,"/queue/something", payload,
headerAccessor.getMessageHeaders());
But where would I get the session id for the targeted user? Here the user is just subscribing to the topic once.
You can find the answer to a similar question (with an example project) here:
Spring websocket send to specific people
The fact that the user subscribes only once is not a problem. Once the connection is established, the server can send as many messages as needed.
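For example, something along these lines (the destination name is a placeholder), addressing the user by Principal name rather than by session id; Spring then fans the message out to that user's connected sessions that subscribed to /user/queue/something:

import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Service;

@Service
public class PushService {

    private final SimpMessagingTemplate messagingTemplate;

    public PushService(SimpMessagingTemplate messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    // "username" is the Principal name of the target user; the client subscribed
    // once to "/user/queue/something" and simply keeps the connection open.
    public void pushToUser(String username, Object payload) {
        messagingTemplate.convertAndSendToUser(username, "/queue/something", payload);
    }
}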
How is it possible for a REST microservice to communicate with another microservice that is a hybrid, meaning it can communicate both via REST and via a message queue? For example an API gateway: for the outside world it communicates with an app or mobile phone via REST, but its communication with the backend is via a message queue.
Use case:
My homepage wants to get a Vehicle from the database. It asks the API gateway via a GET request. The API gateway takes the GET request and publishes it to the message queue. The other microservice picks up the message and publishes the result. Then the API gateway consumes the result and sends it back as the response.
How can I implement this? Should I use Spring Boot with Apache Kafka? Do I need to implement asynchronous communication?
There are a few approaches to this situation.
You might create a topic for each client request and wait for the reply on the other side, e.g. DriverService would read the request message, fetch your data, and publish the result to that client's request topic. As soon as you consume the response message, you destroy that topic.
BUT 'temporary' topics might take too long to be deleted in a request-response interaction (if no configuration such as the delete.topic.enable property allows it), and you need to monitor possible topic overgrowth.
WebSocket is another possible solution. Your client would start listening on a specific topic, previously agreed with your server, and wait for the response within a specific timeout, while your DriverService publishes to that specific socket channel.
Spring Boot offers great starters for Kafka and WebSockets. If you are expecting a large number of transactions, I would go with a mixed strategy: use Kafka to help the backend scale and process all transactions, then respond to the client via WebSocket.
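If you go with a synchronous request-reply over Kafka, spring-kafka's ReplyingKafkaTemplate takes care of the correlation for you. A rough sketch of the gateway side, with the topic names ("vehicle-requests", "vehicle-replies") and the group id as placeholders:

import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;
import org.springframework.kafka.requestreply.RequestReplyFuture;

@Configuration
public class GatewayKafkaConfig {

    // Listener container for the reply topic this gateway instance consumes from.
    @Bean
    public ConcurrentMessageListenerContainer<String, String> repliesContainer(
            ConcurrentKafkaListenerContainerFactory<String, String> factory) {
        ConcurrentMessageListenerContainer<String, String> container =
                factory.createContainer("vehicle-replies");
        container.getContainerProperties().setGroupId("api-gateway-replies");
        return container;
    }

    @Bean
    public ReplyingKafkaTemplate<String, String, String> replyingTemplate(
            ProducerFactory<String, String> pf,
            ConcurrentMessageListenerContainer<String, String> repliesContainer) {
        return new ReplyingKafkaTemplate<>(pf, repliesContainer);
    }
}

class VehicleGatewayService {

    private final ReplyingKafkaTemplate<String, String, String> replyingTemplate;

    VehicleGatewayService(ReplyingKafkaTemplate<String, String, String> replyingTemplate) {
        this.replyingTemplate = replyingTemplate;
    }

    // Publishes the GET request to the request topic and blocks (with a timeout)
    // until the vehicle service answers on the reply topic.
    String fetchVehicle(String vehicleId) throws Exception {
        RequestReplyFuture<String, String, String> future =
                replyingTemplate.sendAndReceive(new ProducerRecord<>("vehicle-requests", vehicleId));
        return future.get(10, TimeUnit.SECONDS).value();
    }
}

On the other service, a @KafkaListener method on "vehicle-requests" annotated with @SendTo can simply return the result, and spring-kafka copies the correlation id onto the reply.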
I'm currently building an application with a microservice architecture.
The first application is an API that handles user authentication and receives requests to initiate/keep a realtime connection with the user (via Socket.io or SockJS); the system stores the socket id in the User object.
The second application is a WORKER that does some work and sometimes has to send realtime data to the user.
The question is: How should the second application (the WORKER) send realtime data to the user?
Should the WORKER send a message to the API, which then forwards it to the user?
Or can the WORKER send the message directly to the user?
Thank you
In an ideal setup, the service responsible for publishing realtime push notifications should be separated from the other services, since a microservice is a set of narrowly related methods and there is no relation between the authentication ("user") service and the realtime push notification service. Breaking it down further, authentication is really a separate service on its own; this is just FYI, as there might be a reason you did it this way.
How would the services communicate? There are many ways to implement the internal communication between services, for example an MQ solution, which adds more technology to your stack: RabbitMQ, Beanstalk, Gearman, etc.
You can also do the communication on top of the HTTP protocol, but you need to consider that HTTP calls add more cost.
Ideally, each service would expose two interfaces through which it can be invoked: an HTTP interface and an MQ interface (console).
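As a rough illustration of the MQ option (assuming Spring AMQP on both sides and a STOMP/SockJS layer on the API service; the queue name "user.notifications" and the "/queue/updates" destination are placeholders), the WORKER publishes to a queue and the API service, which owns the socket connections, consumes and pushes to the user:

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Component;

// WORKER side: publish the realtime payload for a user onto a shared queue
// (the queue itself is assumed to be declared elsewhere).
@Component
class WorkerNotifier {

    private final RabbitTemplate rabbitTemplate;

    WorkerNotifier(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    void notifyUser(String userId, String payload) {
        rabbitTemplate.convertAndSend("user.notifications", userId + ":" + payload);
    }
}

// API side: consume from the queue and push to the user's open socket session.
@Component
class NotificationForwarder {

    private final SimpMessagingTemplate messagingTemplate;

    NotificationForwarder(SimpMessagingTemplate messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    @RabbitListener(queues = "user.notifications")
    void forward(String message) {
        String[] parts = message.split(":", 2);
        messagingTemplate.convertAndSendToUser(parts[0], "/queue/updates", parts[1]);
    }
}

This keeps the WORKER decoupled from the socket layer: it only needs the broker, while the API remains the single place that knows which user is connected where.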
We are currently implementing a simple chat app that allows users to create conversations and exchange messages.
Our basic setup involves AngularJS on the front-end and SignalR hub on the back end. It works like this:
Client app opens a Websockets connection to our real-time service (based on SignalR) and subscribes to chat updates
User starts sending messages. For each new message, client app calls HTTP API to send it
The API stores the message in the database and notifies our real-time service that there is a new message
Real-time service pushes the message via Websockets to subscribed Clients
However, we noticed that opening so many HTTP connections, one for each new message, may not be a good idea, so we were wondering whether WebSockets should be used to both send and receive messages.
The new setup would look like this:
Client app opens a Websockets connection with real-time service
User starts sending messages. Client app pushes the messages to real-time service using Websockets
Real-time service picks up the message, notifies our persistence service it needs to be stored, then delivers the message to other subscribed Clients
Persistence service stores the message
Which of these options is more typical when setting up an efficient and performant chat system? Thanks!
You don't need a separate HTTP or Web API to persist the message. Persist it in the hub method that is broadcasting the message. You can use async methods in the hub and create async tasks to save the message.
Using a separate persistence API and then calling SignalR to broadcast isn't efficient, and why duplicate all the effort?
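If you are on a Spring/STOMP stack like the other posts in this thread, the same idea (persist inside the handler that broadcasts, with no separate HTTP hop) looks roughly like this; ChatMessage, ChatMessageRepository and the destinations are placeholders, and a SignalR hub method would be the direct equivalent:

import java.util.concurrent.CompletableFuture;

import org.springframework.messaging.handler.annotation.DestinationVariable;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Controller;

@Controller
public class ChatController {

    private final SimpMessagingTemplate messagingTemplate;
    private final ChatMessageRepository repository; // placeholder persistence component

    public ChatController(SimpMessagingTemplate messagingTemplate, ChatMessageRepository repository) {
        this.messagingTemplate = messagingTemplate;
        this.repository = repository;
    }

    // Incoming chat messages arrive over the same websocket; persist asynchronously
    // and broadcast to subscribers of the conversation topic in the same handler.
    @MessageMapping("/chat/{conversationId}")
    public void onMessage(@DestinationVariable String conversationId, ChatMessage message) {
        CompletableFuture.runAsync(() -> repository.save(message));
        messagingTemplate.convertAndSend("/topic/chat/" + conversationId, message);
    }
}

record ChatMessage(String from, String text) { }

interface ChatMessageRepository {
    void save(ChatMessage message);
}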
I am currently assigned the task of writing an application that will communicate using the JMS API, with Apache Qpid as the JMS provider.
Basically, my application will have multiple server instances running. Each server will serve a unique set of desks, so each instance will only have data for the desks it is serving.
There will also be multiple client instances, each again configured by desk.
Now, when a client starts up, it will request the data for the desk it is serving from the servers. The request should only go to the server that has that desk's data loaded, and the response should only go back to the client that requested the data for that desk.
I am thinking of using queues for this. I am not sure whether I should create only one request queue used by all the servers, or separate queues for each server.
For responses, I am planning to use temporary queues.
Note that requests from a client to the server are not very frequent; say each client may send around 50 requests a day.
Can someone please tell me if this is a good design?
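To make it concrete, this is roughly what I have in mind for the single-shared-request-queue variant: the request carries a desk property so that only the server owning that desk picks it up via a message selector, and the reply goes back on a per-request temporary queue. A sketch with plain JMS (the queue name "deskRequests", the "desk" property and loadDeskData are placeholders):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;

public class DeskRequestExample {

    // Client side: send the request for one desk on the shared request queue and
    // wait for the reply on a per-request temporary queue.
    static String requestDeskData(ConnectionFactory factory, String desk) throws JMSException {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue requestQueue = session.createQueue("deskRequests");
            TemporaryQueue replyQueue = session.createTemporaryQueue();

            TextMessage request = session.createTextMessage("load-desk-data");
            request.setStringProperty("desk", desk); // routing key for the server-side selector
            request.setJMSReplyTo(replyQueue);       // reply comes back only to this client
            session.createProducer(requestQueue).send(request);

            Message reply = session.createConsumer(replyQueue).receive(30_000);
            return reply == null ? null : ((TextMessage) reply).getText();
        } finally {
            connection.close();
        }
    }

    // Server side: each instance consumes from the same queue but only for the desks
    // it owns (via a selector such as "desk IN ('desk1','desk2')"), and replies to
    // the request's JMSReplyTo queue. The connection is assumed to be started already.
    static void serveDesks(Session session, String selector) throws JMSException {
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("deskRequests"), selector);
        consumer.setMessageListener(message -> {
            try {
                String desk = message.getStringProperty("desk");
                TextMessage reply = session.createTextMessage(loadDeskData(desk));
                session.createProducer(message.getJMSReplyTo()).send(reply);
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        });
    }

    // Placeholder for whatever loads the desk's data in the real server.
    private static String loadDeskData(String desk) {
        return "data-for-" + desk;
    }
}

Given only around 50 requests per client per day, a blocking receive with a timeout on the client side seems acceptable.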