I am using Artemis 2.6.2 with STOMP only and the following setup:
Broker:
No queues configured in broker.xml, everything is auto created.
Server:
SUBSCRIBE to destination TaskResponse without selector/filter
SEND to destination TaskRequest with header clientId = ID (the ID of the client the server wants to address)
Client 123:
SUBSCRIBE to destination TaskRequest with selector clientId = 123
SEND to destination TaskResponse with header clientId = 123
When I watch the Artemis Console, the following happens:
No server and no client is connected: No address or queue is present
Server connect: Artemis creates a multicast address TaskResponse and a multicast queue for this address with empty filter
Client 123 connect: Artemis creates a multicast address TaskRequest and a multicast queue for this address with filter clientId = 123
Message exchange: Messages are transferred from server to client and back to server as expected.
Client 123 disconnect: Artemis removes the multicast address TaskRequest and the corresponding multicast queue with filter clientId = 123
Server sends message to TaskRequest for client 123: According to the STOMP client on the server, the message is sent successfully. On the broker the message disappears.
Same behavior vice versa: Client 123 is connected and the server is not: According to the STOMP client on client 123, the message is sent successfully. On the broker the message disappears.
My guess is that the message is discarded because there is no route to a subscriber. If I enable the option "send-to-dla-on-no-route" in the address-settings section of broker.xml, the message goes directly to the dead letter queue instead.
Do you know a way to preserve the messages until the subscriber reconnects?
Appendix 1: STOMP Messages
I am using the Stomp.Net library with the SelectorsCore example, reduced to only selector s1. The workflow differs slightly from what I described above.
Unfortunately I did not find a way to log STOMP messages to a file in Artemis, so I recorded the packets with Wireshark, exported them as text, and uploaded them to the Gist StompMessages.txt. There you can see the different STOMP messages; search for CONNECT, SEND, etc.
Solution
The solution was to use the option anycastPrefix=/queue/ on the acceptor element in broker.xml to force the queues to the ANYCAST routing type:
<acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;anycastPrefix=/queue/</acceptor>
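With that prefix in place, any destination the client addresses as /queue/<name> is treated as an ANYCAST queue, so messages are stored until a consumer picks them up instead of being routed with multicast semantics. A rough sketch of the resulting SEND frame (illustrative only; ^@ marks the frame-terminating null byte):

SEND
destination:/queue/TaskRequest
clientId:123

request payload^@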
What you are observing is the expected behavior. If you send a message to an address with no queues (or in STOMP terms - a destination with no subscribers) then the message will have nowhere to go and will be discarded. This is normal pub/sub semantics.
If you want to preserve the messages even if no subscriber is present you can either:
Use anycast (i.e. point-to-point) semantics rather than multicast. This is discussed in the Artemis documentation.
Use "durable" STOMP subscribers as discussed in the Artemis documentation. The caveat here is that the subscription will still need to be created before messages are sent and you will also need to make sure you remove the subscription when you are done with it or it may accumulate messages.
Related
We want to use Spring WebSockets + STOMP + Amazon MQ as a full-featured message broker. We were trying to do benchmarking to find out how many client WebSocket connections a single Tomcat node can handle. But it appears that we hit the Amazon MQ connection limit first. As per the AWS documentation, Amazon MQ has a limit of 1000 connections per node (as far as I understand we can ask support to increase the limit, but I doubt it can be increased significantly). So my questions are:
1) Am I correct in assuming that for every WebSocket connection from a client to the Spring/Tomcat server, a corresponding connection is opened from the server to the broker? Is this correct behavior, or are we doing something wrong here / missing something?
2) What can be done here? I don't think it is a good idea to create a broker node per every 1000 users.
According to https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/messaging/simp/stomp/StompBrokerRelayMessageHandler.html you are doing everything right; it is documented behavior.
Quote from javadoc:
For each new CONNECT message, an independent TCP connection to the broker is opened and used exclusively for all messages from the client that originated the CONNECT message. Messages from the same client are identified through the session id message header. Reversely, when the STOMP broker sends messages back on the TCP connection, those messages are enriched with the session id of the client and sent back downstream through the MessageChannel provided to the constructor.
As for a fix, you could write your own message broker relay with TCP connection pooling.
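A minimal sketch of the pooling idea in Java, assuming a fixed set of broker connections shared by many client sessions (a real relay would also need to multiplex STOMP frames and rewrite session ids so replies reach the right client, which is omitted here):

import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch, not a drop-in replacement for StompBrokerRelayMessageHandler.
public class BrokerConnectionPool {
    private final BlockingQueue<Socket> pool;

    public BrokerConnectionPool(String host, int port, int size) throws IOException {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(new Socket(host, port)); // pre-open a bounded number of TCP connections
        }
    }

    // Borrow a connection; blocks until one is free instead of opening one per client.
    public Socket borrow() throws InterruptedException {
        return pool.take();
    }

    // Return the connection so another client session can reuse it.
    public void release(Socket socket) {
        pool.offer(socket);
    }
}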
I have a regular cloud server set up, and a mobile app talking to the server via HTTP requests. I also have a Wifi device that I need to send messages to, and I want to do that over MQTT. When some change happens on the mobile app, I want the cloud server to publish a message on a topic via MQTT so that the Wifi device can receive it. Can a broker also be a client? Am I understanding it wrong?
I'm going to attempt an answer based on my understanding; sorry if I misunderstood your question.
The way I understand it, you will have three/four pieces of software:
HTTP Server / MQTT Broker (these two services could run in the same application or in separate ones)
Mobile application (communicates over HTTP)
Wifi Device (communicates using MQTT protocol)
Scenario:
The Wifi device will open a connection to the MQTT Broker and subscribe to a well-defined topic. You can use a subscription with a QoS of 1 if you cannot afford to lose messages. Any messages published before the subscription is added will not be received by your client. It might also be useful to open the MQTT connection with a non-clean session if your Wifi connection is unstable (again, if you don't want to lose any messages).
After a specific event, the mobile application, which communicates with the HTTP server, will send it information.
Upon reception of the information, the HTTP server will then send an MQTT message to the MQTT Broker on the predefined topic (a topic that will match the Wifi Device's subscription).
The MQTT broker will relay the message from the HTTP Server to the Wifi Device (and any other MQTT clients with a matching subscription).
I hope this clarifies things; let me know if anything is unclear.
"Can a broker also be a client?" Not really. Although I'm certain some specific brokers publish messages to special subscriptions based on special events, a broker only acts as a broker: it receives messages from publishers and forwards them to any client that has shown interest in them through a subscription (a message can be dropped by the broker if no subscriber is interested in it).
I am using the Bunny client for RabbitMQ messaging.
I have a bash script that provisions a durable topic exchange, a number of durable queues and binds them.
I am using Bunny on both client and server side to send messages between them.
However, I find that on terminating either connection (client or server), my exchange and queues disappear. I would like to configure it so that even if the server side fails, the client can still push messages to the queue and they will be processed once the server is back online.
Is this possible with Bunny/RabbitMQ?
Is it possible to configure the topic to store a copy of just the last message and send this to new connections without knowing client identifiers or other info?
Update:
From the info provided by Shashi I found two pages describing a use case similar to mine (applied to stock prices) using a retroactive consumer and a subscription recovery policy. However, I'm not getting the desired behaviour. What I currently do is:
Include the following lines in the policyEntry for topic=">" in the ActiveMQ config:
<subscriptionRecoveryPolicy>
  <fixedCountSubscriptionRecoveryPolicy maximumSize="1"/>
</subscriptionRecoveryPolicy>
Add consumer.retroactive=true to the URL used to connect to the broker (using activemq-cpp).
Set the consumer as durable. (I strongly suspect this is not what I want, since I only need the last message, but without it I didn't get any message when starting the consumer a second time.)
Start up the broker.
Start the consumer.
Send a message to the topic using the activemq web admin console. (I receive it in the consumer, as expected)
Stop consumer.
Send another message to the topic.
Start consumer. I receive the message, also as expected.
However, if the consumer receives a message, then goes offline (process stopped) and is restarted, it doesn't get that last message back.
The goal is for the consumer to get the last message whenever it starts, no matter what (except, obviously, when no messages have been sent to the topic).
Any ideas on what I'm missing?
Background:
I have a device which publishes its data to a topic whenever its data changes. A variable number of consumers, from 0 to fewer than 10, may be connected to this topic. There is only one publisher on the topic, and it always publishes all of its data as a single message (little data, just a couple of fields of a sensor reading). The publication rate is variable and not necessarily time based: when something changes, a new updated message is sent to the broker.
The problem is that when a new consumer connects to the topic, it has no data of the device readings until a new message is sent to the topic by the device. This could be solved by creating an additional queue, so that new connections can subscribe to the topic and then request the current reading from the device through the queue (the device would consume the queue message, which would be a request for data, and then respond on the same queue).
But since the messages sent to the topic are always information-complete, I was wondering: is it possible to configure the topic to store a copy of just the last message and send it to new connections without knowing client identifiers or other info?
Current broker in use is ActiveMQ.
What you want is to have retroactive consumers and to set the lastImageSubscriptionRecoveryPolicy subscription recovery policy on the topic. Shashi is correct in saying that the following syntax for setting a consumer to be retroactive works only with OpenWire:
topic = new ActiveMQTopic("TEST.Topic?consumer.retroactive=true");
In your case, what you can do is configure all consumers to be retroactive in the broker config with alwaysRetroactive="true". I tested that this works even for the AMQP protocol (with the qpid-jms-client library) and I suspect it will work for all protocols.
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry topic="FOO.>" alwaysRetroactive="true">
        <subscriptionRecoveryPolicy>
          <lastImageSubscriptionRecoveryPolicy />
        </subscriptionRecoveryPolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
The configuration example is taken from https://github.com/apache/activemq/blob/master/activemq-unit-tests/src/test/resources/org/apache/activemq/test/retroactive/activemq-message-query.xml
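For completeness, the consumer side needs nothing special, since the recovery policy lives entirely in the broker config above. A minimal qpid-jms consumer sketch (URL and topic name are placeholders):

import javax.jms.*;
import org.apache.qpid.jms.JmsConnectionFactory;

public class LastImageConsumer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createTopic("FOO.BAR"));
        // With lastImageSubscriptionRecoveryPolicy the broker replays the most recent
        // message published before this consumer connected.
        Message lastImage = consumer.receive(5_000);
        System.out.println("Got last image: " + lastImage);
        connection.close();
    }
}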
Messaging providers (WebSphere MQ, for example) have a feature called retained publication. With this feature, the last message published on a topic is retained by the messaging provider and delivered to a new consumer that connects after the message was published.
Retained publication may be supported by ActiveMQ in its native interface. This link talks about consumer.retroactive, which is available for OpenWire only.
A publisher will tell the messaging provider to retain a publication by setting a property on the message before publishing. Below is how it is done using WebSphere MQ.
// set as a retained publication
msg.setIntProperty(JmsConstants.JMS_IBM_RETAIN, JmsConstants.RETAIN_PUBLICATION);
One of our customers has a JMS-based implementation in which there are queues for reading and writing messages. The JMS client needs to write to an outbound queue, and it will read the response from an inbound queue. The JMS client will be deployed across multiple sites; it will talk to a single outbound queue for writing messages and will read from a single inbound queue for the responses.
Consider the scenario in which there are 100 unique outbound requests and the consumer then gets 100 different responses for the sent requests (assume the messages got delivered correctly). How do I ensure that the messages the consumer reads from the inbound queue are for the designated recipient? Do we have to write our own logic to map the request/response, or does JMS have a delivery mechanism based on connection id … etc. so that messages get delivered to the correct requester? Thank you very much in advance; I need your expert input to design the application correctly. The JMS provider I am using is Apache ActiveMQ.
Regards,
Sumeet C
It sounds like you need REQUEST/REPLY...
Request/Reply - Synchronous
A queue sender sends a REQUEST message and then, in the same thread, receives a REPLY. The sending thread blocks until the receiver sends back a reply message, ensuring the reply is for the original request. It's a basic setup that uses temporary queues, REPLY_TO addressing, and JMSCorrelationID...
Apache ActiveMQ Request/Reply
EAI Patterns for JMS Request/Reply
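A minimal sketch of the synchronous pattern in plain JMS against ActiveMQ (broker URL and queue name are placeholders):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class Requestor {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        Queue requestQueue = session.createQueue("OUTBOUND.REQUESTS");
        TemporaryQueue replyQueue = session.createTemporaryQueue(); // private to this connection

        TextMessage request = session.createTextMessage("request payload");
        request.setJMSReplyTo(replyQueue); // tells the responder where to send the reply
        session.createProducer(requestQueue).send(request);

        // Block until the reply arrives on our private queue; it can only be ours.
        Message reply = session.createConsumer(replyQueue).receive(10_000);
        connection.close();
    }
}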
Point-to-Point - Async
If the customer's JMS implementation provides distinct queues for sending requests and receiving replies, you can send messages asynchronously with a unique JMSCorrelationID. Provided the customer sends back a response with that same id, you can receive the response message in a different thread and correlate it with the original request based on the JMSCorrelationID. Technically speaking, REQUEST/REPLY does the same thing, except that it blocks and uses temporary queues for sending response messages back to the requestor instead of explicitly named queues.
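And a rough sketch of the asynchronous variant, reusing the connection/session setup from the sketch above (queue names are placeholders; in a real application the reply consumer would run on its own session, since JMS sessions are not thread-safe):

String correlationId = java.util.UUID.randomUUID().toString();
TextMessage request = session.createTextMessage("request payload");
request.setJMSCorrelationID(correlationId); // unique id the responder must echo back
session.createProducer(session.createQueue("OUTBOUND.REQUESTS")).send(request);

// A selector ensures this consumer only ever sees the reply to its own request.
MessageConsumer replyConsumer = session.createConsumer(
        session.createQueue("INBOUND.REPLIES"),
        "JMSCorrelationID = '" + correlationId + "'");
replyConsumer.setMessageListener(reply -> {
    // correlate back to the original request here
});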