I made a simple JMS project with two Java files, MessageSender.java and MessageConsumer.java: one for sending messages to an ActiveMQ queue and the other for consuming messages from it. I deployed this project in Apache Tomcat. The following is the consumer code:
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("admin", "admin", "tcp://localhost:61617?jms.prefetchPolicy.queuePrefetch=1");
Connection connection = connectionFactory.createConnection();
final Session session = connection.createSession(true, Session.CLIENT_ACKNOWLEDGE);
Queue queue = session.createQueue("ThermalMap");
javax.jms.MessageConsumer consumer = session.createConsumer(queue);
// anonymous class
MessageListener listener = new MessageListener() {
    @Override
    public void onMessage(Message msg) {
        // My business code
    }
};
consumer.setMessageListener(listener);
connection.start();
Later, if I want to change the consumer code, I don't want to stop Tomcat, because if I stop Tomcat the entire JMS project stops working and clients can no longer send messages to the ActiveMQ queue. So I don't want to go that route.
I am thinking that if I could stop the consumers through the ActiveMQ console page, I wouldn't need to stop Tomcat and clients could keep sending messages normally. But when I checked the AMQ console page, I didn't see any consumers listed.
Is this the right way to do it? If it is, how can I do it?
Can anyone advise?
Thanks.
Call the .close() method on your MessageConsumer.
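A minimal sketch, assuming you keep references to the consumer, session, queue and listener from the code in the question:

consumer.close(); // stop this consumer only; the connection, session and the rest of the web app keep running, so producers can still send to the queue

// Later, to start consuming again with updated logic, create a new consumer on the same session:
javax.jms.MessageConsumer newConsumer = session.createConsumer(queue);
newConsumer.setMessageListener(listener); // attach the (new) listener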
I am trying to implement a WebSocket feature with STOMP in my SpringBoot application. So far, this is going quite alright, but I'm running into one issue.
Unsubscribing from a topic seems to always be done from the browser's side. However, using @DestinationVariable I can create a number of topics (e.g. with the path /{game_id}/chat), and I need a security feature on the server's side.
Because messages are authorized, I am able to check whether the logged-in user actually has access to {game_id}. If they don't, the subscription should end (not the WebSocket connection!). To do this, I autowired the DefaultSubscriptionRegistry so I could delete the subscription from its list, but the method for removing a subscription is apparently protected. I now find myself not knowing how to delete this subscription (which is managed by the simple broker Spring provides) from inside of Spring.
I guess another way to do this is by mocking an unsubscribe message from the browser and having the MessageHandler handle it. But that gives its own challenges, mainly obtaining the ApplicationContext of the simple broker (that I did not personally edit).
Has anyone faced this challenge before? Are there good workarounds/alternatives to unsubscribe from the server side?
Rossen has given an answer on GitHub that I believe will help with this.
Essentially, the approach is to register a ChannelInterceptor that creates a mock unsubscribe message:
@Override
public Message<?> beforeHandle(Message<?> message, MessageChannel channel, MessageHandler handler) {
    StompHeaderAccessor headers = StompHeaderAccessor.create(StompCommand.UNSUBSCRIBE);
    // ... add headers
    Message<?> unsubscribe = MessageBuilder
            .withPayload(new byte[0]).setHeaders(headers).build();
    handler.handleMessage(unsubscribe);
    return message;
}
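For completeness, here is a rough sketch of how such an interceptor might be registered. The class names WebSocketConfig and SubscriptionAuthInterceptor are illustrative (SubscriptionAuthInterceptor stands for an ExecutorChannelInterceptor implementing the beforeHandle() shown above); interceptors() is the Spring 5 method, older versions use setInterceptors():

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.ChannelRegistration;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureClientInboundChannel(ChannelRegistration registration) {
        // SubscriptionAuthInterceptor is a hypothetical ExecutorChannelInterceptor
        // containing the beforeHandle() shown above.
        registration.interceptors(new SubscriptionAuthInterceptor());
    }
}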
I'm using the @EnableJms and @JmsListener annotations to register a queue listener in my application, based on this tutorial. I'm connecting to IBM MQ and getting the connection factory via JNDI. I have read about acknowledge modes etc., but it's still new to me. My problem is that after an exception the message is not returned to the queue (the listener is never called again).
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    factory.setSessionTransacted(true);
    factory.setSessionAcknowledgeMode(Session.AUTO_ACKNOWLEDGE); // I have also tried CLIENT_ACKNOWLEDGE
    return factory;
}

@JmsListener(containerFactory = "jmsListenerContainerFactory", destination = "myQueue")
@SendTo("secondQueue")
public String testListener(String message) {
    if (true) throw new NullPointerException();
    else return message;
}
Any help would be much appreciated.
I also have a second question. From what I understand, if I wanted to perform any operation on a database, the only way to roll it back (if something goes wrong afterwards) would be to set up a transaction manager? If not, I would need to detect duplicate messages.
First set the acknowledgement mode to Session.CLIENT_ACKNOWLEDGE.
Then, when receiving messages, call message.acknowledge() only if the message was processed correctly; otherwise don't call it.
The unacknowledged message automatically stays on the queue, so you don't need to resend it.
You need to use
import javax.jms.Message
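As a rough illustration of that pattern with plain JMS (not the Spring listener container); the queue name is a placeholder and JMSException handling is omitted:

Connection connection = connectionFactory.createConnection();
Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
javax.jms.MessageConsumer consumer = session.createConsumer(session.createQueue("myQueue"));
connection.start();

Message message = consumer.receive();
try {
    // ... process the message ...
    message.acknowledge();   // acknowledge only after successful processing
} catch (Exception e) {
    // no acknowledge: ask the provider to redeliver unacknowledged messages
    session.recover();
}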
I created a simple Spring Boot app and an IBM MQ Docker container to test your case.
I found good instructions in this tutorial: https://developer.ibm.com/tutorials/mq-jms-application-development-with-spring-boot/
In this environment your case behaves as expected: an endless cycle of receive message -> NullPointerException -> message returned to the queue -> ...
Then I found an IBM MQ feature called "Backout Queues & Thresholds"; you'll find an explanation in this blog post: https://community.ibm.com/community/user/imwuc/browse/blogs/blogviewer?BlogKey=28814801-083d-4c80-be5f-90aaaf81cdfb
Briefly, it is possible to restrict the number of times a message is returned to the queue after an exception, and once this limit is reached the message is sent to another queue.
Maybe in your case this feature is used on your destination queue.
I have a Spring Boot app (Jhipster) that uses STOMP over WebSockets to communicate information from the server to users.
I recently added an ActiveMQ server to handle scaling the app horizontally, with an Amazon auto-scaling group / load-balancer.
I make use of the convertAndSendToUser() method, which on a single instance of the app locates the authenticated user's "individual queue" so that only they receive the message.
However, when I launch the app behind the load balancer, I am finding that messages are only sent to the user if the event is generated on the same server that their WebSocket proxy connection (to the broker) is established on.
How do I ensure the message goes through ActiveMQ to whichever instance of the app the user is actually connected to, regardless of which instance receives, say, an HTTP request that triggers the convertAndSendToUser() call?
For reference, here is my StompBrokerRelayMessageHandler:
@Bean
public AbstractBrokerMessageHandler stompBrokerRelayMessageHandler() {
    StompBrokerRelayMessageHandler handler = (StompBrokerRelayMessageHandler) super.stompBrokerRelayMessageHandler();
    handler.setTcpClient(new Reactor2TcpClient<>(
            new StompTcpFactory(orgProperties.getAws().getAmazonMq().getStompRelayHost(),
                    orgProperties.getAws().getAmazonMq().getStompRelayPort(),
                    orgProperties.getAws().getAmazonMq().getSsl())
    ));
    return handler;
}

@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
    config.enableStompBrokerRelay("/queue", "/topic")
        .setSystemLogin(orgProperties.getAws().getAmazonMq().getStompRelayHostUser())
        .setSystemPasscode(orgProperties.getAws().getAmazonMq().getStompRelayHostPass())
        .setClientLogin(orgProperties.getAws().getAmazonMq().getStompRelayHostUser())
        .setClientPasscode(orgProperties.getAws().getAmazonMq().getStompRelayHostPass());
    config.setApplicationDestinationPrefixes("/app");
}
I have found the name of the queue that gets generated on ActiveMQ by examining the headers of the SessionSubscribeEvent that is fired when a user subscribes to a user queue; the relevant header is simpSessionId.
@Override
@EventListener({SessionSubscribeEvent.class})
public void onSessionSubscribeEvent(SessionSubscribeEvent event) {
    log.debug("Session Subscribe Event: {}", event.getMessage().getHeaders().toString());
}
The corresponding queues can be found in ActiveMQ in the format: {simpDestination}-user{simpSessionId}
Could I save the sessionId in a key-value pair and just push messages onto that topic channel?
I also found some possibilities for setting ActiveMQ-specific STOMP properties in the CONNECT/SUBSCRIBE frames to create durable subscribers. If I set these properties, will Spring then understand the routing?
client-id & subscriptionName
Modifying the MessageBrokerRegistry config resolved the issue:
config.enableStompBrokerRelay("/queue", "/topic")
.setUserDestinationBroadcast("/topic/registry.broadcast")
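In context, the change might sit in the existing configureMessageBroker roughly like this (a sketch reusing the properties from the question; only the broadcast line is new):

@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
    config.enableStompBrokerRelay("/queue", "/topic")
        // broadcast unresolved user destinations so other instances can deliver them
        .setUserDestinationBroadcast("/topic/registry.broadcast")
        .setSystemLogin(orgProperties.getAws().getAmazonMq().getStompRelayHostUser())
        .setSystemPasscode(orgProperties.getAws().getAmazonMq().getStompRelayHostPass())
        .setClientLogin(orgProperties.getAws().getAmazonMq().getStompRelayHostUser())
        .setClientPasscode(orgProperties.getAws().getAmazonMq().getStompRelayHostPass());
    config.setApplicationDestinationPrefixes("/app");
}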
Based on this paragraph in the documentation section 4.4.13:
In a multi-application server scenario a user destination may remain unresolved because the user is connected to a different server. In such cases you can configure a destination to broadcast unresolved messages to so that other servers have a chance to try. This can be done through the userDestinationBroadcast property of the MessageBrokerRegistry in Java config and the user-destination-broadcast attribute of the message-broker element in XML.
I did not see any documentation on "why" /topic/registry.broadcast was the correct "topic" destination, but I am finding various iterations of it:
websocket sessions sample doesn't cluster.. spring-session-1.2.2
What is MultiServerUserRegistry in spring websocket?
Spring websocket - sendToUser from a cluster does not work from backup server
In my Spring Boot application I have to implement an import service. Users can submit a bunch of JSON files and the application will try to import the data from these files. Depending on the amount of data in the JSON files, a single import can take 1 or 2 hours.
I do not want to block users during the import, so I plan to accept the import task and notify the user that the data is scheduled for processing. I'll put the data into a queue, and a free queue consumer on the other end will start the import. I also need the ability to monitor the jobs in the queue and terminate them if needed.
Right now I'm thinking of using embedded Apache ActiveMQ to introduce the message producer and consumer logic, but before that I'd like to ask, from an architecture point of view: is it a good choice for the described task, or could it be implemented with more appropriate tools, for example plain Spring @Async?
It is possible to process files concurrently with Camel like this:
from("file://incoming?maxMessagesPerPoll=1&idempotent=true&moveFailed=failed&move=processed&readLock=none").threads(5).process(exchange -> { /* handle the file */ });
Take a look at http://camel.apache.org/file2.html
But I think it is better for your requirements to use a standalone ActiveMQ, a standalone service to move files to ActiveMQ, and a standalone consumer, so that you can kill or restart each one independently.
It is better to use ActiveMQ, as you said, and you can easily create a service that moves messages to a queue with Camel like this:
CamelContext context = new DefaultCamelContext();
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=true");
context.addComponent("test-jms", JmsComponent.jmsComponentAutoAcknowledge(connectionFactory));
context.addRoutes(new RouteBuilder() {
    public void configure() {
        // convertBodyTo(String.class) to use TextMessage, or maybe send them as files to the queue
        from("file://testFolderPath").convertBodyTo(String.class).to("test-jms:queue:test.queue");
    }
});
context.start();
Here are some examples:
http://www.programcreek.com/java-api-examples/index.php?api=org.apache.camel.component.jms.JmsComponent
https://skills421.wordpress.com/2014/02/08/sending-local-files-to-a-jms-queue/
https://github.com/apache/camel/blob/master/examples/camel-example-jms-file/src/main/java/org/apache/camel/example/jmstofile/CamelJmsToFileExample.java
https://github.com/apache/camel/tree/master/examples
To monitor and manage, you can use JMX with VisualVM or Hawtio: http://hawt.io/getstarted/index.html
http://camel.apache.org/camel-jmx.html
To consume, you can use a DefaultMessageListenerContainer with concurrent consumers on the queue; for this you need to change the prefetchPolicy on the ConnectionFactory used by the DefaultMessageListenerContainer (see: Multithreaded JMS client ActiveMQ).
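A hedged sketch of such a consumer setup; the broker URL, queue name and concurrency values are illustrative:

import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

// prefetch=1 so queued messages are spread across the concurrent consumers
// instead of being buffered by the first one
ConnectionFactory connectionFactory =
        new ActiveMQConnectionFactory("tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1");

DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(connectionFactory);
container.setDestinationName("import.queue");   // illustrative queue name
container.setConcurrentConsumers(5);             // number of parallel consumers
container.setMessageListener((javax.jms.MessageListener) message -> {
    // ... run the import for this message ...
});
container.afterPropertiesSet();
container.start();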
I am trying to stream time-series data using Spring Framework's SimpMessagingTemplate (default STOMP implementation) to broadcast messages to a topic that the SockJS client subscribes to. However, the messages are received out of order. The server is single-threaded and messages are sent in ascending order by their timestamps, yet the client somehow receives them out of order.
I am using the latest release of both stomp.js and Spring Framework (4.1.6).
Looks like there is a built-in striped executor now, so just enable it:
@Override
protected void configureMessageBroker(MessageBrokerRegistry registry) {
    // ...
    registry.setPreservePublishOrder(true);
}
https://docs.spring.io/spring/docs/current/spring-framework-reference/web.html#websocket-stomp-ordered-messages
Found the root cause of this issue. The messages were sent in the "correct" order from the application's perspective (i.e., convertAndSend() is called from one thread, or at least in a thread-safe fashion). However, Spring's WebSocket support uses a reactor-tcp implementation that processes messages on the clientOutboundChannel using a thread pool, so messages can be written to the TCP socket in a different order than they arrived. When I configured the WebSocket support to limit the clientOutboundChannel to 1 thread, the order was preserved.
This problem is not in SockJS but a limitation of the current Spring WebSocket design.
It's a Spring WebSocket design problem. To receive messages in the correct order you have to set the corePoolSize of the WebSocket client channels to 1.
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketMessageBrokerConfiguration extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureClientOutboundChannel(ChannelRegistration registration) {
        registration.taskExecutor().corePoolSize(1);
    }

    @Override
    public void configureClientInboundChannel(ChannelRegistration registration) {
        registration.taskExecutor().corePoolSize(1);
    }
}
UPDATE
Please see @Jason's answer. Spring 5.1 has setPreservePublishOrder() to order the messages based on their client ID.
I experienced this issue as well. I don't like limiting my thread pool size to 1, as this would create overhead in my application. Instead, I used a StripedExecutorService to process messages coming in and out of my application. This type of executor service guarantees ordered processing of messages for tasks that have the same stripe. In my case, I use the WebSocket session ID as the stripe. Register this executor via ChannelRegistration.taskExecutor() on your inbound, broker, and outbound channels and this will guarantee ordered messages. Choose your stripe wisely.