I'm using a Spring Boot WAR deployed in Wildfly 14 and have implemented a JmsListener that is connected to a queue. The listener's concurrency is set to 5, and when the Spring app runs standalone I see 5 listeners working in parallel. In combination with Wildfly 14, however, only 1 listener is running.
In Java EE I would annotate the message-driven bean with @Pool and could then configure the max-pool-size for that pool. But I think the Spring listener just connects to the default MDB pool, which has a size of 1.
Is there a way to connect the JmsListener to a specific bean-instance-pool? Or is there any other way to define an individual max-pool-size for this JmsListener?
standalone.xml
<subsystem xmlns="urn:jboss:domain:ejb3:5.0">
    ...
    <pools>
        <bean-instance-pools>
            ...
            <strict-max-pool name="individual-strict-max-pool" max-pool-size="5" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/>
        </bean-instance-pools>
    </pools>
JmsListener
@JmsListener(destination = JMS_MESSAGE_NAME, concurrency = "5")
public void receiveFromMessageQueue(Message msg) {
    ...
}
In a JEE application server, the JCA specification limits the number of JMS sessions on a JMS connection to one. In your standalone Spring Boot deployment you have 5 concurrent consumers on the JmsListener; this is achieved by having 5 JMS sessions on the one JMS connection that is managed by the Spring JMS listener container.
If you instantiate your own connection factory (not using the Wildfly JCA connection factories), then you can have multiple sessions per connection.
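For example, a minimal sketch of that approach (assumptions: a plain Artemis JMS client connection factory and a listener container factory defined in your own Spring configuration; the broker URL is a placeholder for an acceptor of the Artemis broker you want to consume from):

import javax.jms.ConnectionFactory;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

@Configuration
@EnableJms
public class JmsClientConfig {

    @Bean
    public ConnectionFactory artemisConnectionFactory() {
        // Placeholder URL: point this at an acceptor of the Artemis broker embedded in Wildfly.
        return new ActiveMQConnectionFactory("tcp://localhost:61616");
    }

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory artemisConnectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        // A plain (non-JCA) connection factory is not limited to one session per connection,
        // so the container can actually run 5 concurrent consumers.
        factory.setConnectionFactory(artemisConnectionFactory);
        factory.setConcurrency("5");
        return factory;
    }
}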
Related
I need to connect from code running in WebSphere Liberty to an MDB in Apache TomEE Plume. I am using activemq-rar-5.16.3.
Here is the Java code:
public void notifyListeners(String caseId) {
    logger.debug("+notifyListeners");
    int timeToLive = 15 * 1000; // 15 seconds
    try {
        logger.debug("Creating context");
        InitialContext ic = new InitialContext();
        logger.debug("Got Initial context");
        ConnectionFactory jmsFactory = (ConnectionFactory) ic.lookup("jndi/JMS_BASE_QCF");
        logger.debug("Got Factory");
        JMSContext context = jmsFactory.createContext();
        logger.debug("Creating text message");
        TextMessage msg = context.createTextMessage(caseId);
        logger.debug("Sending text message");
        context.createProducer().setTimeToLive(timeToLive).send(jmsSendQueue, msg);
        logger.debug("Text message sent");
    } catch (Throwable e) {
        logger.error("Caught Exception sending ActiveMQ Message : " + e, e);
    }
    logger.debug("-notifyListeners");
}
No matter what I try, the code hangs at jmsFactory.createContext(). There's no exception. It just hangs.
I can see from the Apache TomEE logs that an ActiveMQ listener has been created on tcp://127.0.0.1:61616 and verified this with a netstat command.
I can't move to the later version of the rar because it relies upon a Java 11 JRE.
Does anyone have any ideas how I can debug this? Wireshark shows nothing, and changing the Liberty definition to point the ActiveMQ Connection Factory to 61615 changes nothing - so I don't think the createContext method is getting as far as contacting the ActiveMQ broker. It's hardly relevant, but this method runs in an asynchronous CDI event handler in Liberty. There is nothing untoward in the Liberty logs, and no FFDC events.
Some more details:
Liberty: product = WebSphere Application Server 21.0.0.1 (wlp-1.0.48.cl210120210113-1459)
Apache TomEE: Apache Tomcat (TomEE)/9.0.52 (8.0.8)
My server.xml (relevant bits):
<featureManager>
    <feature>ejbLite-3.2</feature>
    <feature>jaxws-2.2</feature>
    <feature>jndi-1.0</feature>
    <feature>jpa-2.2</feature>
    <feature>jpaContainer-2.2</feature>
    <feature>jsp-2.3</feature>
    <feature>localConnector-1.0</feature>
    <feature>mdb-3.2</feature>
    <feature>microProfile-3.3</feature>
    <feature>monitor-1.0</feature>
    <feature>wasJmsClient-2.0</feature>
    <feature>wasJmsSecurity-1.0</feature>
    <feature>wasJmsServer-1.0</feature>
    <feature>wmqJmsClient-2.0</feature>
    <feature>jms-2.0</feature>
</featureManager>
<!--============================================= -->
<!-- Liberty to TomEE JMS over ActiveMQ Config -->
<!--============================================= -->
<resourceAdapter id="activemq" location="C:\apps\liberty\ActiveMQRAR\activemq-rar-5.16.3.rar">
    <properties.activemq ServerUrl="tcp://127.0.0.1:61616"/>
</resourceAdapter>
<jmsQueueConnectionFactory jndiName="jndi/JMS_BASE_QCF">
    <properties.activemq serverUrl="tcp://127.0.0.1:61616"/>
</jmsQueueConnectionFactory>
<jmsQueue jndiName="jndi/worklistQueue">
    <properties.activemq PhysicalName="jms/worklistQueue"/>
</jmsQueue>
<!--============================================= -->
<!-- Liberty to TomEE JMS over ActiveMQ Config end-->
<!--============================================= -->
My main concern here is the use of createContext() in your notifyListeners method. ActiveMQ "Classic" (i.e. 5.x) doesn't fully support JMS 2 so you can't use the JMSContext API with it. JMS 2 is backwards compatible with JMS 1.1 (which ActiveMQ "Classic" fully supports) so you can still integrate using the ActiveMQ "Classic" JCA RA. You just can't use any APIs which are specific to JMS 2 (e.g. createContext()).
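For reference, a JMS 1.1 style version of the sending code might look roughly like this (a sketch only, reusing caseId, jmsSendQueue and logger from your original method, and only the plain javax.jms / javax.naming classes):

// JMS 1.1 style send; avoids the JMS 2 JMSContext API, which the ActiveMQ 5.x RA does not support.
public void notifyListeners(String caseId) {
    int timeToLive = 15 * 1000; // 15 seconds
    Connection connection = null;
    try {
        InitialContext ic = new InitialContext();
        ConnectionFactory jmsFactory = (ConnectionFactory) ic.lookup("jndi/JMS_BASE_QCF");
        connection = jmsFactory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(jmsSendQueue);
        producer.setTimeToLive(timeToLive);
        TextMessage msg = session.createTextMessage(caseId);
        producer.send(msg);
    } catch (Exception e) {
        logger.error("Caught Exception sending ActiveMQ Message : " + e, e);
    } finally {
        if (connection != null) {
            try { connection.close(); } catch (JMSException ignore) { /* nothing useful to do */ }
        }
    }
}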
I have set up an Artemis HA cluster example locally on my computer to learn how it basically works. Now I want to prepare it to be pushed into a Kubernetes cluster, so I want to change the way the broker nodes do their initial membership discovery so that it also works in the cloud. I want to use JMS and JGroups with "jdbc_ping". I'm not sure whether I'm doing it right, so maybe you can tell me if not.
So far the brokers have successfully put their info into the database table and are apparently connected. When I try the following connectionFactory from my Java application, it starts without errors and connects to the brokers. But on some points I am not sure whether it behaves correctly.
@Bean
public ConnectionFactory connectionFactory() {
    TransportConfiguration transportConfiguration = new TransportConfiguration(NettyConnectorFactory.class.getName());
    ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactoryWithHA(JMSFactoryType.CF, transportConfiguration);
    return cf;
}
So the single open question now is how to set up the connectionFactory correctly for use with JGroups.
UPDATE:
INFO 24528 --- [enerContainer-1] o.s.j.l.DefaultMessageListenerContainer : JMS message listener invoker needs to establish shared Connection
ERROR 24528 --- [enerContainer-1] o.s.j.l.DefaultMessageListenerContainer : Could not refresh JMS Connection for destination 'TestA' - retrying using FixedBackOff{interval=5000, currentAttempts=0, maxAttempts=unlimited}. Cause: Failed to create session factory; nested exception is ActiveMQInternalErrorException[errorType=INTERNAL_ERROR message=AMQ219004: Failed to initialise session factory]
The ActiveMQ Artemis documentation covers this:
Lastly, the jgroups scheme is supported which provides an alternative to the udp scheme for server discovery. The URL pattern is either jgroups://channelName?file=jgroups-xml-conf-filename where jgroups-xml-conf-filename refers to an XML file on the classpath that contains the JGroups configuration or it can be jgroups://channelName?properties=some-jgroups-properties. In both instances the channelName is the name given to the jgroups channel created.
In your code you can do something like this:
@Bean
public ConnectionFactory connectionFactory() {
    return new ActiveMQConnectionFactory("jgroups://channelName?file=jgroups-xml-conf-filename");
}
In your case the client will need access to the same database that the brokers are using in order to use that information for discovery.
My environment: Spring 4.1, JBoss EAP 6.4, IBM MQ 8.0.
Messages are not redelivered when the listener throws a RuntimeException.
I have the following in JmsConfig:
@Bean
DefaultMessageListenerContainer defaultMessageListenerContainer(QueueConnectionFactory connectionFactory, JndiDestinationResolver dr, MessageListener ml) {
    DefaultMessageListenerContainer mlc = new DefaultMessageListenerContainer();
    mlc.setConnectionFactory(connectionFactory);
    mlc.setMessageListener(ml);
    mlc.setDestinationName(jndiInQueue);
    mlc.setDestinationResolver(dr);
    mlc.setSessionTransacted(true);
    mlc.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    return mlc;
}
If I use a JmsTransactionManager, pass it to the above method, and use it like so:
mlc.setTransactionManager(tm)
The following warning is written to the log:
It is not valid to commit a non-transacted session
and the behavior is the same: no redelivery.
The ConnectionFactory is obtained via JNDI; I wonder if sourcing it through JNDI has something to do with this?
From the AbstractMessageListenerContainer Javadocs:
In order to consistently arrange for redelivery with any container variant, consider "CLIENT_ACKNOWLEDGE" mode or - preferably - setting "sessionTransacted" to "true" instead
There is a similar question on SO.
Flip your ack mode to Session.SESSION_TRANSACTED instead of CLIENT_ACKNOWLEDGE.
Client ack mode doesn't work the way most folks expect, and it is a common "gotcha" in JMS. It acknowledges the current message AND all previous messages in the session; it is not per-message acknowledgement.
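Applied to the container from the question, that is roughly (sketch):

// A transacted session is rolled back when the listener throws a RuntimeException,
// so the broker redelivers the message.
mlc.setSessionTransacted(true);
mlc.setSessionAcknowledgeMode(Session.SESSION_TRANSACTED);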
Edit:
Also check the related post: IBM MQ may require you to use the "XA" versions of the connection factory class.
ref: Websphere Liberty profile - transacted Websphere MQ connection factory
Out-of-the-box, Wildfly 10 configures a pooled connection factory as part of the JMS subsystem with two entries.
<pooled-connection-factory name="activemq-ra"
transaction="xa"
connectors="in-vm"
entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory"/>
One might inject a connection factory like so:
@Resource(mappedName = "java:jboss/DefaultJMSConnectionFactory")
private ConnectionFactory connectionFactory;
What is the difference between this and choosing the other entry, java:/JmsXA?
There is no difference; it's an additional JNDI entry to match the JMS 2 spec's default ConnectionFactory name, java:comp/DefaultJMSConnectionFactory. You should resolve it using this name.
Since JMS 2.0, a default JMS connection factory is accessible to EE applications under the JNDI name java:comp/DefaultJMSConnectionFactory. The WildFly messaging subsystem defines a pooled-connection-factory that is used to provide this default connection factory. Any parameter change on this pooled-connection-factory will be taken into account by any EE application looking up the default JMS provider under the JNDI name java:comp/DefaultJMSConnectionFactory.
See https://docs.jboss.org/author/display/WFLY9/Messaging+configuration
The other one is just a legacy identifier:
The JCA layer intercepts the calls to createConnection() and createSession() and provides a caching layer (amongst other things). So when you call createConnection() or createSession(), in most cases it's not really calling the actual JMS implementation to create a new JMS connection or JMS session; it's just returning one from its own internal cache. In other words, the JCA layer pools JMS connections and JMS sessions.
In JBoss application server, the "special" JMS connection factory which provides the JCA caching is usually available at java:/JmsXA in jndi.
See https://developer.jboss.org/wiki/ShouldICacheJMSConnectionsAndJMSSessions
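In a portable application you can therefore inject the spec-defined default directly (a sketch; the field name is arbitrary):

// JMS 2.0 default connection factory; in WildFly this resolves to the same
// pooled-connection-factory that backs java:jboss/DefaultJMSConnectionFactory and java:/JmsXA.
@Resource(lookup = "java:comp/DefaultJMSConnectionFactory")
private ConnectionFactory defaultConnectionFactory;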
I configured a Spring XML-based interceptor which sends a JMS message to an ActiveMQ queue on each invocation of some transactional method, after it has committed. This happens with the following XML code.
<jms:outbound-channel-adapter channel="filteredStakesChannel" destination="stakesQueue" delivery-persistent="true" explicit-qos-enabled="true" />
But if the ActiveMQ server is down I get a connection refused exception, which is propagated, and I don't want that to happen even if the JMS delivery fails. Is this possible?
Should I use some error-channel?
The simplest solution is to make filteredStakesChannel an executor channel, so the send will run on a different thread.
http://static.springsource.org/spring-integration/reference/html/messaging-channels-section.html#executor-channel
http://static.springsource.org/spring-integration/reference/html/messaging-channels-section.html#channel-configuration-executorchannel
Use the <task/> namespace to define an executor to use, for example:
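(A sketch only; the executor id, pool size, and namespace prefixes are placeholders for your configuration.)

<!-- hand the send off to a separate thread, so a broker outage does not propagate
     back into the transactional caller -->
<task:executor id="stakesExecutor" pool-size="2"/>

<int:channel id="filteredStakesChannel">
    <int:dispatcher task-executor="stakesExecutor"/>
</int:channel>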