I'm working with ActiveMQ Artemis 2.17 and Spring Boot 2.5.7.
I'm publishing messages to a topic and a queue and consuming them; everything goes through JMS.
All queues (anycast or multicast) are durable. My topic (multicast address) has 2 durable queues in order to have 2 separate consumers.
For the topic, the 2 consumers use durable, shared subscriptions (JMS 2.0).
All processing is transactional, managed through the Atomikos transaction manager (I need it for a two-phase commit with the database).
I have a problem with the redelivery policy and the DLQ. When I throw an exception during the processing of a message, the redelivery policy applies correctly for the queue (anycast queue) and a DLQ is created with the message. However, for the topic (multicast queue), the redelivery policy does not apply and the message is not sent to a DLQ.
Here is my ActiveMQ Artemis broker configuration:
<?xml version='1.0'?>
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>0.0.0.0</name>
<!-- Codec and key used to encode the passwords -->
<!-- TODO : set master-password configuration into the Java code -->
<password-codec>org.apache.activemq.artemis.utils.DefaultSensitiveStringCodec;key=UBNTd0dS9w6f8HDIyGW9
</password-codec>
<!-- Configure the persistence into a database (postgresql) -->
<persistence-enabled>true</persistence-enabled>
<store>
<database-store>
<bindings-table-name>BINDINGS_TABLE</bindings-table-name>
<message-table-name>MESSAGE_TABLE</message-table-name>
<page-store-table-name>PAGE_TABLE</page-store-table-name>
<large-message-table-name>LARGE_MESSAGES_TABLE</large-message-table-name>
<node-manager-store-table-name>NODE_MANAGER_TABLE</node-manager-store-table-name>
<jdbc-lock-renew-period>2000</jdbc-lock-renew-period>
<jdbc-lock-expiration>20000</jdbc-lock-expiration>
<jdbc-journal-sync-period>5</jdbc-journal-sync-period>
<!-- Configure a connection pool -->
<data-source-properties>
<data-source-property key="driverClassName" value="org.postgresql.Driver"/>
<data-source-property key="url" value="jdbc:postgresql://localhost/artemis"/>
<data-source-property key="username" value="postgres"/>
<data-source-property key="password" value="ENC(-3eddbe9664c85ec7ed63588b000486cb)"/>
<data-source-property key="poolPreparedStatements" value="true"/>
<data-source-property key="initialSize" value="2"/>
<data-source-property key="minIdle" value="1"/>
</data-source-properties>
</database-store>
</store>
<!-- Configure the addresses, queues and topics default behaviour -->
<!-- See: https://activemq.apache.org/components/artemis/documentation/latest/address-model.html -->
<address-settings>
<address-setting match="#">
<dead-letter-address>DLA</dead-letter-address>
<auto-create-dead-letter-resources>true</auto-create-dead-letter-resources>
<dead-letter-queue-prefix/>
<dead-letter-queue-suffix>.DLQ</dead-letter-queue-suffix>
<expiry-address>ExpiryQueue</expiry-address>
<auto-create-expiry-resources>false</auto-create-expiry-resources>
<expiry-queue-prefix/>
<expiry-queue-suffix>.EXP</expiry-queue-suffix>
<expiry-delay>-1</expiry-delay>
<max-delivery-attempts>5</max-delivery-attempts>
<redelivery-delay>250</redelivery-delay>
<redelivery-delay-multiplier>2.0</redelivery-delay-multiplier>
<redelivery-collision-avoidance-factor>0.5</redelivery-collision-avoidance-factor>
<max-redelivery-delay>10000</max-redelivery-delay>
<max-size-bytes>100000</max-size-bytes>
<page-size-bytes>20000</page-size-bytes>
<page-max-cache-size>5</page-max-cache-size>
<max-size-bytes-reject-threshold>-1</max-size-bytes-reject-threshold>
<address-full-policy>PAGE</address-full-policy>
<message-counter-history-day-limit>0</message-counter-history-day-limit>
<default-last-value-queue>false</default-last-value-queue>
<default-non-destructive>false</default-non-destructive>
<default-exclusive-queue>false</default-exclusive-queue>
<default-consumers-before-dispatch>0</default-consumers-before-dispatch>
<default-delay-before-dispatch>-1</default-delay-before-dispatch>
<redistribution-delay>0</redistribution-delay>
<send-to-dla-on-no-route>true</send-to-dla-on-no-route>
<slow-consumer-threshold>-1</slow-consumer-threshold>
<slow-consumer-policy>NOTIFY</slow-consumer-policy>
<slow-consumer-check-period>5</slow-consumer-check-period>
<!-- We disable the automatic creation of queue or topic -->
<auto-create-queues>false</auto-create-queues>
<auto-delete-queues>true</auto-delete-queues>
<auto-delete-created-queues>false</auto-delete-created-queues>
<auto-delete-queues-delay>30000</auto-delete-queues-delay>
<auto-delete-queues-message-count>0</auto-delete-queues-message-count>
<config-delete-queues>OFF</config-delete-queues>
<!-- We disable the automatic creation of address -->
<auto-create-addresses>false</auto-create-addresses>
<auto-delete-addresses>true</auto-delete-addresses>
<auto-delete-addresses-delay>30000</auto-delete-addresses-delay>
<config-delete-addresses>OFF</config-delete-addresses>
<management-browse-page-size>200</management-browse-page-size>
<default-purge-on-no-consumers>false</default-purge-on-no-consumers>
<default-max-consumers>-1</default-max-consumers>
<default-queue-routing-type>ANYCAST</default-queue-routing-type>
<default-address-routing-type>ANYCAST</default-address-routing-type>
<default-ring-size>-1</default-ring-size>
<retroactive-message-count>0</retroactive-message-count>
<enable-metrics>true</enable-metrics>
<!-- We automatically force group rebalance and a dispatch pause during group rebalance -->
<default-group-rebalance>true</default-group-rebalance>
<default-group-rebalance-pause-dispatch>true</default-group-rebalance-pause-dispatch>
<default-group-buckets>1024</default-group-buckets>
</address-setting>
</address-settings>
<!-- Define the protocols accepted -->
<!-- See: https://activemq.apache.org/components/artemis/documentation/latest/protocols-interoperability.html -->
<acceptors>
<!-- Acceptor for only CORE protocol -->
<!-- We enable the cache destination as recommended into the documentation. See: https://activemq.apache.org/components/artemis/documentation/latest/using-jms.html -->
<acceptor name="artemis">
tcp://0.0.0.0:61616?protocols=CORE;tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;useEpoll=true;cacheDestinations=true
</acceptor>
</acceptors>
<!-- Define how to connect to another broker -->
<!-- TODO: is this actually needed? -->
<connectors>
<connector name="netty-connector">tcp://localhost:61616</connector>
<connector name="broker1-connector">tcp://localhost:61616</connector>
<connector name="broker2-connector">tcp://localhost:61617</connector>
</connectors>
<!-- Configure the High-Availability and broker cluster for the high-availability -->
<ha-policy>
<shared-store>
<master>
<failover-on-shutdown>true</failover-on-shutdown>
</master>
</shared-store>
</ha-policy>
<cluster-connections>
<cluster-connection name="gerico-cluster">
<connector-ref>netty-connector</connector-ref>
<static-connectors>
<connector-ref>broker1-connector</connector-ref>
<connector-ref>broker2-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<!-- <cluster-user>cluster_user</cluster-user>-->
<!-- <cluster-password>cluster_user_password</cluster-password>-->
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>72000</page-sync-timeout>
<!-- the system will enter into page mode once you hit this limit.
This is an estimate in bytes of how much the messages are using in memory
The system will use half of the available memory (-Xmx) by default for the global-max-size.
You may specify a different value here if you need to customize it to your needs.
<global-max-size>100Mb</global-max-size>
-->
<!-- Security configuration -->
<security-enabled>false</security-enabled>
<!-- Addresses and queues configuration -->
<!-- !!! DON'T FORGET TO UPDATE 'slave-broker.xml' FILE !!! -->
<addresses>
<address name="topic.test_rde">
<multicast>
<queue name="rde_receiver_1">
<durable>true</durable>
</queue>
<queue name="rde_receiver_2">
<durable>true</durable>
</queue>
</multicast>
</address>
<address name="queue.test_rde">
<anycast>
<queue name="queue.test_rde">
<durable>true</durable>
</queue>
</anycast>
</address>
</addresses>
</core>
</configuration>
The JMS configuration in my Spring Boot application is the following:
@Bean
public DynamicDestinationResolver destinationResolver() {
    return new DynamicDestinationResolver() {
        @Override
        public Destination resolveDestinationName(Session session, String destinationName, boolean pubSubDomain) throws JMSException {
            if (destinationName.startsWith("topic.")) {
                pubSubDomain = true;
            } else {
                pubSubDomain = false;
            }
            return super.resolveDestinationName(session, destinationName, pubSubDomain);
        }
    };
}
@Bean
public JmsListenerContainerFactory<?> queueConnectionFactory(ConnectionFactory connectionFactory,
                                                             DefaultJmsListenerContainerFactoryConfigurer configurer,
                                                             JmsErrorHandler jmsErrorHandler) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(false);
    factory.setSessionTransacted(true);
    factory.setErrorHandler(jmsErrorHandler);
    return factory;
}
@Bean
public JmsListenerContainerFactory<?> topicConnectionFactory(ConnectionFactory connectionFactory,
                                                             DefaultJmsListenerContainerFactoryConfigurer configurer,
                                                             JmsErrorHandler jmsErrorHandler) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    // This provides all of Boot's defaults to this factory, including the message converter
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(true);
    factory.setSessionTransacted(true);
    factory.setSubscriptionDurable(true);
    factory.setSubscriptionShared(true);
    factory.setErrorHandler(jmsErrorHandler);
    return factory;
}
The message publisher:
@GetMapping("api/send")
public void sendDataToJms() throws InterruptedException {
    OrderRest currentService = this.applicationContext.getBean(OrderRest.class);
    for (Long i = 0L; i < 1L; i++) {
        currentService.sendTopicMessage(i);
        currentService.sendQueueMessage(i);
        Thread.sleep(200L);
    }
}

@Transactional
public void sendTopicMessage(Long id) {
    Order myMessage = new Order("--" + id.toString() + "--", new Date());
    jmsTemplate.convertAndSend("topic.test_rde", myMessage);
}

@Transactional
public void sendQueueMessage(Long id) {
    Order myMessage = new Order("--" + id.toString() + "--", new Date());
    jmsTemplate.convertAndSend("queue.test_rde", myMessage);
}
And the listeners:
@Transactional
@JmsListener(destination = "topic.test_rde", containerFactory = "topicConnectionFactory", subscription = "rde_receiver_1")
public void receiveMessage_rde_1(@Payload Order order, @Headers MessageHeaders headers, Message message, Session session) {
    log.info("---> Consumer1 - rde_receiver_1 - " + order.getContent());
    throw new ValidationException("Voluntary exception", "entity", List.of(), List.of());
}

@Transactional
@JmsListener(destination = "queue.test_rde", containerFactory = "queueConnectionFactory")
public void receiveMessage_rde_queue(@Payload Order order, @Headers MessageHeaders headers, Message message, Session session) {
    log.info("---> Consumer1 - rde_receiver_queue - " + order.getContent());
    throw new ValidationException("Voluntary exception", "entity", List.of(), List.of());
}
Is it normal that the redelivery policy and the DLQ mechanism only apply to a queue (anycast queue)?
Is it possible to also apply them to a topic (multicast address) with shared durable subscriptions?
If not, how can I get topic-like behaviour but with the redelivery and DLQ mechanisms? Should I use the divert feature of ActiveMQ Artemis?
Thanks a lot for your help.
Regards.
I have found the problem. It comes from Atomikos, which supports only JMS 1.1, not JMS 2.0.
So it's not possible to have a shared durable subscription on a topic that supports both two-phase commit and the redelivery policy at the same time.
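For reference, if the topic consumers can give up the XA transaction and use a local JMS transaction instead, the redelivery policy and the auto-created DLQs do apply to the shared-subscription queues. Below is a minimal sketch, assuming a plain non-XA ActiveMQConnectionFactory is registered under the hypothetical qualifier "nonXaConnectionFactory" (not something from the original setup):
@Bean
public JmsListenerContainerFactory<?> localTxTopicFactory(
        @Qualifier("nonXaConnectionFactory") ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(true);
    // Local JMS transaction: a listener exception rolls the session back, so the
    // broker counts delivery attempts and eventually routes the message to the
    // auto-created DLQ for that subscription queue.
    factory.setSessionTransacted(true);
    // Shared durable subscriptions are a JMS 2.0 feature, hence not usable
    // through the Atomikos (JMS 1.1) connection factory.
    factory.setSubscriptionDurable(true);
    factory.setSubscriptionShared(true);
    return factory;
}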
Related
I have a test application (Spring Boot 2.7.8) that uses ActiveMQ Artemis 2.27.1 as a messaging system. I have a 6-node cluster split into 3 live/backup pairs, load balanced using ON_DEMAND with a redistribution delay of 2000.
The application creates a connection factory specifying all 3 live nodes, i.e. a with-HA connection factory.
I have a generator class that publishes messages to a single queue. There is a consumer of that queue which replicates this message to 3 different queues. I am aware of topics and wish to move there eventually, but I am modeling an existing solution which does this sort of thing now.
Testing shows that when I publish a message and consume it, the consumer publishes to the other 3 queues, but messages are only consumed from 2 of them despite all 3 having listeners. Checking the queues after execution shows that messages were indeed sent to the remaining queue. This is consistent over several runs; the same queue is never consumed while I am generating 'new' events.
If I disable the initial generation of new messages and just rerun, the missing queue is then drained by its listener.
This feels like, when the connections are made, this queue ends up with its publisher on one node and its consumer on another, and redistribution is not happening. I'm not sure how to prove this, or why the publishing node is not redistributing to the node that has the consumer when it has no consumers of its own.
Connection factory bean
@Bean
public ActiveMQConnectionFactory jmsConnectionFactory() throws Exception {
    HashMap<String, Object> map1 = new HashMap<>();
    map1.put("host", "192.168.0.10");
    map1.put("port", "61616");
    HashMap<String, Object> map2 = new HashMap<>();
    map2.put("host", "192.168.0.11");
    map2.put("port", "61617");
    HashMap<String, Object> map3 = new HashMap<>();
    map3.put(TransportConstants.HOST_PROP_NAME, "192.168.0.12");
    map3.put(TransportConstants.PORT_PROP_NAME, "61618");
    TransportConfiguration server1 = new TransportConfiguration(NettyConnectorFactory.class.getName(), map1);
    TransportConfiguration server2 = new TransportConfiguration(NettyConnectorFactory.class.getName(), map2);
    TransportConfiguration server3 = new TransportConfiguration(NettyConnectorFactory.class.getName(), map3);
    ActiveMQConnectionFactory connectionFactory = ActiveMQJMSClient.createConnectionFactoryWithHA(JMSFactoryType.CF, server1, server2, server3);
    ActiveMQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.QUEUE_CF, server1);
    connectionFactory.setPassword(brokerPassword);
    connectionFactory.setUser(brokerUsername);
    return connectionFactory;
}
Listener factory bean
@Bean
public DefaultJmsListenerContainerFactory jmsQueueListenerContainerFactory() throws Exception {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(jmsConnectionFactory());
    //factory.setConcurrency("4-10");
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    factory.setSessionTransacted(true);
    return factory;
}
This handler listens to the initially published queue and splits the message out to the other queues:
@Slf4j
@Component
@RequiredArgsConstructor
public class TransactionManagerListener {

    private final JmsTemplate jmsTemplate;

    /**
     * Handle the ItemStatsUpdate event.
     *
     * @param data - Event details wrapper object
     * @throws RuntimeException that triggers a retry for that item following the backoff rules in the retryable
     */
    @JmsListener(destination = "NewItem", containerFactory = "jmsQueueListenerContainerFactory")
    public void podA(Session session, Message message, String data) throws RuntimeException {
        log.info("TML {}!", data);
        sendItemOn(data);
    }

    private void sendItemOn(String data) {
        jmsTemplate.convertAndSend("Stash", data);
        jmsTemplate.convertAndSend("PCE", data);
        jmsTemplate.convertAndSend("ACD", data);
    }
}
Extract from broker.xml. All nodes are slightly different, to hook up the different live servers and their backups:
<connectors>
<connector name="live1-connector">tcp://192.168.0.10:61616</connector>
<connector name="live2-connector">tcp://192.168.0.11:61617</connector>
<connector name="live3-connector">tcp://192.168.0.12:61618</connector>
<connector name="back1-connector">tcp://192.168.0.13:61619</connector>
<connector name="back2-connector">tcp://192.168.0.10:61620</connector>
<connector name="back3-connector">tcp://192.168.0.11:61621</connector>
</connectors>
<cluster-user>my-cluster-user</cluster-user>
<cluster-password>my-cluster-password</cluster-password>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>live2-connector</connector-ref>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<static-connectors>
<connector-ref>live1-connector</connector-ref>
<connector-ref>live3-connector</connector-ref>
<connector-ref>back2-connector</connector-ref>
<!--
<connector-ref>back1-connector</connector-ref>
<connector-ref>back3-connector</connector-ref>
-->
</static-connectors>
</cluster-connection>
</cluster-connections>
<ha-policy>
<replication>
<master>
<group-name>gloucester</group-name>
<check-for-live-server>true</check-for-live-server>
</master>
</replication>
</ha-policy>
As you can see from the commented-out concurrency setting, I have tried to tweak the threads and consumers available in the listener factory, but it made no difference.
This came down to network configuration. It is necessary to run the cluster on the local swarm network but provide ingress connectors for the JMS client.
<connectors>
<connector name="live1-connector">tcp://192.168.0.10:61616</connector>
<connector name="live2-connector">tcp://192.168.0.11:61617</connector>
<connector name="live3-connector">tcp://192.168.0.12:61618</connector>
<connector name="back1-connector">tcp://192.168.0.13:61619</connector>
<connector name="back2-connector">tcp://192.168.0.10:61620</connector>
<connector name="back3-connector">tcp://192.168.0.11:61621</connector>
<connector name="live1-connector">tcp://live1:61616</connector>
<connector name="live2-connector">tcp://live2:61616</connector>
<connector name="live3-connector">tcp://live3:61616</connector>
<connector name="back1-connector">tcp://back1:61616</connector>
<connector name="back2-connector">tcp://back2:61616</connector>
<connector name="back3-connector">tcp://back3:61616</connector>
</connectors>
<cluster-user>my-cluster-user</cluster-user>
<cluster-password>my-cluster-password</cluster-password>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>live3-connector</connector-ref>
<static-connectors>
<connector-ref>back3-connector</connector-ref>
<connector-ref>live1-connector</connector-ref>
<connector-ref>back1-connector</connector-ref>
<connector-ref>live2-connector</connector-ref>
<connector-ref>back2-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
I am new to Spring and Hibernate and I would like to create RESTful web services using these technologies.
I am getting a 404 error when calling the URL http://localhost:8080/RakeshMavenProject/book.
I have created controller, service, and DAO classes, and put the configuration parameters in AppConfig.java:
@Configuration
@EnableTransactionManagement
@ComponentScans(value = {@ComponentScan("com.example.ws.maven_web_service_project.service"),
        @ComponentScan("com.example.ws.maven_web_service_project.dao")})
public class AppConfig {

    @Bean
    public LocalSessionFactoryBean sessionFactoryBean() {
        LocalSessionFactoryBean sessionFactoryBean = new LocalSessionFactoryBean();
        sessionFactoryBean.setDataSource(dataSource());
        sessionFactoryBean.setPackagesToScan(new String[] { "com.example.ws.maven_web_service_project.model" });
        sessionFactoryBean.setHibernateProperties(hibernateProperties());
        return sessionFactoryBean;
    }

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setUrl("jdbc:mysql://localhost:3306/hibernate_test");
        ds.setUsername("root");
        ds.setPassword("root");
        return ds;
    }

    private Properties hibernateProperties() {
        Properties props = new Properties();
        props.put("hibernate.dialect", "org.hibernate.dialect.MySQLDialect");
        props.put("hibernate.show_sql", "true");
        props.put("hibernate.hbm2ddl.auto", "update");
        // Setting C3P0 properties
        props.put("hibernate.c3p0.min_size", 5);
        props.put("hibernate.c3p0.max_size", 20);
        props.put("hibernate.c3p0.acquire_increment", 1);
        props.put("hibernate.c3p0.timeout", 1800);
        props.put("hibernate.c3p0.max_statements", 150);
        return props;
    }

    @Bean
    @Autowired
    public HibernateTransactionManager transactionManager(SessionFactory s) {
        HibernateTransactionManager txManager = new HibernateTransactionManager();
        txManager.setSessionFactory(s);
        return txManager;
    }
}
The code of server.xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Note: A "Server" is not itself a "Container", so you may not
define subcomponents such as "Valves" at this level.
Documentation at /docs/config/server.html
--><Server port="8005" shutdown="SHUTDOWN">
<Listener className="org.apache.catalina.startup.VersionLoggerListener"/>
<!-- Security listener. Documentation at /docs/config/listeners.html
<Listener className="org.apache.catalina.security.SecurityListener" />
-->
<!--APR library loader. Documentation at /docs/apr.html -->
<Listener SSLEngine="on" className="org.apache.catalina.core.AprLifecycleListener"/>
<!-- Prevent memory leaks due to use of particular java/javax APIs-->
<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener"/>
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"/>
<Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener"/>
<!-- Global JNDI resources
Documentation at /docs/jndi-resources-howto.html
-->
<GlobalNamingResources>
<!-- Editable user database that can also be used by
UserDatabaseRealm to authenticate users
-->
<Resource auth="Container" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" name="UserDatabase" pathname="conf/tomcat-users.xml" type="org.apache.catalina.UserDatabase"/>
</GlobalNamingResources>
<!-- A "Service" is a collection of one or more "Connectors" that share
a single "Container" Note: A "Service" is not itself a "Container",
so you may not define subcomponents such as "Valves" at this level.
Documentation at /docs/config/service.html
-->
<Service name="Catalina">
<!--The connectors can use a shared executor, you can define one or more named thread pools-->
<!--
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
maxThreads="150" minSpareThreads="4"/>
-->
<!-- A "Connector" represents an endpoint by which requests are received
and responses are returned. Documentation at :
Java HTTP Connector: /docs/config/http.html
Java AJP Connector: /docs/config/ajp.html
APR (HTTP/AJP) Connector: /docs/apr.html
Define a non-SSL/TLS HTTP/1.1 Connector on port 8080
-->
<Connector connectionTimeout="20000" port="8080" protocol="HTTP/1.1" redirectPort="8443"/>
<!-- A "Connector" using the shared thread pool-->
<!--
<Connector executor="tomcatThreadPool"
port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
-->
<!-- Define a SSL/TLS HTTP/1.1 Connector on port 8443
This connector uses the NIO implementation. The default
SSLImplementation will depend on the presence of the APR/native
library and the useOpenSSL attribute of the
AprLifecycleListener.
Either JSSE or OpenSSL style configuration may be used regardless of
the SSLImplementation selected. JSSE style configuration is used below.
-->
<!--
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
maxThreads="150" SSLEnabled="true">
<SSLHostConfig>
<Certificate certificateKeystoreFile="conf/localhost-rsa.jks"
type="RSA" />
</SSLHostConfig>
</Connector>
-->
<!-- Define a SSL/TLS HTTP/1.1 Connector on port 8443 with HTTP/2
This connector uses the APR/native implementation which always uses
OpenSSL for TLS.
Either JSSE or OpenSSL style configuration may be used. OpenSSL style
configuration is used below.
-->
<!--
<Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
maxThreads="150" SSLEnabled="true" >
<UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
<SSLHostConfig>
<Certificate certificateKeyFile="conf/localhost-rsa-key.pem"
certificateFile="conf/localhost-rsa-cert.pem"
certificateChainFile="conf/localhost-rsa-chain.pem"
type="RSA" />
</SSLHostConfig>
</Connector>
-->
<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443"/>
<!-- An Engine represents the entry point (within Catalina) that processes
every request. The Engine implementation for Tomcat stand alone
analyzes the HTTP headers included with the request, and passes them
on to the appropriate Host (virtual host).
Documentation at /docs/config/engine.html -->
<!-- You should set jvmRoute to support load-balancing via AJP ie :
<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
-->
<Engine defaultHost="localhost" name="Catalina">
<!--For clustering, please take a look at documentation at:
/docs/cluster-howto.html (simple how to)
/docs/config/cluster.html (reference documentation) -->
<!--
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
-->
<!-- Use the LockOutRealm to prevent attempts to guess user passwords
via a brute-force attack -->
<Realm className="org.apache.catalina.realm.LockOutRealm">
<!-- This Realm uses the UserDatabase configured in the global JNDI
resources under the key "UserDatabase". Any edits
that are performed against this UserDatabase are immediately
available for use by the Realm. -->
<Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
</Realm>
<Host appBase="webapps" autoDeploy="true" name="localhost" unpackWARs="true">
<!-- SingleSignOn valve, share authentication between web applications
Documentation at: /docs/config/valve.html -->
<!--
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
-->
<!-- Access log processes all example.
Documentation at: /docs/config/valve.html
Note: The pattern used is equivalent to using pattern="common" -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" pattern="%h %l %u %t &quot;%r&quot; %s %b" prefix="localhost_access_log" suffix=".txt"/>
<Context docBase="RakeshMavenProject" path="/RakeshMavenProject" reloadable="true" source="org.eclipse.jst.jee.server:RakeshMavenProject"/></Host>
</Engine>
</Service>
</Server>
Update: here is the code of my controller class:
@RestController
public class BookController {

    @Autowired
    private BookService bookService;

    //@PostMapping("/book")
    @RequestMapping(value = "/book/", method = RequestMethod.POST, headers = "Accept=application/json")
    public @ResponseBody ResponseEntity<?> saveBook(@RequestBody Book book) {
        int bookId = bookService.saveBook(book);
        return ResponseEntity.ok().body("Your Book with Id " + bookId + " was saved successfully.");
    }

    //@GetMapping("/book/{id}")
    @RequestMapping(value = "/book/{id}", method = RequestMethod.GET, headers = "Accept=application/json")
    public @ResponseBody ResponseEntity<Book> getBook(@PathVariable("id") int bookId) {
        Book book = bookService.getBook(bookId);
        return ResponseEntity.ok().body(book);
    }

    @RequestMapping(value = "/book/", method = RequestMethod.GET, headers = "Accept=application/json")
    public @ResponseBody ResponseEntity<List<Book>> list() {
        List<Book> books = bookService.listBooks();
        return ResponseEntity.ok().body(books);
    }
}
Can anyone please help me find the solution?
You need to specify the mapping either at the controller class level (for example a class-level @RequestMapping("/myController") next to @RestController) or at each method level, like
@RequestMapping(value="/myController/book/", method= RequestMethod.POST, headers="Accept=application/json")
Then your URL will be http://localhost:8080/RakeshMavenProject/myController/book.
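For example, a class-level mapping would look roughly like this (the "/myController" prefix is only an illustration, and the method body is elided):
@RestController
@RequestMapping("/myController")
public class BookController {

    @RequestMapping(value = "/book/", method = RequestMethod.POST, headers = "Accept=application/json")
    public ResponseEntity<String> saveBook(@RequestBody Book book) {
        // same body as in your original controller
        return ResponseEntity.ok("Book saved");
    }
}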
Your request mapping has a typo. Try using
@RequestMapping(value="/book") or access your app using http://localhost:8080/RakeshMavenProject/book/
It should work fine if there are no further errors.
I'm using Spring Boot and ActiveMQ. I want to send and receive messages from a topic. This is working well. My code looks like this:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = {
        JmsSpike.TestListener1.class,
        JmsSpike.TestListener2.class,
        JmsSpike.Config.class
})
@TestPropertySource(properties = {
        "spring.activemq.broker-url: tcp://localhost:61616",
        "spring.activemq.password: admin",
        "spring.activemq.user: admin",
        "spring.jms.pub-sub-domain: true", // queue vs. topic
})
@EnableJms
@EnableAutoConfiguration
public class JmsSpike {

    @Autowired
    private JmsTemplate jmsTemplate;

    @Test
    public void sendMessage() throws Exception {
        sendMessageInThread();
        Thread.sleep(10000);
    }

    private void sendMessageInThread() {
        new Thread() {
            public void run() {
                jmsTemplate.convertAndSend("asx2ras", "I'm a test");
            }
        }.start();
    }

    @TestComponent
    protected static class TestListener1 {

        @JmsListener(destination = "asx2ras")
        public void receiveMessage(String message) {
            System.out.println("****************** 1 *******************");
            System.out.println("Hey 1! I got a message: " + message);
            System.out.println("****************** 1 *******************");
        }
    }

    @TestComponent
    protected static class TestListener2 {

        @JmsListener(destination = "asx2ras")
        public void receiveMessage(String message) {
            throw new RuntimeException("Nope");
        }
    }

    @Configuration
    protected static class Config {

        @Bean
        public RedeliveryPolicy redeliveryPolicy() {
            RedeliveryPolicy topicPolicy = new RedeliveryPolicy();
            topicPolicy.setMaximumRedeliveries(1);
            return topicPolicy;
        }

        @Bean
        public ConnectionFactory connectionFactory(@Value("${spring.activemq.user}") final String username,
                                                   @Value("${spring.activemq.password}") final String password,
                                                   @Value("${spring.activemq.broker-url}") final String brokerUrl) {
            ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(username, password, brokerUrl);
            cf.setRedeliveryPolicy(redeliveryPolicy());
            return cf;
        }
    }
}
- I can send a message
- I receive the message with both listeners
- One listener will always work and just print some console message
- The other listener will always throw an exception
I set the retry to "1", so the failing listener will be called one more time after the exception was thrown. However, after the retry, the message is not delivered to an error queue (or error topic). How can I send the message to an error queue so that I can call the failing listener again later?
Note that I only want to call the failing listener again, not all listeners on the topic. Is that possible?
EDIT
Here's my activemq.xml (just the broker tag):
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
According to the official documentation, you can override the broker configurations on the client side:
The broker transmits the default delivery policy that he prefers to a client connection in his BrokerInfo command packet. But the client can override the policy settings by using the ActiveMQConnection.getRedeliveryPolicy() method:
RedeliveryPolicy policy = connection.getRedeliveryPolicy();
policy.setInitialRedeliveryDelay(500);
policy.setBackOffMultiplier(2);
policy.setUseExponentialBackOff(true);
policy.setMaximumRedeliveries(2);
So, the way you have configured the redelivery policy seems OK.
The only issue that I see is that when you create a new instance of RedeliveryPolicy and set only a single field (topicPolicy.setMaximumRedeliveries(1)), all the other primitive fields are left at their default values. You should probably set the maximum redeliveries on the factory's existing redelivery policy instance instead:
RedeliveryPolicy policy = cf.getRedeliveryPolicy();
policy.setMaximumRedeliveries(1);
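Applied to the connection factory bean from the question, that would look roughly like this (a sketch that reuses the question's @Value parameters):
@Bean
public ConnectionFactory connectionFactory(@Value("${spring.activemq.user}") final String username,
                                           @Value("${spring.activemq.password}") final String password,
                                           @Value("${spring.activemq.broker-url}") final String brokerUrl) {
    ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(username, password, brokerUrl);
    // Tweak the factory's existing policy instead of replacing it, so the
    // remaining redelivery defaults (delays, back-off, etc.) stay intact.
    RedeliveryPolicy policy = cf.getRedeliveryPolicy();
    policy.setMaximumRedeliveries(1);
    return cf;
}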
Edit
Also, make sure that the @JmsListener container is not using CLIENT_ACKNOWLEDGE. According to this thread, messages will not get redelivered when CLIENT_ACKNOWLEDGE is used.
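For instance, a listener container factory that relies on transacted sessions instead of CLIENT_ACKNOWLEDGE could be sketched like this (a hypothetical bean, not part of the original test class):
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // With a transacted session, a listener exception rolls the message back,
    // so the broker's redelivery policy (and DLQ handling) can kick in.
    factory.setSessionTransacted(true);
    return factory;
}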
Trying to get my Liberty profile server to create a transacted jmsConnectionFactory to my instance of WebSphere MQ.
Liberty profile v8.5.5.5
WebSphere MQ 7.x
I've tried to change the jmsQueueConnectionFactory to jmsXAQueueConnectionFactory, but it does not help: it then seems to just be ignored and doesn't connect to MQ.
server.xml
<?xml version="1.0" encoding="UTF-8"?>
<server description="new server">
<!-- Enable features -->
<featureManager>
<feature>wmqJmsClient-1.1</feature>
<feature>jndi-1.0</feature>
<feature>jsp-2.2</feature>
<feature>localConnector-1.0</feature>
</featureManager>
<variable name="wmqJmsClient.rar.location" value="D:\wlp\wmq\wmq.jmsra.rar"/>
<jmsQueueConnectionFactory jndiName="jms/wmqCF" connectionManagerRef="ConMgr6">
<properties.wmqJms
transportType="CLIENT"
hostName="hostname"
port="1514"
channel="SYSTEM.DEF.SVRCONN"
queueManager="QM"
/>
</jmsQueueConnectionFactory>
<connectionManager id="ConMgr6" maxPoolSize="2"/>
<applicationMonitor updateTrigger="mbean"/>
<application id="App"
location="...\app.war"
name="App" type="war"/>
<!-- To access this server from a remote client add a host attribute to the following element, e.g. host="*" -->
<httpEndpoint id="defaultHttpEndpoint"
httpPort="9080"
httpsPort="9443"/>
</server>
log
2015-04-23 17:07:14,981 [JmsConsumer[A0]] WARN ultJmsMessageListenerContainer - Setup of JMS message listener invoker failed for destination 'A0' - trying to recover. Cause: Could not commit JMS transaction; nested exception is com.ibm.msg.client.jms.DetailedIllegalStateException: JMSCC0014: It is not valid to call the 'commit' method on a nontransacted session. The application called a method that must not be called on a nontransacted session. Change the application program to remove this behavior.
2015-04-23 17:07:14,983 [JmsConsumer[A0]] INFO ultJmsMessageListenerContainer - Successfully refreshed JMS Connection
Camel code
public static JmsComponent mqXAComponentTransacted(InitialContext context, String jndiName) throws JMSException, NamingException {
return JmsComponent.jmsComponentTransacted((XAQueueConnectionFactory) context.lookup(jndiName));
}
Ended up with:
public static JmsComponent mqXAComponentTransacted(InitialContext context, String connectionFactoryJndiName, String userTransactionJndiName) throws JMSException, NamingException {
LOG.info("Setting up JmsComponent using jndi lookup");
final JtaTransactionManager jtaTransactionManager = new JtaTransactionManager((UserTransaction) context.lookup(userTransactionJndiName));
final ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup(connectionFactoryJndiName);
final JmsComponent jmsComponent = JmsComponent.jmsComponentTransacted(connectionFactory, (PlatformTransactionManager) jtaTransactionManager);
jmsComponent.setTransacted(false);
jmsComponent.setCacheLevel(DefaultMessageListenerContainer.CACHE_NONE);
return jmsComponent;
}
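For completeness, a usage sketch: the connection factory JNDI name comes from the server.xml above, java:comp/UserTransaction is the standard JNDI name for the container's UserTransaction, and the component name "wmq" is just an example.
CamelContext camelContext = new DefaultCamelContext();
// Register the transacted WMQ component under the name used in route URIs, e.g. "wmq:queue:A0".
// (new InitialContext() and start() throw checked exceptions; handle them as appropriate.)
camelContext.addComponent("wmq",
        mqXAComponentTransacted(new InitialContext(), "jms/wmqCF", "java:comp/UserTransaction"));
camelContext.start();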
I am using JBoss AS 6 Final on Ubuntu with HornetQ.
I have created a new queue on the server named MessageBufferQueue using the admin panel.
I get the following error:
Unable to validate user: guest for check type CONSUME for address jms.queue.MessageBufferQueue
Here are my files:
package org.jboss.ejb3timers.example;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Enumeration;
import java.util.Hashtable;
import java.util.UUID;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;
public class TestClass {
ConnectionFactory Hconnection=null;
Queue q=null;
Connection Hconn=null;
Context lContext=null;
MessageConsumer messageConsumer=null;
MessageProducer messageProducer=null;
javax.jms.Session session=null;
/**
 * @param args
 */
public void sendMessagetoJMS(String sender,String receiver,String Message,String smsc,String Credit,String userid,long ctime,String savenumber)
{
int count=0;
Hashtable<String, String> ht = new Hashtable<String, String>();
ht.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
ht.put(Context.PROVIDER_URL, "127.0.0.1");
ht.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
try{
lContext = new InitialContext(ht);
Hconnection = (ConnectionFactory) lContext.lookup("ConnectionFactory");
q = (Queue) lContext.lookup("queue/MessageBufferQueue");
Hconn = (Connection) Hconnection.createConnection("guest","guest");
session = Hconn.createSession(false, Session.AUTO_ACKNOWLEDGE);
messageProducer = session.createProducer(q);
/*
* Insert into Database
*/
UUID id=UUID.randomUUID();
Hconn.start();
textmsg msg = new textmsg();
msg.setReciever(receiver);
msg.setSender(sender);
msg.setText(Message);
msg.setSmsc(smsc);
msg.setCredit(Credit);
msg.setUserid(userid);
msg.setCtime(ctime);
msg.setId(id.toString());
ObjectMessage message = session.createObjectMessage();
message.setObject(msg);
messageProducer.send(message);
System.out.println("Message sent ");
Hconn.close();
}
catch(Exception ex)
{
ex.printStackTrace();
}
}
public int getQueueSize()
{
Hashtable<String, String> ht = new Hashtable<String, String>();
ht.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
ht.put(Context.PROVIDER_URL, "127.0.0.1");
ht.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
InitialContext ctx;
int numMsgs = 0;
try {
ctx = new InitialContext(ht);
QueueConnectionFactory connFactory = (QueueConnectionFactory) ctx.lookup("ConnectionFactory");
Queue queue = (Queue) ctx.lookup("queue/MessageBufferQueue");
QueueConnection queueConn = connFactory.createQueueConnection("guest","guest");
QueueSession queueSession = queueConn.createQueueSession(false,Session.AUTO_ACKNOWLEDGE);
QueueBrowser queueBrowser = queueSession.createBrowser(queue);
queueConn.start();
Enumeration e = queueBrowser.getEnumeration();
DateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSSSSS");
String s=null;
while (e.hasMoreElements()) {
Message message = (Message) e.nextElement();
s = df.format(message.getJMSTimestamp());
System.out.println("=================1===================Timestamp it got to the queue"+s);
numMsgs++;
}
queueConn.close();
} catch (Exception e1) {
// TODO Auto-generated catch block
System.out.println(e1.getMessage());
}
return numMsgs;
}
public static void main(String[] args) {
int i = 0;
TestClass tc = new TestClass();
System.out.println(tc.getQueueSize());
tc.sendMessagetoJMS("jk", "gv", "Hey there", "somesmsc", "34", "thedon", 234233634, "2423487");
System.out.println(tc.getQueueSize());
}
}
My HornetQ config file is
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
<!-- Make Queue Persistent -->
<persistence-enabled>true</persistence-enabled>
<log-delegate-factory-class-name>org.hornetq.integration.logging.Log4jLogDelegateFactory</log-delegate-factory-class-name>
<bindings-directory>${jboss.server.data.dir}/hornetq/bindings</bindings-directory>
<journal-directory>${jboss.server.data.dir}/hornetq/journal</journal-directory>
<!-- Default journal file size is set to 1Mb for faster first boot -->
<journal-file-size>${hornetq.journal.file.size:1048576}</journal-file-size>
<!-- Default journal min file is 2, increase for higher average msg rates -->
<journal-min-files>${hornetq.journal.min.files:2}</journal-min-files>
<!-- create new user named guest as the default user -->
<defaultuser name="guest" password="guest">
<role name="guest"/>
</defaultuser>
<!-- create new user named admin with admin stuff -->
<user name="admin" password="admin">
<role name="admin"/>
</user>
<large-messages-directory>${jboss.server.data.dir}/hornetq/largemessages</large-messages-directory>
<paging-directory>${jboss.server.data.dir}/hornetq/paging</paging-directory>
<connectors>
<connector name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="${jboss.bind.address:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5445}"/>
</connector>
<connector name="netty-throughput">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="${jboss.bind.address:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.batch.port:5455}"/>
<param key="batch-delay" value="50"/>
</connector>
<connector name="in-vm">
<factory-class>org.hornetq.core.remoting.impl.invm.InVMConnectorFactory</factory-class>
<param key="server-id" value="${hornetq.server-id:0}"/>
</connector>
</connectors>
<acceptors>
<acceptor name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="${jboss.bind.address:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5445}"/>
</acceptor>
<acceptor name="netty-throughput">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="${jboss.bind.address:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.batch.port:5455}"/>
<param key="batch-delay" value="50"/>
<param key="direct-deliver" value="false"/>
</acceptor>
<acceptor name="in-vm">
<factory-class>org.hornetq.core.remoting.impl.invm.InVMAcceptorFactory</factory-class>
<param key="server-id" value="0"/>
</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="guest"/>
<permission type="deleteNonDurableQueue" roles="guest"/>
<!-- Admin can create durable and non durable queues -->
<!-- Add permissions to make a durable queue for guest -->
<permission type="createDurableQueue" roles="admin"/>
<!-- Add permissions to make a durable queue for guest -->
<permission type="deleteDurableQueue" roles="admin"/>
<permission type="consume" roles="guest"/>
<permission type="send" roles="guest"/>
</security-setting>
</security-settings>
<address-settings>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>jms.queue.DLQ</dead-letter-address>
<expiry-address>jms.queue.ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>10485760</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>BLOCK</address-full-policy>
</address-setting>
</address-settings>
</configuration>
My stack trace is:
log4j:WARN No appenders could be found for logger (org.jnp.interfaces.TimedSocketFactory).
log4j:WARN Please initialize the log4j system properly.
Unable to validate user: guest for check type CONSUME for address jms.queue.MessageBufferQueue
0
javax.jms.JMSSecurityException: Unable to validate user: guest for check type SEND for address jms.queue.MessageBufferQueue
at org.hornetq.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:287)
at org.hornetq.core.client.impl.ClientProducerImpl.doSend(ClientProducerImpl.java:285)
at org.hornetq.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:139)
at org.hornetq.jms.client.HornetQMessageProducer.doSend(HornetQMessageProducer.java:451)
at org.hornetq.jms.client.HornetQMessageProducer.send(HornetQMessageProducer.java:199)
at org.jboss.ejb3.timerservice.example.TestClass.sendMessagetoJMS(TestClass.java:70)
at org.jboss.ejb3.timerservice.example.TestClass.main(TestClass.java:117)
Caused by: HornetQException[errorCode=105 message=Unable to validate user: guest for check type SEND for address jms.queue.MessageBufferQueue]
... 7 more
Unable to validate user: guest for check type CONSUME for address jms.queue.MessageBufferQueue
0
11 Apr, 2011 7:35:54 PM org.hornetq.core.logging.impl.JULLogDelegate warn
WARNING: I'm closing a JMS connection you left open. Please make sure you close all JMS connections explicitly before letting them go out of scope!
What seems to be the problem?
It took me a long time to solve this issue, and the answer is in the HornetQ reference documentation:
JBoss can be configured to allow client login, basically this is when a Java EE component such as a Servlet or EJB sets security credentials on the current security context and these are used throughout the call.
If you would like these credentials to be used by HornetQ when sending or consuming messages then set allowClientLogin to true. This will bypass HornetQ authentication and propagate the provided security context. If you would like HornetQ to authenticate using the propagated security then set authoriseOnClientLogin to true also.
The important part is: if you would like these credentials to be used by HornetQ when sending or consuming messages then set allowClientLogin to true
In my case, for test purposes I had deactivated authentication in my application, and thus the credentials were no longer propagated in the security context.
While trying to create queues with
queueConnection = connectionFactory.createQueueConnection("guest", "guest");
I got the exception: HornetQException[errorCode=105 message=Unable to validate user: guest
And when trying to create queues with
queueConnection = connectionFactory.createQueueConnection();
I got the exception: HornetQException[errorCode=105 message=Unable to validate user: null
After setting allowClientLogin to true in $JBOSS_HOME/server//deploy/hornetq/hornetq-jboss-beans.xml, I finally succeeded in creating the queues.
<bean name="HornetQSecurityManager" class="org.hornetq.integration.jboss.security.JBossASSecurityManager">
<start ignored="true"/>
<stop ignored="true"/>
<depends>JBossSecurityJNDIContextEstablishment</depends>
<property name="allowClientLogin">true</property>
<property name="authoriseOnClientLogin">true</property>
</bean>
I'm having a similar problem with JBoss 6 Final.
Are you using a security domain (for instance, in your jboss.xml: <security-domain>unirepo</security-domain>)?
Is the user "guest"/"guest" authenticated in your security domain? (I know it shouldn't be necessary, but it seems to be a quirk of JBoss 6.)
You can also have a look here: https://issues.jboss.org/browse/JBAS-8895
and here: http://community.jboss.org/message/587605
It might be fixed in JBoss 6.1 :(