Configuring a resource in Tomcat's context.xml to access remote WebLogic JMS queues

What I intend to do is access remote queues in Oracle WebLogic JMS (version 10.3.4) from a Spring application deployed in Tomcat 7.
For this I am trying to configure a Resource (e.g. JMS connection factory, queues, etc.) in Tomcat's context.xml file, then access this resource via a JNDI lookup in the Spring configuration file and provide it to the necessary beans. I have already created the connection factory and queues in WebLogic JMS, and they can be accessed using their JNDI names.
I am able to make this work when using ActiveMQ instead of WebLogic JMS. With WebLogic JMS, however, I am stuck on configuring the Resource element: I am not sure which attributes to use on the Resource tag when connecting to Oracle WebLogic JMS.
When working with ActiveMQ, the Resource element configuration looks like this:
<Resource name="jms/MyConnectionFactory" auth="Container"
type="org.apache.activemq.ActiveMQConnectionFactory"
factory="org.apache.activemq.jndi.JNDIReferenceFactory"
description="JMS Queue Connection Factory"
brokerURL="tcp://localhost:61616" brokerName="MyActiveMqBroker"/>
I am struggling to find the equivalent configuration for Oracle WebLogic JMS. I have gone through the documentation to see how to do it, but with no luck.
Any help or pointers would be highly appreciated.
Thanks.
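
For illustration, here is one hedged approach, since Tomcat ships no WebLogic-aware resource factory. Everything below is a sketch: the class and the providerURL/jndiName attribute names are hypothetical, while weblogic.jndi.WLInitialContextFactory and the t3:// URL scheme are the standard WebLogic client settings (the WebLogic client JAR, e.g. wlclient.jar, would have to be on Tomcat's classpath). Tomcat passes any non-standard attributes of a Resource element to its factory class as RefAddr entries, so a small custom ObjectFactory can delegate the lookup to the remote WebLogic JNDI tree:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.Name;
import javax.naming.Reference;
import javax.naming.spi.ObjectFactory;

// Hypothetical glue class (not part of Tomcat or WebLogic): resolves a
// Tomcat-defined Resource by looking it up in a remote WebLogic JNDI tree.
public class WeblogicJndiObjectFactory implements ObjectFactory {
    @Override
    public Object getObjectInstance(Object obj, Name name, Context nameCtx,
                                    Hashtable<?, ?> environment) throws Exception {
        Reference ref = (Reference) obj;
        // Custom Resource attributes arrive as RefAddr entries; these two
        // attribute names are made up for this sketch.
        String providerUrl = (String) ref.get("providerURL").getContent();
        String jndiName = (String) ref.get("jndiName").getContent();

        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, providerUrl);

        Context remote = new InitialContext(env);
        try {
            return remote.lookup(jndiName);
        } finally {
            remote.close();
        }
    }
}

The matching Resource entry would then mirror the ActiveMQ example above, with factory set to this class and providerURL (e.g. t3://weblogichost:7001) and jndiName passed as extra attributes. Alternatively, since this is a Spring application, Spring's JndiObjectFactoryBean can be given the same two environment properties directly, bypassing context.xml altogether.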

Related

JMS provider URL for jboss in-vm destination

Running an app on EAP 7 with Log4j2. We have an appender that writes to a JMS queue.
I got it working for a remote connection as below:
<JMS name="HIFAuditAppender"
destinationBindingName="jms/queue/HIFAuditQueue"
factoryBindingName="jms/RemoteConnectionFactory"
providerURL="http-remoting://127.0.0.1:8080"
username="hcmuser"
password="gators123="
factoryName="org.jboss.naming.remote.client.InitialContextFactory" />
However, the JMS producer and the MDB are running in the same JVM. I want to use the JBoss in-VM connector, but have not been able to determine what the providerURL should be set to.
The providerURL will be the same, i.e. http-remoting://127.0.0.1:8080. You need to use the in-VM connection factory, i.e. /ConnectionFactory, which uses the in-VM connector and is used to produce/consume messages locally.
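
For reference, a minimal sketch (my assumption, complementing the answer above) of what the local lookup looks like from code running inside the same EAP 7 instance; in-VM, the default InitialContext already sees the server's local JNDI tree, where the factory is bound as java:/ConnectionFactory:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.InitialContext;

public class InVmLookup {
    public static void main(String[] args) throws Exception {
        // No provider URL or credentials: this only works inside the same JVM.
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("java:/ConnectionFactory");
        Connection conn = cf.createConnection();
        conn.close();
        ctx.close();
    }
}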

not able to send message to remote Tomcat and ActiveMQ queue

I have a set of JMS queues configured on a Tomcat + ActiveMQ server. I have created a Resource for the connection factory (pointing the broker to tcp://localhost:61616) and a Resource for the queue inside GlobalNamingResources in server.xml, and then declared these two resources as ResourceLink entries in context.xml.
Now I'm trying to connect to this queue from another Tomcat server and send a message. When doing a JNDI lookup for the connection factory, I get a NamingException: name not bound in this context, for the ConnectionFactory JNDI name.
I tried to achieve it like this:
I set the following properties:
Provider_URL = http://localhost:8080
INITIAL_CONTEXT_FACTORY = org.apache.naming.java.javaURLContextFactory
and then created a new InitialContext with those property values and looked up the JNDI name of the connection factory.
But it's not working.
I tried another option: getting a context from ActiveMQ directly, using ActiveMQInitialContextFactory as the INITIAL_CONTEXT_FACTORY and Provider_URL = tcp://localhost:61616,
with a simple jndi.properties file on the classpath of the client Tomcat server, as mentioned in other posts. And it works well.
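
That working approach looks roughly like this in code (a sketch; host, port, and queue names are placeholders). It works because ActiveMQ's initial context factory resolves names on the client side from these properties; Tomcat's own JNDI tree, by contrast, is purely in-process and exposes no remote protocol:

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.naming.Context;
import javax.naming.InitialContext;

public class ActiveMqDirectLookup {
    public static void main(String[] args) throws Exception {
        // Equivalent of the jndi.properties file mentioned above.
        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        props.put(Context.PROVIDER_URL, "tcp://localhost:61616");
        // Client-side queue registration: "queue.<jndi-name> = <physical-name>".
        props.put("queue.jms/MyQueue", "MY.QUEUE");

        Context ctx = new InitialContext(props);
        // "ConnectionFactory" is the default name bound by ActiveMQ's context.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Destination dest = (Destination) ctx.lookup("jms/MyQueue");

        Connection conn = cf.createConnection();
        conn.close();
        ctx.close();
    }
}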
But I do not like this approach. When I have configured the ActiveMQ connection factory and queue as resources in Tomcat, I should be able to do the JNDI lookup through Tomcat rather than directly against ActiveMQ. I do not want to give the ActiveMQ host:port to all my clients.
I assume there must be something incorrect in my Tomcat server configuration, such that its connection factory/queue JNDI names are not available for lookup from outside.
I have struggled a lot today. Can some expert point me to a possible mistake?
Thanks in advance.

WebSphere JNDI lookup in non managed threads

I have a Java EE application (actually, it is an Apache Camel application) deployed on WebSphere Application Server 7.
My application consumes service requests from web services (threads started by the servlet container in WAS) and from JMS queues (not SIBus, but WebSphere MQ, if that matters). For the JMS listeners, Camel (or perhaps the underlying Spring framework) starts its own threads (more or less plain Java threads) to deal with JMS requests.
I also have a transactional database attached to the application. So, in Spring, I have something like this defined to grab a transaction manager (probably WebSphere's built-in JTA):
<tx:annotation-driven/>
My problem is that I get an error like this when a Camel/JMS event is triggered in the application:
org.apache.openjpa.persistence.PersistenceException: TransactionManager not found in JNDI under name java:comp/websphere/ExtendedJTATransaction
It seems that threads not initiated by the container itself cannot do JNDI lookups correctly. Is there a way around this issue?
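
One commonly suggested workaround (an assumption on my part, not something confirmed in this thread) is to run the listener threads on WebSphere's WorkManager, so that they are container-managed and can perform component JNDI lookups; Spring ships a CommonJ adapter for exactly this:

import javax.naming.NamingException;

import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.scheduling.commonj.WorkManagerTaskExecutor;

public class WasListenerThreads {
    // Sketch: "wm/default" is the default WorkManager JNDI name in WAS; use
    // whatever WorkManager is configured in the admin console.
    public DefaultMessageListenerContainer listenerContainer() throws NamingException {
        WorkManagerTaskExecutor executor = new WorkManagerTaskExecutor();
        executor.setWorkManagerName("wm/default");
        executor.afterPropertiesSet(); // looks up the WorkManager in JNDI

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        // JMS listener threads now come from the container-managed WorkManager.
        container.setTaskExecutor(executor);
        // Connection factory, destination, and message listener would be set
        // here, as in the existing Camel/Spring configuration.
        return container;
    }
}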

Weblogic JMS server configuration: JMS module to talk to JMS Server

I am fairly new to JMS configuration in WebLogic.
Here is what I am trying to do.
We have multiple JVMs of our application in a single WebLogic domain. We want to have a JMS server installed on, say, one JVM and have the rest of the JVMs refer to that first JMS server.
So, the configuration is:
JVM1: JMS Server is installed
JVM2: JMS Module installed
Now I need to configure JVM2 to talk to the JMS server on JVM1. How do I do that?
This is on WebLogic 11g.
I suggest going through the basics of WebLogic 11g JMS configuration and then taking a look at this good guide from the Oracle documentation. I know there is a lot of information there, but in the long run it is better to know what you are doing rather than just copying someone else's configuration.
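
As a rough sketch of where that configuration ends up (host, port, and JNDI names below are placeholders): once the connection factory and destination are created and targeted, code on JVM2 can reach them over t3 by pointing at JVM1; if both servers are in the same cluster, the names are typically also visible through JVM2's own local InitialContext without a provider URL:

import java.util.Hashtable;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.naming.Context;
import javax.naming.InitialContext;

public class Jvm2ToJvm1Lookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        // Hypothetical listen address of the managed server hosting the JMS server.
        env.put(Context.PROVIDER_URL, "t3://jvm1-host:7011");

        Context ctx = new InitialContext(env);
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyCF");
        Queue queue = (Queue) ctx.lookup("jms/MyQueue");
        ctx.close();
    }
}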

How to configure ehcache.xml to use JMS + ActiveMQ + Tomcat, and enable Tomcat to start even if the JMS server is down?

I am using Ehcache with JMS replication and ActiveMQ as the JMS server.
It is currently used to cache database results, with Hibernate 3.6.7.
My cacheManager is configured as below:
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.jms.JMSCacheManagerPeerProviderFactory"
    properties="initialContextFactoryName=br.com.sonner.iss.jms.ExampleActiveMQInitialContextFactory,
        providerURL=failover:tcp://localhost:6969,
        userName=XXX,
        password=YYYY,
        replicationTopicConnectionFactoryBindingName=topicConnectionFactory,
        replicationTopicBindingName=ehcache,
        getQueueConnectionFactoryBindingName=queueConnectionFactory,
        getQueueBindingName=ehcacheGetQueue"
    propertySeparator=","
/>
Replication and failover are working as they should: if the JMS server goes down, replication stops, and it starts again once the JMS server is back up.
The only problem I am facing is that if the JMS server is down at the moment my app starts up, the app does not start.
This is probably because when Spring starts it reads the Hibernate configuration, and while Hibernate is loading, the app just freezes waiting for the broker.
Does anyone have a workaround for this issue?
I am wondering if there is a way to use the Spring jmsTemplate configuration to configure the ehcache.xml JNDI entries.
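
One possible mitigation (an assumption, not something confirmed in this thread): tune the failover transport so that the initial connection attempt fails fast instead of blocking startup indefinitely. The same options can be appended to the providerURL in ehcache.xml; as a standalone Java sketch:

import org.apache.activemq.ActiveMQConnectionFactory;

public class FailFastConnection {
    public static void main(String[] args) throws Exception {
        // startupMaxReconnectAttempts: give up after 3 attempts when the broker
        // is unreachable at startup, instead of retrying forever;
        // timeout: fail blocked operations after 3000 ms.
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
                "failover:(tcp://localhost:6969)?timeout=3000&startupMaxReconnectAttempts=3");
        cf.createConnection().close();
    }
}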
