ActiveMQ embedded on cluster instance - spring

I'm implementing a project with ActiveMQ embedded in a cluster instance, with a shared DB schema, in MuleSoft 3.5.1.
The broker is configured with a Spring bean:
<spring:beans>
    <spring:bean class="org.apache.activemq.xbean.BrokerFactoryBean" id="broker">
        <spring:property value="classpath:testActivemq.xml" name="config" />
        <spring:property value="true" name="start" />
    </spring:bean>
</spring:beans>
I have tested the system on a local standalone Mule server to simulate the cluster.
The problem is that on deploy, the first broker grabs an exclusive lock on a table to ensure that no other ActiveMQ broker can access the database at the same time. The second broker then never finishes its deploy process, and when I try to undeploy the first broker, the server blocks.
How can I resolve this issue?

What you are looking at is ActiveMQ's built-in master/slave functionality. To start multiple brokers in the same VM, you need to point their storage locations to different databases (if using JDBC storage) or filesystem locations (if using KahaDB or LevelDB); see the sketch after the links below.
See the following on how to do that:
JDBC http://activemq.apache.org/jdbc-master-slave.html
KahaDB http://activemq.apache.org/kahadb.html
LevelDB http://activemq.apache.org/leveldb-store.html
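For example, a minimal sketch of the KahaDB variant: give each embedded broker its own name and its own storage directory in its testActivemq.xml (the broker names and paths here are illustrative, not taken from your setup):

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
    <persistenceAdapter>
        <!-- each broker must point at its own directory, otherwise the
             second broker blocks waiting on the store lock -->
        <kahaDB directory="activemq-data/brokerA/kahadb" />
    </persistenceAdapter>
</broker>

The second broker would use brokerName="brokerB" and directory="activemq-data/brokerB/kahadb". With JDBC storage, the equivalent is pointing each broker's persistence adapter at a different database; keeping the shared database is only what you want if JDBC master/slave failover is the goal.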

Related

How Ignite cluster works

I'm a novice with Apache Ignite.
My requirement is to replace the Coherence cache with an Ignite cache.
As per my understanding, to make the cache distributed I used the cache configuration below.
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
    <property name="addresses">
        <list>
            <!-- In distributed environment, replace with actual host IP address. -->
            <value>1xx.xxx.xx.xxx</value>
            <value>1xx.xxx.xx.xxx</value>
            <value>1xx.xxx.xx.xxx</value>
            <value>1xx.xxx.xx.xxx</value>
            <value>myapplicationport</value>
        </list>
    </property>
</bean>
With the same configuration, I integrated Spring Boot with Ignite, developed a microservice, and deployed it on all 4 servers.
I could see the microservice up and running on only one server, while on the other servers Ignite had stopped.
I even tried providing only the IP addresses, without the port.
Case 2: I deployed on only one server, with one IP in the cache configuration; the cache stopped within 2 days of deployment.
What should be done to keep the cache up at all times? This matters because we are also planning to migrate a DB schema into the cache.
Please help with this.
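For reference, a minimal sketch of a static discovery configuration (the IPs and port range are placeholders; 47500..47509 is Ignite's default discovery port range). Note that each entry in the addresses list is expected to be a host or host:port string, so a bare port on its own line, as in the configuration above, does not look like a valid address entry:

<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
            <property name="addresses">
                <list>
                    <!-- one host:port-range entry per server node -->
                    <value>10.0.0.1:47500..47509</value>
                    <value>10.0.0.2:47500..47509</value>
                </list>
            </property>
        </bean>
    </property>
</bean>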

Are there any benefits in using the "failover:" protocol with the "vm:" transport in ActiveMQ?

According to this answer, there are benefits to using the "failover" protocol with a "tcp" transport, even with a single address.
In addition, according to the ActiveMQ documentation, the following applies (emphasis mine):
If a JMS broker goes down, ActiveMQ can automatically reconnect to an available JMS broker using the failover: protocol. Not only does this automatically reconnect, it will also resume any temporary destinations, sessions, producers and most importantly consumers.
Does this also apply when using the "vm" transport?
We are seeing frequent issues with queue consumers that stop picking up messages while the queue fills up, and we have not found a fix for this yet. This is with ActiveMQ v5.6.0 - we're upgrading to v5.14.5 at the moment, but want to explore additional options, too.
Our current Spring configuration for the ActiveMQConnectionFactory looks like this:
<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory"
      depends-on="amqEmbeddedBroker">
    <property name="brokerURL" value="vm://localhost" />
    <property name="watchTopicAdvisories" value="false" />
</bean>
Would changing the URL from vm://localhost to failover:(vm://localhost) provide any benefit in this case, i.e. safeguarding against connections being closed for whatever reason? When changing the URL to include the failover: part, I can see that an instance of FailoverTransport is created, but does it provide any benefit in the case of the vm transport?
Failover will try to reconnect if the connection fails. So if you're doing an operation that would normally fail with an exception on connection failure, you won't see that exception; instead the client will silently try to reconnect, perhaps issuing a log entry that it is doing so. So if your in-memory broker goes dead, the client stays silent.
VM connections do not fail because of network issues, so you may want to investigate further. But upgrading seems like the first step.
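If you do wrap the vm transport in failover, it may be worth bounding the reconnection attempts so that a dead embedded broker surfaces as an error instead of an indefinite silent hang. A sketch using standard failover transport options (the attempt counts here are arbitrary):

<!-- bounded reconnects: fail fast instead of retrying forever -->
<property name="brokerURL"
          value="failover:(vm://localhost)?maxReconnectAttempts=5&amp;startupMaxReconnectAttempts=3" />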

How to configure Spring XD to connect to remote GemFire

I have configured the Spring XD application context.xml to connect to a remote GemFire DB, but I am unable to connect to the remote DB; it connects to the local GemFire that comes as part of the Spring XD installation. Can anyone advise what might be wrong?
Configuration to disable the local GemFire and connect to the remote one, in /spring-xd-1.2.0.RELEASE/xd/config/modules/modules.yml:
gemfire:
  useLocator: true
  host: remote-ip-address
  port: 44444
Configuration for the remote GemFire connection, in spring-module.xml:
<bean id="template" class="org.springframework.data.gemfire.GemfireTemplate">
    <property name="region" ref="restaurants" />
</bean>

<util:properties id="gemfire-props">
    <prop key="log-level">warning</prop>
</util:properties>

<gfe:cache properties-ref="gemfire-props" />
<gfe:cache-server bind-address="localhost" port="44444" />

<gfe:replicated-region id="restaurants" />
When we deploy custom modules and run them from the Spring XD shell, code that accesses and stores objects through the GemFire template saves them in the local GemFire instead of the remote GemFire database. Can anyone guide or suggest the right way of configuring the GemFire DB?
Regards,
Cleophus P.
XD modules that access GemFire, e.g., the gemfire source and gemfire sink, use a client-server configuration.
You have a remote cache server and the module is a client. Connecting via a locator requires all servers and clients in the grid to share the locator addresses. Assuming your GemFire server installation already has one or more locators running, the module context should contain a client-cache configured with the same locator addresses as the remote cache server. If you are not familiar with GemFire client-server topology, I suggest you review the product documentation and get a simple standalone example running against your cache server before attempting to deploy your XD modules.
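A minimal sketch of what that client-side configuration could look like in the module's spring-module.xml, replacing the cache/cache-server definitions shown in the question (the locator host is a placeholder, and 10334 is only GemFire's default locator port):

<gfe:pool id="gemfirePool">
    <gfe:locator host="remote-ip-address" port="10334" />
</gfe:pool>

<gfe:client-cache pool-name="gemfirePool" />

<!-- a PROXY client region holds no local state; all operations go to the server -->
<gfe:client-region id="restaurants" pool-name="gemfirePool" shortcut="PROXY" />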

Amazon ec2 DB connection between two instances with hibernate

I have two EC2 instances. One is a web service with a DB, and the other is a simple web module. The web service connects to the DB locally, while I want the web module instance to connect to the web service instance's DB. Both run a Java/Hibernate stack inside Tomcat, with MySQL as the database.
I have created one security group and assigned it to both servers. The inbound rule for the DB is as below:
MySQL - TCP protocol - port 3306 - source set to the Group ID of this same security group.
The Hibernate configuration for the web module looks like this:
<property name="connection.url">jdbc:mysql://<web service server ip>:3306/<db name></property>
<property name="connection.username">root</property>
<property name="connection.password">password</property>
<property name="connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="hibernate.dialect">org.hibernate.dialect.MySQL5Dialect</property>
But I am not able to connect to the DB. If I change the server configuration to accept connections from any IP, then I can connect; with the security group rule, it fails.
Any pointers?
I think your DB server is set up to refuse external connections.
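A typical check, assuming a default MySQL 5.x install (the config file path varies by distribution): MySQL often binds only to localhost, and user grants may be limited to localhost as well, so connections from the other instance are refused even when the security group allows them.

# /etc/mysql/my.cnf (or /etc/my.cnf, depending on the distro)
[mysqld]
# listen on all interfaces instead of only 127.0.0.1
bind-address = 0.0.0.0

Then make sure the MySQL user is allowed to connect from a remote host, e.g. GRANT ALL PRIVILEGES ON mydb.* TO 'root'@'%' IDENTIFIED BY 'password'; (the database name is a placeholder). Also note that security-group-based source rules are matched against the instance's private IP, so the connection URL should use the web service instance's private address rather than its public one.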
Solution

Configuring resource in Tomcat's context.xml to access remote Weblogic JMS queues

What I intend to do is access remote queues in Oracle WebLogic JMS (version 10.3.4) from a Spring application deployed in Tomcat 7.
For this I am trying to configure a Resource (e.g. JMS connection factory, queues, etc.) in Tomcat's context.xml file, then access this resource using a JNDI lookup in the Spring configuration file and provide it to the necessary beans. I have already created the connection factory and queues in WebLogic JMS, and they can be accessed using JNDI names.
I am able to make this work successfully when using ActiveMQ instead of WebLogic JMS. However, with WebLogic JMS, I am facing an issue configuring the Resource element: I am not sure what attributes to use on the Resource tag when connecting to Oracle WebLogic JMS.
When working with ActiveMQ the resource element config looks like below
<Resource name="jms/MyConnectionFactory" auth="Container"
          type="org.apache.activemq.ActiveMQConnectionFactory"
          factory="org.apache.activemq.jndi.JNDIReferenceFactory"
          description="JMS Queue Connection Factory"
          brokerURL="tcp://localhost:61616" brokerName="MyActiveMqBroker"/>
I am struggling to find the configuration to use with Oracle WebLogic JMS. I have gone through the documentation to see how to do it, but with no luck.
Any help or pointers would be highly appreciated.
Thanks.
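One commonly used alternative, in case it helps: bypass Tomcat's context.xml and look up WebLogic's JNDI tree directly from Spring, using WebLogic's initial context factory. This requires a WebLogic client JAR such as wlthint3client.jar on Tomcat's classpath; the host, port, and JNDI name below are placeholders:

<bean id="wlJndiTemplate" class="org.springframework.jndi.JndiTemplate">
    <property name="environment">
        <props>
            <prop key="java.naming.factory.initial">weblogic.jndi.WLInitialContextFactory</prop>
            <prop key="java.naming.provider.url">t3://weblogic-host:7001</prop>
        </props>
    </property>
</bean>

<!-- resolves the connection factory registered in WebLogic JMS -->
<bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiTemplate" ref="wlJndiTemplate" />
    <property name="jndiName" value="jms/MyWeblogicConnectionFactory" />
</bean>

Queues can be looked up the same way with additional JndiObjectFactoryBean definitions.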
