How to connect multiple Redis instances through Spring Data Redis?

I am trying to connect to multiple Redis instances via Spring, but I did not find any documentation for it.
Here is how I am using it currently. I am using Jedis as the client, and I plan to stick with Jedis because I might need Sentinel support.
<bean id="jedisConnFactory"
class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory">
<property name ="hostName" value ="localhost"/>
<property name="port" value="6379" />
</bean>
<bean id="stringRedisSerializer"
class="org.springframework.data.redis.serializer.StringRedisSerializer" />
<!-- redis template definition -->
<bean id="redisTemplate" class="org.springframework.data.redis.core.RedisTemplate"
p:connection-factory-ref="jedisConnFactory"
p:keySerializer-ref="stringRedisSerializer"
p:hashKeySerializer-ref="stringRedisSerializer"
p:valueSerializer-ref="stringRedisSerializer" />
I want to add multiple Redis instances to the connection pool, something like:
<property name ="hosts" value ="localhost:6379,localhost:6380"/>

After researching, I found that there is currently no support for client-side partitioning in spring-data-redis.
In the future, Redis partitioning will move to Redis Cluster.
For now, the best way to use partitioning with spring-data-redis is to run twemproxy and point the JedisConnectionFactory host and port at the twemproxy instance.
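A minimal Java sketch of that setup (the twemproxy host and port below are assumptions; point them at wherever your proxy actually listens):

import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

public class TwemproxyConnectionFactory {
    // Point the single JedisConnectionFactory at twemproxy; the proxy then
    // shards requests across the Redis instances configured behind it.
    public static JedisConnectionFactory create() {
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.setHostName("localhost"); // host running twemproxy (assumption)
        factory.setPort(22121);           // twemproxy listen port, not a Redis port (assumption)
        factory.afterPropertiesSet();     // initialise the underlying Jedis pool
        return factory;
    }
}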

If you're looking for JedisSentinelPool support, have a look at does-spring-data-redis-1-3-2-release-support-jedissentinelpool-of-jedis.
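If you are on a Spring Data Redis version that already ships sentinel support (1.4 and later), a minimal sketch looks like this (the master name "mymaster" and the sentinel address are assumptions):

import org.springframework.data.redis.connection.RedisSentinelConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

public class SentinelConnectionFactory {
    // Connect through Redis Sentinel instead of a fixed host/port.
    public static JedisConnectionFactory create() {
        RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
                .master("mymaster")            // monitored master name (assumption)
                .sentinel("localhost", 26379); // one sentinel node (assumption)
        JedisConnectionFactory factory = new JedisConnectionFactory(sentinelConfig);
        factory.afterPropertiesSet();
        return factory;
    }
}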

Related

Spring Cloud AWS: optional cache manager

Spring's cache configuration allows falling back to no cache by using a CompositeCacheManager with the fallbackToNoOpCache property set to true. How can this be used with the spring-cloud-aws cache manager so that, when a non-existing cache cluster is specified, the composite cache manager falls back to no cache? With an example configuration like this:
<aws-cache:cache-manager>
<aws-cache:cache-cluster name="CacheCluster" />
</aws-cache:cache-manager>
the application just won't start when there's no cluster named CacheCluster configured. When a CompositeCacheManager is configured like this:
<aws-cache:cache-manager id="elasticacheManager">
<aws-cache:cache-cluster name="CacheCluster" />
</aws-cache:cache-manager>
<bean id="cacheManager" class="org.springframework.cache.support.CompositeCacheManager">
<property name="cacheManagers">
<list>
<ref bean="elasticacheManager" />
</list>
</property>
<property name="fallbackToNoOpCache" value="true"/>
</bean>
with a non-existing cluster named CacheCluster, the application fails to start, complaining: "No bean named 'elasticacheManager' is defined".
Is there a way to create an AWS cache manager programmatically and use something like a FactoryBean for this?
Currently Spring Cloud AWS does not support the configuration of a fallback cache. I will add it to our backlog as a feature request. In the meantime you could use the same workaround I did in the reference application using spring profiles (see ReferenceApplication.java).
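A rough sketch of that profile-based workaround (the profile name "aws" is an assumption): keep the <aws-cache:cache-manager> definition in a profile that is only active on AWS, and register a no-op cache manager for every other profile so the context can still start.

import org.springframework.cache.CacheManager;
import org.springframework.cache.support.NoOpCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class FallbackCacheConfig {

    // Active whenever the "aws" profile is not enabled: caching is silently
    // disabled instead of the context failing on a missing ElastiCache cluster.
    @Bean
    @Profile("!aws")
    public CacheManager cacheManager() {
        return new NoOpCacheManager();
    }
}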

Hazelcast with spring namespace - init the node when context is loaded

I have a Hazelcast instance defined using the Hazelcast namespace, with a map in it. I am also using the Spring cache abstraction to define a cacheManager.
<bean name="siteAdminPropertyPlaceHolderConfigurer"
class="org.sample.SiteAdminPropertyPlaceHolderConfigurer">
<property name="order" value="1000"/>
<!-- last one-->
</bean>
<!-- hazelcast cache manager -->
<hz:hazelcast id="instance" lazy-init="true">
<hz:config>
<hz:group name="${HAZEL_GROUP_NAME}" password="${HAZEL_GROUP_PASSWORD}"/>
<hz:network port="${HAZEL_NETWORK_PORT}" port-auto-increment="true">
<hz:join>
<hz:multicast enabled="${HAZEL_MULTICAST_ENABLED}"
multicast-group="224.2.2.3"
multicast-port="54327"/>
<hz:tcp-ip enabled="${HAZEL_TCP_ENABLED}">
<hz:members>${HAZEL_TCP_MEMBERS}</hz:members>
</hz:tcp-ip>
</hz:join>
</hz:network>
<hz:map name="oauthClientDetailsCache"
backup-count="1"
max-size="0"
eviction-percentage="30"
read-backup-data="true"
eviction-policy="NONE"
merge-policy="com.hazelcast.map.merge.PassThroughMergePolicy"/>
</hz:config>
</hz:hazelcast>
<bean id="hazelcastCacheManager" class="com.hazelcast.spring.cache.HazelcastCacheManager" lazy-init="true"
depends-on="instance">
<constructor-arg ref="instance"/>
</bean>
The problem is that this Spring context is also used by other tools we have besides the server; Hazelcast starts listening on its port and the tool never exits.
I tried disabling all network joins (enabled=false), thinking I would enable them programmatically only when the server starts, but it does not work: Hazelcast still starts.
I don't want to give up the Spring namespace, since it is very convenient for developers to define new maps (Spring caches), and I want as little Hazelcast code in there as possible.
Any idea how to achieve this?
Thanks,
Shlomi
I didn't find a way to do this other than telling Hazelcast to shut down at the end of each tool run.
I also moved the definition above into a separate XML context file so it would not be loaded by the tools (at least not all of them).
Hazelcast.shutdownAll();
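A minimal sketch of that workaround (the context file name is an assumption): each tool shuts Hazelcast down in a finally block so the JVM can actually exit.

import com.hazelcast.core.Hazelcast;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class ToolMain {
    public static void main(String[] args) {
        ClassPathXmlApplicationContext ctx =
                new ClassPathXmlApplicationContext("tool-context.xml"); // file name is an assumption
        try {
            // ... run the tool using beans from the context ...
        } finally {
            ctx.close();
            Hazelcast.shutdownAll(); // stop all Hazelcast instances so the ports are released
        }
    }
}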

Spring + Hibernate Search dynamic configuration

I'm currently trying to configure Hibernate Search via Spring across three machines in order to use a JMS distributed index. Due to the way we deploy our software, I have to use the same configuration on all three machines, but I need a way to designate one of them as the JMS master.
Currently hibernate is being configured via Spring using the following bean declaration:
<bean class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"
id="productSessionFactory">
<property name="dataSource">
<ref local="productDataSource"/>
</property>
<property name="configLocation">
<value>classpath:hibernate.cfg.xml</value>
</property>
<property name="entityInterceptor" ref="builderInterceptor"/>
<property name="eventListeners">
<map key-type="java.lang.String" value-type="java.lang.Object">
<entry key="save" value-ref="saveEventListener"/>
</map>
</property>
</bean>
On one of the three machines I need to set the property hibernate.search.default.directory_provider to filesystem-master and on the other two I need to set it to filesystem-slave.
I have the ability to set flags on the individual machines to identify which machine should be the master, but because all the configuration is XML, I have no way to add logic that sets the parameters correctly.
Is there a way to set this parameter programmatically while leaving the rest of the configuration alone?
Thanks!
A programmatic way is generally possible, but I am not sure exactly how you do that in Spring. Instead of putting your properties into a config file, you would have to build the properties dynamically (or at least partly dynamically) and pass them to the AnnotationSessionFactoryBean. If I am not mistaken, there are hooks in the Spring SPI which should allow you to do that, e.g. BeanDefinitionRegistryPostProcessor.
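A rough sketch of that idea (the flag and the way it is read are assumptions; the bean name matches the productSessionFactory shown above): a BeanFactoryPostProcessor injects the directory_provider property into the session factory definition before it is instantiated.

import java.util.Properties;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;

public class SearchRolePostProcessor implements BeanFactoryPostProcessor {

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
        // Per-machine flag, e.g. passed as -Dsearch.master=true on the master box (assumption).
        boolean master = Boolean.getBoolean("search.master");

        Properties props = new Properties();
        props.setProperty("hibernate.search.default.directory_provider",
                master ? "filesystem-master" : "filesystem-slave");

        // Add the property to the productSessionFactory definition; it is applied
        // on top of whatever hibernate.cfg.xml already defines.
        BeanDefinition sessionFactory = beanFactory.getBeanDefinition("productSessionFactory");
        sessionFactory.getPropertyValues().addPropertyValue("hibernateProperties", props);
    }
}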
The other approach would be to write your own DirectoryProvider. Have a look at org.hibernate.search.store.impl.FSMasterDirectoryProvider and org.hibernate.search.store.impl.FSSlaveDirectoryProvider and write a provider which can act as slave or master depending on the flag you can read on the machine.

Spring JMS Connections: performance considerations

I need to send messages to and receive messages from several topics hosted on a single JMS server.
I would like to use JmsTemplate for sending and MessageListenerContainer for registering asynchronous listeners.
My configuration looks like this:
<bean id="jndiTemplate" class="org.springframework.jndi.JndiTemplate">
<property name="environment">
<props>
<prop key="java.naming.factory.initial">xxx</prop>
<prop key="java.naming.provider.url">yyy</prop>
</props>
</property>
</bean>
<bean id="connectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref ="jndiTemplate"/>
<property name="jndiName" value="TopicConnectionFactory"/>
</bean>
<bean id="singleConnectionFactory" class="org.springframework.jms.connection.SingleConnectionFactory">
<constructor-arg ref="connectionFactory"/>
</bean>
<bean id="tosJmsTemplate" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="singleConnectionFactory"/>
<property name="destinationResolver" ref="destinationResolver"/>
<property name="pubSubDomain" value="true"/>
</bean>
As far as I understand, the singleConnectionFactory, by always returning the same connection instance, helps reduce the overhead of creating and closing a connection each time a jmsTemplate needs (for example) to send or receive a message (as it would with a normal ConnectionFactory).
My first question is: if I create multiple jmsTemplate(s), can they all share a ref to a singleConnectionFactory? Or do they have to receive a distinct instance each (singleConnectionFactory1, singleConnectionFactory2, etc)?
Reading the API for SingleConnectionFactory, I found this:
Note that Spring's message listener containers support the use of a shared Connection
within each listener container instance. Using SingleConnectionFactory in combination only really makes sense for sharing a single JMS Connection across multiple listener containers.
This sounds a bit cryptic to me. As far as I know, it is possible to register only one Listener per MessageListenerContainer, so I don't understand to what extent a connection is shared.
Suppose I want to register N Listeners: I will need to repeat N times something like this:
<bean
class="org.springframework.jms.listener.SimpleMessageListenerContainer">
<property name="connectionFactory" ref="connectionFactory" />
<property name="destinationName" value="destX" />
<property name="messageListener" ref="listener1outOfN" />
</bean>
How many Connections are created in such a case from connectionFactory? One for each ListenerContainer, or just a pool of Connections? And what if I give the SimpleMessageListenerContainer-s a ref to singleConnectionFactory instead?
What is the best approach (from the point of view of the performances, of course) in this case?
if I create multiple jmsTemplate(s), can they all share a ref to a singleConnectionFactory?
Yes, this is fine. The javadoc for SingleConnectionFactory says:
According to the JMS Connection model, this is perfectly thread-safe (in contrast to e.g. JDBC).
JMS Connection objects are thread-safe and can be used by multiple threads concurrently. So there's no need to use multiple SingleConnectionFactory beans.
As far as I know, it is possible to register only 1 Listener per MessageListenerContainer, so I don't understand to what extent is a connection shared.
This is true; however, each MessageListenerContainer can have multiple threads processing messages concurrently, all using the same MessageListener object. The MessageListenerContainer will use a single, shared Connection for all of these threads (unless configured to do otherwise).
Note that Spring's message listener containers support the use of a shared Connection within each listener container instance. Using SingleConnectionFactory in combination only really makes sense for sharing a single JMS Connection across multiple listener containers.
In other words, if all you have is a single MessageListenerContainer, then SingleConnectionFactory is unnecessary, since the single connection is managed internally by the MessageListenerContainer. If you have multiple listener containers and want them all to share a connection, then SingleConnectionFactory is required. Also, if you want to share a connection between listening and sending, as you do, then SingleConnectionFactory is necessary.
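A minimal Java sketch of that sharing (the destination names and the listeners are assumptions): one SingleConnectionFactory backs both the sending JmsTemplate and every listener container, so a single JMS Connection serves the whole application.

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.springframework.jms.connection.SingleConnectionFactory;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.listener.SimpleMessageListenerContainer;

public class SharedConnectionSetup {

    public static void wire(ConnectionFactory jndiConnectionFactory,
                            MessageListener listenerA, MessageListener listenerB) {
        SingleConnectionFactory shared = new SingleConnectionFactory(jndiConnectionFactory);

        // Sending side: reuses the shared Connection instead of opening one per send.
        JmsTemplate template = new JmsTemplate(shared);
        template.setPubSubDomain(true);

        // Receiving side: two containers, each with its own sessions,
        // but both drawing on the same underlying Connection.
        SimpleMessageListenerContainer c1 = new SimpleMessageListenerContainer();
        c1.setConnectionFactory(shared);
        c1.setPubSubDomain(true);
        c1.setDestinationName("topicA"); // assumption
        c1.setMessageListener(listenerA);
        c1.afterPropertiesSet();
        c1.start();

        SimpleMessageListenerContainer c2 = new SimpleMessageListenerContainer();
        c2.setConnectionFactory(shared);
        c2.setPubSubDomain(true);
        c2.setDestinationName("topicB"); // assumption
        c2.setMessageListener(listenerB);
        c2.afterPropertiesSet();
        c2.start();
    }
}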

Validating Connection Before Handing over to WebApp in ConnectionPooling

I have connection pooling implemented in Spring using an Oracle data source. We are currently facing an issue where connections become invalid after a period of time (maybe Oracle is dropping those idle connections after a while). Here are my questions:
Can the Oracle database be configured to drop idle connections automatically after a specific period of time? Since we expect those connections to lie idle for a while, if any such configuration exists, that may be what is happening.
In our connection pooling properties in Spring we didn't set the "validateConnection" property. I understand that it validates the connection before handing it over to the web application. But does that mean that if a connection passes the validateConnection test, it will always connect to the database correctly? I ask because I read about the following problem here:
http://forum.springsource.org/showthread.php?t=69759
If validateConnection doesn't go the whole nine yards of ensuring that a connection is valid, is there another option like "testOnBorrow" in DBCP, which runs a test query to ensure the connection is active before handing it over to the webapp?
I'll be grateful if you could provide answers to one ore more queries listed above.
Cheers
You don't say what application server you are using, or how you are configuring the datasource, so I can't give you specific advice.
Connection validation often sounds like a good idea, but you have to be careful with it. For example, we once used it in our JBoss app servers to validate connections in the pool before handing them to the application. This Oracle-proprietary mechanism used the ping() method on the Oracle JDBC driver, which checks that the connection is still alive. It worked fine, but it turns out that ping() executes "select 'x' from dual" on the server, which is a surprisingly expensive query when it's run dozens of times per second.
So the moral is, if you have a high-traffic server, be very careful with connection validation, it can actually bring your database server to its knees.
As for DBCP, it has the ability to validate connections as they're borrowed from the pool, as well as when they're returned to the pool, and you can tell it what SQL to send to the database to perform this validation. However, if you're not using DBCP for your connection pooling, that's not much use to you. C3P0 does something similar.
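If you do stay on DBCP, a minimal sketch of that validation (the URL, credentials and validation query are placeholders/assumptions):

import org.apache.commons.dbcp.BasicDataSource;

public class ValidatedDataSource {

    public static BasicDataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.driver.OracleDriver");
        ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/SERVICE"); // placeholder
        ds.setUsername("user id");       // placeholder
        ds.setPassword("user password"); // placeholder

        // Validate each connection as it is borrowed from the pool; stale
        // connections are discarded and replaced before reaching the webapp.
        ds.setTestOnBorrow(true);
        ds.setValidationQuery("SELECT 1 FROM DUAL"); // cheap Oracle validation query

        return ds;
    }
}

As noted above, even a cheap validation query runs very often on a busy pool, so measure the impact before enabling it everywhere.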
If you're using an app server's data source mechanism, then you have to find out if you can configure that to validate connections, and that's specific to your server.
One last thing: Spring isn't actually involved here. Spring just uses the DataSource that you give it, it's up to the DataSource implementation to perform connection validation.
Configuration of data source "was" as follows:
<bean id="datasource2"
class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName">
<value>oracle.jdbc.driver.OracleDriver</value> <!-- the JDBC driver class, not the pool class -->
</property>
<property name="url">
<value>ORACLE URL</value>
</property>
<property name="username">
<value>user id</value>
</property>
<property name="password">
<value>user password</value>
</property>
<property name="initialSize" value="5"/>
<property name="maxActive" value="20"/>
</bean>
I have changed it to:
<bean id="connectionPool1" class="oracle.jdbc.pool.OracleDataSource" destroy-method="close">
<property name="connectionCachingEnabled" value="true" />
<property name="URL">
<value>ORACLE URL</value>
</property>
<property name="user">
<value>user id</value>
</property>
<property name="password">
<value>user password</value>
</property>
<property name="connectionCacheProperties">
<value>
MinLimit:1
MaxLimit:5
InitialLimit:1
ConnectionWaitTimeout:120
InactivityTimeout:180
ValidateConnection:true
</value>
</property>
</bean>
