I am trying to access two Elasticsearch server instances through RestHighLevelClient, but I can't reach them through this array object:
HttpHost[] httpHost = new HttpHost(hostName[i...], Integer.parseInt(hostName[i..]), "http");
In hostName I have two values, which I pass into restHighLevelClient = new RestHighLevelClient(RestClient.builder(httpHost));
I'm also unable to access the second array instance.
Can I have two configuration classes, and if so, how do I create two instances of RestHighLevelClient? Or is it possible through two bean instances, and if so, how?
We need to have two different RestHighLevelClient instances.
Kindly let me know if more information is needed.
Code:
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.beans.factory.config.AbstractFactoryBean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class AppElasticSearchConfiguration extends AbstractFactoryBean<RestHighLevelClient> {
private static final Logger LOG = LoggerFactory.getLogger(AppElasticSearchConfiguration.class);
@Value("${application.elasticsearch.host}")
private String[] hostName;
private RestHighLevelClient restHighLevelClient;
@Override
public void destroy() {
try {
if (restHighLevelClient != null) {
restHighLevelClient.close();
}
} catch (final Exception e) {
LOG.error("Error closing ElasticSearch client: ", e);
}
}
@Override
public Class<RestHighLevelClient> getObjectType() {
return RestHighLevelClient.class;
}
@Override
public boolean isSingleton() {
return false;
}
@Override
public RestHighLevelClient createInstance() {
return buildClient();
}
private RestHighLevelClient buildClient() {
try {
HttpHost[] httpHost = null;
if(hostName!=null) {
httpHost = new HttpHost[hostName.length];
for (int i = 0; i < httpHost.length; i++) {
httpHost[i] = new HttpHost(hostName[i].split(":")[0],
Integer.parseInt(hostName[i].split(":")[1]), "http");
}
}
restHighLevelClient = new RestHighLevelClient( RestClient.builder(httpHost));
} catch (Exception e) {
LOG.error(e.getMessage());
}
return restHighLevelClient;
}
//public RestHighLevelClient getAppRestHighLevelClient() { return restHighLevelClient; }
}
Hi, just pass the second instance as another HttpHost argument to RestClient.builder(...):
@Bean(destroyMethod = "close")
public RestHighLevelClient buildClient() {
RestClientBuilder builder = RestClient.builder(new HttpHost(hostPrimary, Integer.valueOf(portPrimary), "http"),
new HttpHost(hostSecondary, Integer.valueOf(portSecondary), "http"));
RestHighLevelClient client = new RestHighLevelClient(builder);
LOG.info("RestHighLevelClient has been created with:{}", client);
return client;
}
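If you really need two independent RestHighLevelClient instances (one per cluster), a minimal sketch, assuming two placeholder properties application.elasticsearch.host1 and application.elasticsearch.host2 holding host:port values (the property names and bean names are made up for illustration):

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DualElasticSearchConfig {

    @Value("${application.elasticsearch.host1}") // hypothetical property
    private String host1;

    @Value("${application.elasticsearch.host2}") // hypothetical property
    private String host2;

    @Bean(destroyMethod = "close")
    public RestHighLevelClient esClientOne() {
        return buildClient(host1);
    }

    @Bean(destroyMethod = "close")
    public RestHighLevelClient esClientTwo() {
        return buildClient(host2);
    }

    // Shared helper: one host:port string -> one single-node client.
    private RestHighLevelClient buildClient(String hostAndPort) {
        String[] parts = hostAndPort.split(":");
        return new RestHighLevelClient(RestClient.builder(
                new HttpHost(parts[0], Integer.parseInt(parts[1]), "http")));
    }
}

Inject whichever client you need by bean name, e.g. @Qualifier("esClientOne").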
Related
I am attempting to implement a sticky session load balancer rule in a Zuul proxy service. I am using the code from this example: https://github.com/alejandro-du/vaadin-microservices-demo/blob/master/proxy-server/src/main/java/com/example/StickySessionRule.java
I seem to have everything configured correctly, and the rule is triggering in my debugger, but the call to RequestContext.getCurrentContext().getResponse() always returns null, so the cookie is never found, so the rule never takes effect.
The rest of the Zuul config is working 100%. My traffic is proxied and routed and I can use the app fine, only the sticky session rule is not working.
Is there another step I am missing to get the request wired in to this rule correctly?
My route config:
zuul.routes.appname.path=/appname/**
zuul.routes.appname.sensitiveHeaders=
zuul.routes.appname.stripPrefix=false
zuul.routes.appname.retryable=true
zuul.add-host-header=true
zuul.routes.appname.service-id=APP_NAME
hystrix.command.APP_NAME.execution.isolation.strategy=THREAD
hystrix.command.APP_NAME.execution.isolation.thread.timeoutInMilliseconds=125000
APP_NAME.ribbon.ServerListRefreshInterval=10000
APP_NAME.ribbon.retryableStatusCodes=500
APP_NAME.ribbon.MaxAutoRetries=5
APP_NAME.ribbon.MaxAutoRetriesNextServer=1
APP_NAME.ribbon.OkToRetryOnAllOperations=true
APP_NAME.ribbon.ReadTimeout=5000
APP_NAME.ribbon.ConnectTimeout=5000
APP_NAME.ribbon.EnablePrimeConnections=true
APP_NAME.ribbon.NFLoadBalancerRuleClassName=my.package.name.StickySessionRule
The app:
@EnableZuulProxy
@SpringBootApplication
public class ApplicationGateway {
public static void main(String[] args) {
SpringApplication.run(ApplicationGateway.class, args);
}
@Bean
public LocationRewriteFilter locationRewriteFilter() {
return new LocationRewriteFilter();
}
}
EDIT: As requested, the code:
import com.netflix.loadbalancer.Server;
import com.netflix.loadbalancer.ZoneAvoidanceRule;
import com.netflix.zuul.context.RequestContext;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
/**
* @author Alejandro Duarte.
*/
public class StickySessionRule extends ZoneAvoidanceRule {
public static final String COOKIE_NAME_SUFFIX = "-" + StickySessionRule.class.getSimpleName();
@Override
public Server choose(Object key) {
Optional<Cookie> cookie = getCookie(key);
if (cookie.isPresent()) {
Cookie hash = cookie.get();
List<Server> servers = getLoadBalancer().getReachableServers();
Optional<Server> server = servers.stream()
.filter(s -> s.isAlive() && s.isReadyToServe())
.filter(s -> hash.getValue().equals("" + s.hashCode()))
.findFirst();
if (server.isPresent()) {
return server.get();
}
}
return useNewServer(key);
}
private Server useNewServer(Object key) {
Server server = super.choose(key);
HttpServletResponse response = RequestContext.getCurrentContext().getResponse();
if (response != null) {
String cookieName = getCookieName(server);
Cookie newCookie = new Cookie(cookieName, "" + server.hashCode());
newCookie.setPath("/");
response.addCookie(newCookie);
}
return server;
}
private Optional<Cookie> getCookie(Object key) {
HttpServletRequest request = RequestContext.getCurrentContext().getRequest();
if (request != null) {
Server server = super.choose(key);
String cookieName = getCookieName(server);
Cookie[] cookies = request.getCookies();
if (cookies != null) {
return Arrays.stream(cookies)
.filter(c -> c.getName().equals(cookieName))
.findFirst();
}
}
return Optional.empty();
}
private String getCookieName(Server server) {
return server.getMetaInfo().getAppName() + COOKIE_NAME_SUFFIX;
}
}
I think you are missing a PreFilter, like this:
import com.netflix.zuul.context.RequestContext;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import org.springframework.cloud.netflix.zuul.filters.support.FilterConstants;
public class PreFilter extends com.netflix.zuul.ZuulFilter {
@Override
public Object run() {
RequestContext ctx = RequestContext.getCurrentContext();
RequestContext.getCurrentContext().set(FilterConstants.LOAD_BALANCER_KEY, ctx.getRequest());
return null;
}
@Override
public boolean shouldFilter() {
return true;
}
@Override
public int filterOrder() {
return FilterConstants.SEND_RESPONSE_FILTER_ORDER;
}
@Override
public String filterType() {
return "pre";
}
}
Register it as a bean:
@Bean
public PreFilter preFilter() {
return new PreFilter();
}
And use it in your rule:
@Override
public Server choose(Object key) {
javax.servlet.http.HttpServletRequest request = (javax.servlet.http.HttpServletRequest) key;
// ...
}
RequestContext does not work here because of hystrix.command.APP_NAME.execution.isolation.strategy=THREAD: with thread isolation, choose(...) runs on a Hystrix worker thread, where the thread-local RequestContext is empty, so the request has to be handed over as the load-balancer key.
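For completeness, a hedged sketch of how the rest of choose(...) might use that request instead of RequestContext (the matchesCookie helper is hypothetical; the cookie naming follows the question's StickySessionRule, and the response cookie still has to be set elsewhere, e.g. in a post filter, since no HttpServletResponse is reachable on this thread):

@Override
public Server choose(Object key) {
    // With THREAD isolation the rule runs on a Hystrix worker thread, so the
    // request arrives here as the load-balancer key set by the PreFilter above.
    javax.servlet.http.HttpServletRequest request = (javax.servlet.http.HttpServletRequest) key;
    javax.servlet.http.Cookie[] cookies = (request != null) ? request.getCookies() : null;
    if (cookies != null) {
        List<Server> servers = getLoadBalancer().getReachableServers();
        Optional<Server> sticky = servers.stream()
                .filter(s -> s.isAlive() && s.isReadyToServe())
                .filter(s -> matchesCookie(cookies, s))
                .findFirst();
        if (sticky.isPresent()) {
            return sticky.get();
        }
    }
    return super.choose(key);
}

// Hypothetical helper reusing the question's cookie naming scheme.
private boolean matchesCookie(javax.servlet.http.Cookie[] cookies, Server server) {
    String cookieName = server.getMetaInfo().getAppName() + COOKIE_NAME_SUFFIX;
    return Arrays.stream(cookies).anyMatch(
            c -> c.getName().equals(cookieName) && c.getValue().equals("" + server.hashCode()));
}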
I have two instances of my application on the same machine (though they could be on different machines as well), in two Tomcat instances with different ports, and Apache ActiveMQ is embedded in the application.
I have configured a static network of brokers so that a message from one instance can be consumed by every other instance as well (each instance can be both producer and consumer).
servlet:
package com.activemq.servlet;
import java.io.IOException;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import javax.jms.JMSException;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.activemq.ActiveMQStartup;
import com.activemq.MQPublisher;
import com.activemq.SendMsg;
import com.activemq.SendMsgToAllInstance;
import com.activemq.TestPublisher;
/**
* Servlet implementation class ActiveMQStartUpServlet
*/
@WebServlet(value = "/activeMQStartUpServlet", loadOnStartup = 1)
public class ActiveMQStartUpServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
private ActiveMQStartup mqStartup = null;
private static final Map pooledPublishers = new HashMap();
@Override
public void init(ServletConfig config) throws ServletException {
System.out.println("starting servelt--------------");
super.init(config);
//Apache Active MQ Startup
mqStartup = new ActiveMQStartup();
mqStartup.startBrokerService();
}
@Override
protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
System.out.println(req.getParameter("distributedMsg"));
String mqConfig = null;
String distributedMsg = req.getParameter("distributedMsg");
String simpleMsg = req.getParameter("simpleMsg");
if (distributedMsg != null && !distributedMsg.equals(""))
mqConfig = "distributedMsg";
else if (simpleMsg != null && !simpleMsg.equals(""))
mqConfig = "simpleMsg";
MQPublisher publisher = acquirePublisher(mqConfig);
try {
publisher.publish(mqConfig);
} catch (JMSException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} finally {
releasePublisher(publisher);
}
}
#SuppressWarnings("unchecked")
private void releasePublisher(MQPublisher publisher) {
if (publisher == null) return;
#SuppressWarnings("rawtypes")
LinkedList publishers;
TestPublisher poolablePublisher = (TestPublisher)publisher;
publishers = getPooledPublishers(poolablePublisher.getConfigurationName());
synchronized (publishers) {
publishers.addLast(poolablePublisher);
}
}
private MQPublisher acquirePublisher(String mqConfig) {
LinkedList publishers = getPooledPublishers(mqConfig);
MQPublisher publisher = getMQPublisher(publishers);
if (publisher != null) return publisher;
try {
if (mqConfig.equals("distributedMsg"))
return new TestPublisher(MQConfiguration.getConfiguration("distributedMsg"), new SendMsgToAllInstance());
else
return new TestPublisher(MQConfiguration.getConfiguration("simpleMsg"), new SendMsg());
}catch(Exception e){
e.printStackTrace();
}
return null;
}
private LinkedList getPooledPublishers(String mqConfig) {
LinkedList publishers = null;
publishers = (LinkedList) pooledPublishers.get(mqConfig);
if (publishers == null) {
synchronized(pooledPublishers) {
publishers = (LinkedList) pooledPublishers.get(mqConfig);
if (publishers == null) {
publishers = new LinkedList();
pooledPublishers.put(mqConfig, publishers);
}
}
}
return publishers;
}
private MQPublisher getMQPublisher(LinkedList publishers) {
synchronized (publishers) {
if (!publishers.isEmpty()) {
return (TestPublisher) publishers.removeFirst();
}
}
return null;
}
}
Configuration:
package com.activemq.servlet;
import java.util.HashMap;
import java.util.Map;
import javax.jms.JMSException;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;
import org.apache.activemq.ActiveMQConnectionFactory;
import com.activemq.ActiveMQContext;
public class MQConfiguration {
private static final Map configurations = new HashMap();
private String mqConfig;
private String topicName;
private TopicConnection topicConnection = null;
private MQConfiguration(String mqConfig, String userName, String userPassword) {
this.mqConfig = mqConfig;
try {
String topicFactoryConName = ActiveMQContext.getProperty(mqConfig);
this.topicName = (mqConfig.equals("distributedMsg") ? ActiveMQContext.getProperty("distributedTopic"):ActiveMQContext.getProperty("normalTopic"));
TopicConnectionFactory factory = (ActiveMQConnectionFactory) ActiveMQContext.getContext()
.lookup(topicFactoryConName);
this.topicConnection = factory.createTopicConnection();
this.topicConnection.start();
} catch (Exception e) {
System.out.println("error: " + e);
}
}
public static MQConfiguration getConfiguration(String mqConfig) {
if (mqConfig == null || "".equals(mqConfig)) {
throw new IllegalArgumentException("mqConfig is null or empty");
}
MQConfiguration config = (MQConfiguration) configurations.get(mqConfig);
if (config != null) {
return config;
}
synchronized (configurations) {
config = (MQConfiguration) configurations.get(mqConfig);
if (config == null) {
config = new MQConfiguration(mqConfig, "userName", "userPassword");
}
configurations.put(mqConfig, config);
}
return config;
}
public String getMqConfig() {
return this.mqConfig;
}
public TopicSession createTopicSession(boolean isTransacted, int autoAcknowledge) throws JMSException {
if (this.topicConnection == null) {
IllegalStateException ise = new IllegalStateException("topic connection not configured");
throw ise;
}
return this.topicConnection.createTopicSession(isTransacted, autoAcknowledge);
}
public Topic getTopic() {
try {
return (Topic) ActiveMQContext.getContext().lookup(this.topicName);
} catch (Exception e) {
System.out.println("error: " + e.getMessage());
}
return null;
}
}
publisher:
package com.activemq;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.jms.TopicPublisher;
import javax.jms.TopicSession;
import com.activemq.servlet.MQConfiguration;
public class TestPublisher implements MQPublisher {
private final String configurationName;
private TopicSession topicSession = null;
private TopicPublisher topicPublisher = null;
public TestPublisher(MQConfiguration config, Object messageListener) throws JMSException {
if (config == null) {
throw new IllegalArgumentException("config == null");
}
Topic topic = config.getTopic();
this.configurationName = config.getMqConfig();
this.topicSession = config.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
this.topicPublisher = this.topicSession.createPublisher(topic);
MessageConsumer msgConsumer = this.topicSession.createConsumer(topic);
msgConsumer.setMessageListener((MessageListener) messageListener);
}
@Override
public void publish(String msg) throws JMSException {
this.topicPublisher.publish(createMessage(msg, this.topicSession));
}
private Message createMessage(String msg, Session session) throws JMSException {
TextMessage message = session.createTextMessage(msg);
return message;
}
public String getConfigurationName() {
return this.configurationName;
}
}
Consumer:
package com.activemq;
import javax.jms.Message;
import javax.jms.MessageListener;
public class SendMsgToAllInstance implements MessageListener {
@Override
public void onMessage(Message arg0) {
System.out.println("distributed message-------------");
// We call the DAO layer to fetch some data and cache it
}
}
JNDI: activemq-jndi.properties
# JNDI properties file to setup the JNDI server within ActiveMQ
#
# Default JNDI properties settings
#
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url=tcp://localhost:61616
activemq.network.connector=static:(tcp://localhost:61620)
#activemq.network.connector=broker:(tcp://localhost:61619,network:static:tcp://localhost:61620)?persistent=false&useJmx=true
activemq.data.directory=data61619
activemq.jmx.port=1099
#
# Set the connection factory name(s) as well as the destination names. The connection factory name(s)
# as well as the second part (after the dot) of the left hand side of the destination definition
# must be used in the JNDI lookups.
#
connectionFactoryNames = distributedMsgFactory,simpleMsgFactory
topic.jms/distributedTopic=distributedTopic
topic.jms/normalTopic=normalTopic
distributedMsg=distributedMsgFactory
simpleMsg=simpleMsgFactory
distributedTopic=jms/distributedTopic
normalTopic=jms/normalTopic
ActiveMQStartup:
package com.activemq;
import java.net.URI;
import org.apache.activemq.broker.BrokerPlugin;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.TransportConnector;
import org.apache.activemq.broker.jmx.ManagementContext;
import org.apache.activemq.network.NetworkConnector;
import org.apache.activemq.security.JaasAuthenticationPlugin;
public class ActiveMQStartup {
private final String bindAddress;
private final String dataDirectory;
private BrokerService broker = new BrokerService();
protected final int numRestarts = 3;
protected final int networkTTL = 2;
protected final int consumerTTL = 2;
protected final boolean dynamicOnly = true;
protected final String networkBroker;
protected final String jmxPort;
public ActiveMQStartup() {
ActiveMQContext context = new ActiveMQContext();
context.loadJndiProperties();
bindAddress = ActiveMQContext.getProperty("java.naming.provider.url");
dataDirectory = ActiveMQContext.getProperty("activemq.data.directory");
networkBroker = ActiveMQContext.getProperty("activemq.network.connector");
jmxPort = ActiveMQContext.getProperty("activemq.jmx.port");
}
// Start activemq broker service
public void startBrokerService() {
try {
broker.setDataDirectory("../" + dataDirectory);
broker.setBrokerName(dataDirectory);
broker.setUseShutdownHook(true);
TransportConnector connector = new TransportConnector();
connector.setUri(new URI(bindAddress));
//broker.setPlugins(new BrokerPlugin[]{new JaasAuthenticationPlugin()});
ManagementContext mgContext = new ManagementContext();
if (networkBroker != null && !networkBroker.isEmpty()) {
NetworkConnector networkConnector = broker.addNetworkConnector(networkBroker);
networkConnector.setName(dataDirectory);
mgContext.setConnectorPort(Integer.parseInt(jmxPort));
broker.setManagementContext(mgContext);
configureNetworkConnector(networkConnector);
}
broker.setNetworkConnectorStartAsync(true);
broker.addConnector(connector);
broker.start();
} catch (Exception e) {
System.out.println("Failed to start Apache MQ Broker : " + e);
}
}
private void configureNetworkConnector(NetworkConnector networkConnector) {
networkConnector.setDuplex(true);
networkConnector.setNetworkTTL(networkTTL);
networkConnector.setDynamicOnly(dynamicOnly);
networkConnector.setConsumerTTL(consumerTTL);
//networkConnector.setStaticBridge(true);
}
// Stop broker service
public void stopBrokerService() {
try {
broker.stop();
} catch (Exception e) {
System.out.println("Unable to stop the ApacheMQ Broker service " + e);
}
}
}
I am starting the Tomcat instances one by one and can see the network connection between the brokers getting established.
When I send a message from instance1 or instance2 the first time, it is consumed on that instance only, but when I send a message from the second instance it is consumed by both.
Code in git: https://github.com/AratRana/ApacheActiveMQ
Could you point out where I am wrong?
Finally, I was able to do it. When I started the consumer during server startup, I could see the message consumed in all instances. So, to achieve this, the consumers need to be started before publishing any message.
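For anyone hitting the same issue: a minimal sketch of what that can look like in the servlet's init(), reusing acquirePublisher/releasePublisher from the question above (the TestPublisher constructor also registers the topic's MessageConsumer, so the subscription exists on every broker before the first publish):

@Override
public void init(ServletConfig config) throws ServletException {
    super.init(config);
    mqStartup = new ActiveMQStartup();
    mqStartup.startBrokerService();
    // Eagerly create one publisher per configuration, then return it to the
    // pool; creating it is what registers the consumer with the broker.
    releasePublisher(acquirePublisher("distributedMsg"));
    releasePublisher(acquirePublisher("simpleMsg"));
}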
We are just starting on AWS and have a requirement to use AWS ElastiCache with Redis (Jedis) and Spring.
Spring-data-redis 1.8.8.RELEASE
aws-java-sdk 1.11.228
Spring 4.2.9.RELEASE
jedis 2.9.0
I was able to connect and cache data to a local Redis with the code below. I have tried making the code changes from https://github.com/fishercoder1534/AmazonElastiCacheExample/tree/master/src/main/java, but have not been successful. I would really appreciate some guidance and help with sample code.
AWS ElastiCache is currently configured as option 1, but we will also need option 2 soon.
1. Non-replicated cluster - Redis cluster-disabled with no replicas
2. Replicated cluster - Redis cluster-enabled, or Redis cluster-disabled with read replicas.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.JdkSerializationRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import redis.clients.jedis.Jedis;
import org.springframework.cache.interceptor.KeyGenerator;
import java.lang.reflect.Method;
import java.util.List;
@Configuration
@EnableCaching
// @PropertySource("classpath:/redis.properties")
public class CacheConfig extends CachingConfigurerSupport {
// private @Value("${redis.host}") String redisHost;
// private @Value("${redis.port}") int redisPort;
//@Bean
public KeyGenerator keyGenerator() {
return new KeyGenerator() {
@Override
public Object generate(Object o, Method method, Object... objects) {
// This will generate a unique key of the class name, the method name, and all method parameters appended.
StringBuilder sb = new StringBuilder();
sb.append(o.getClass().getName());
sb.append(method.getName());
for (Object obj : objects) {
sb.append(obj.toString());
}
return sb.toString();
}
};
}
@Bean
public JedisConnectionFactory redisConnectionFactory() {
JedisConnectionFactory redisConnectionFactory = new JedisConnectionFactory();
// Defaults for redis running on Local Docker
redisConnectionFactory.setHostName("192.168.99.100");
redisConnectionFactory.setPort(6379);
return redisConnectionFactory;
}
@Bean
public RedisTemplate<String, String> redisTemplate(RedisConnectionFactory cf) {
RedisTemplate<String, String> redisTemplate = new RedisTemplate<String, String>();
redisTemplate.setConnectionFactory(cf);
redisTemplate.setDefaultSerializer(new JdkSerializationRedisSerializer());
return redisTemplate;
}
@Bean
public CacheManager cacheManager(RedisTemplate<String, String> redisTemplate) {
RedisCacheManager cacheManager = new RedisCacheManager(redisTemplate);
// Number of seconds before expiration. Defaults to unlimited (0)
cacheManager.setDefaultExpiration(1200);
cacheManager.getCacheNames().forEach(System.out::println);
return cacheManager;
}
}
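For option 1 (cluster mode disabled, no replicas), the smallest change to the code above is usually to point the JedisConnectionFactory at the ElastiCache primary endpoint instead of the local Docker address; a minimal sketch, with a placeholder endpoint name:

@Bean
public JedisConnectionFactory redisConnectionFactory() {
    JedisConnectionFactory factory = new JedisConnectionFactory();
    // Placeholder hostname: use the primary endpoint shown in the ElastiCache console.
    factory.setHostName("my-cluster.xxxxxx.0001.use1.cache.amazonaws.com");
    factory.setPort(6379);
    return factory;
}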
Implemented caching with AWS ElastiCache + Lettuce (Redis Java client) + spring-data-redis: 3 masters with 2 slaves and SSL, using Spring's @Cacheable and @CacheEvict annotations. Please provide any input if you see an issue or a better way to do it.
Spring 4.3.12.RELEASE
Spring-data-redis 1.8.8.RELEASE
aws-java-sdk 1.11.228
Lettuce (Redis java Client) 4.4.2.Final
@Configuration
@EnableCaching
public class CacheConfig extends CachingConfigurerSupport {
long expirationDate = 1200;
static AWSCredentials credentials = null;
static {
try {
//credentials = new ProfileCredentialsProvider("default").getCredentials();
credentials = new SystemPropertiesCredentialsProvider().getCredentials();
} catch (Exception e) {
System.out.println("Got exception..........");
throw new AmazonClientException("Cannot load the credentials from the credential profiles file. "
+ "Please make sure that your credentials file is at the correct "
+ "location (/Users/USERNAME/.aws/credentials), and is in valid format.", e);
}
}
@Bean
public LettuceConnectionFactory redisConnectionFactory() {
AmazonElastiCache elasticacheClient = AmazonElastiCacheClientBuilder.standard().withCredentials(new AWSStaticCredentialsProvider(credentials)).withRegion(Regions.US_EAST_1).build();
DescribeCacheClustersRequest dccRequest = new DescribeCacheClustersRequest();
dccRequest.setShowCacheNodeInfo(true);
DescribeCacheClustersResult clusterResult = elasticacheClient.describeCacheClusters(dccRequest);
List<CacheCluster> cacheClusters = clusterResult.getCacheClusters();
List<String> clusterNodes = new ArrayList <String> ();
try {
for (CacheCluster cacheCluster : cacheClusters) {
for (CacheNode cacheNode : cacheCluster.getCacheNodes()) {
String addr = cacheNode.getEndpoint().getAddress();
int port = cacheNode.getEndpoint().getPort();
String url = addr + ":" + port;
if(<CLUSTER NAME>.equalsIgnoreCase(cacheCluster.getReplicationGroupId()))
clusterNodes.add(url);
}
}
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
LettuceConnectionFactory redisConnectionFactory = new LettuceConnectionFactory(new RedisClusterConfiguration(clusterNodes));
redisConnectionFactory.setUseSsl(true);
redisConnectionFactory.afterPropertiesSet();
return redisConnectionFactory;
}
@Bean
public RedisTemplate<String, String> redisTemplate(RedisConnectionFactory cf) {
RedisTemplate<String, String> redisTemplate = new RedisTemplate<String, String>();
redisTemplate.setConnectionFactory(cf);
redisTemplate.setDefaultSerializer(new JdkSerializationRedisSerializer());
return redisTemplate;
}
@Bean
public CacheManager cacheManager(RedisTemplate<String, String> redisTemplate) {
RedisCacheManager cacheManager = new RedisCacheManager(redisTemplate);
// Number of seconds before expiration. Defaults to unlimited (0)
cacheManager.setDefaultExpiration(expirationDate);
cacheManager.setLoadRemoteCachesOnStartup(true);
return cacheManager;
}
}
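A short usage sketch of the annotations mentioned above (the service, cache region, and DAO call are made up for illustration):

@Service
public class ProductService {

    // Result is cached in Redis under the "products" cache; on a hit the
    // method body is skipped entirely.
    @Cacheable("products")
    public Product findProduct(String id) {
        return loadFromDatabase(id); // hypothetical DAO call
    }

    // Clears the whole "products" cache region, e.g. after a bulk update.
    @CacheEvict(value = "products", allEntries = true)
    public void refreshProducts() {
    }
}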
Is it possible to use CachingHttpAsyncClient with AsyncRestTemplate? HttpComponentsAsyncClientHttpRequestFactory expects a CloseableHttpAsyncClient but CachingHttpAsyncClient does not extend it.
This is known as issue SPR-15664 for versions up to 4.3.9 and 5.0.RC2; it is fixed in 4.3.10 and 5.0.RC3. The only way around it on earlier versions is creating a custom AsyncClientHttpRequestFactory implementation based on the existing HttpComponentsAsyncClientHttpRequestFactory:
// package required for HttpComponentsAsyncClientHttpRequest visibility
package org.springframework.http.client;
import java.io.IOException;
import java.net.URI;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.Configurable;
import org.apache.http.client.methods.HttpUriRequest;
import org.apache.http.client.protocol.HttpClientContext;
import org.apache.http.impl.client.cache.CacheConfig;
import org.apache.http.impl.client.cache.CachingHttpAsyncClient;
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;
import org.apache.http.protocol.HttpContext;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.http.HttpMethod;
import org.springframework.util.Assert;
// TODO add support for other CachingHttpAsyncClient options, e.g. HttpCacheStorage
public class HttpComponentsCachingAsyncClientHttpRequestFactory extends HttpComponentsClientHttpRequestFactory implements AsyncClientHttpRequestFactory, InitializingBean {
private final CloseableHttpAsyncClient wrappedHttpAsyncClient;
private final CachingHttpAsyncClient cachingHttpAsyncClient;
public HttpComponentsCachingAsyncClientHttpRequestFactory() {
this(HttpAsyncClients.createDefault(), CacheConfig.DEFAULT);
}
public HttpComponentsCachingAsyncClientHttpRequestFactory(final CacheConfig config) {
this(HttpAsyncClients.createDefault(), config);
}
public HttpComponentsCachingAsyncClientHttpRequestFactory(final CloseableHttpAsyncClient client) {
this(client, CacheConfig.DEFAULT);
}
public HttpComponentsCachingAsyncClientHttpRequestFactory(final CloseableHttpAsyncClient client, final CacheConfig config) {
Assert.notNull(client, "HttpAsyncClient must not be null");
wrappedHttpAsyncClient = client;
cachingHttpAsyncClient = new CachingHttpAsyncClient(client, config);
}
@Override
public void afterPropertiesSet() {
startAsyncClient();
}
private void startAsyncClient() {
if (!wrappedHttpAsyncClient.isRunning()) {
wrappedHttpAsyncClient.start();
}
}
@Override
public ClientHttpRequest createRequest(final URI uri, final HttpMethod httpMethod) throws IOException {
throw new IllegalStateException("Synchronous execution not supported");
}
@Override
public AsyncClientHttpRequest createAsyncRequest(final URI uri, final HttpMethod httpMethod) throws IOException {
startAsyncClient();
final HttpUriRequest httpRequest = createHttpUriRequest(httpMethod, uri);
postProcessHttpRequest(httpRequest);
HttpContext context = createHttpContext(httpMethod, uri);
if (context == null) {
context = HttpClientContext.create();
}
// Request configuration not set in the context
if (context.getAttribute(HttpClientContext.REQUEST_CONFIG) == null) {
// Use request configuration given by the user, when available
RequestConfig config = null;
if (httpRequest instanceof Configurable) {
config = ((Configurable) httpRequest).getConfig();
}
if (config == null) {
config = createRequestConfig(cachingHttpAsyncClient);
}
if (config != null) {
context.setAttribute(HttpClientContext.REQUEST_CONFIG, config);
}
}
return new HttpComponentsAsyncClientHttpRequest(cachingHttpAsyncClient, httpRequest, context);
}
@Override
public void destroy() throws Exception {
try {
super.destroy();
} finally {
wrappedHttpAsyncClient.close();
}
}
}
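A short usage sketch wiring the factory into an AsyncRestTemplate (the cache size and URL are arbitrary):

HttpComponentsCachingAsyncClientHttpRequestFactory factory =
        new HttpComponentsCachingAsyncClientHttpRequestFactory(
                HttpAsyncClients.createDefault(),
                CacheConfig.custom().setMaxCacheEntries(1000).build());
factory.afterPropertiesSet(); // starts the wrapped async client
AsyncRestTemplate restTemplate = new AsyncRestTemplate(factory);
ListenableFuture<ResponseEntity<String>> future =
        restTemplate.getForEntity("https://example.org/resource", String.class);

Repeated GETs to the same URL should then be served from the HTTP cache, subject to the response's caching headers.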
Hi all,
I am a newbie to Lucene, and I'm using spring-mvc (3.2.5.RELEASE) and lucene (4.6.0), both currently the newest versions.
How can I use NEAR REAL TIME search?
I wrote this code to get a singleton instance of IndexWriter:
package com.github.yingzhuo.mycar.search;
import java.io.IOException;
import org.apache.commons.io.FileUtils;
import org.apache.commons.io.IOUtils;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.FactoryBean;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.core.io.Resource;
import org.springframework.util.Assert;
import org.wltea.analyzer.lucene.IKAnalyzer;
public class IndexWriterFactoryBean implements FactoryBean<IndexWriter>, InitializingBean, DisposableBean {
private static final Logger LOGGER = LoggerFactory.getLogger(IndexWriterFactoryBean.class);
private Analyzer analyzer = new IKAnalyzer(false);
private Resource indexDirectory = null;
private IndexWriter indexWriter = null;
private Directory directory = null;
@Override
public IndexWriter getObject() throws Exception {
return indexWriter;
}
@Override
public Class<?> getObjectType() {
return IndexWriter.class;
}
@Override
public boolean isSingleton() {
return true;
}
@Override
public void afterPropertiesSet() throws Exception {
Assert.notNull(analyzer, "property 'analyzer' must be set.");
Assert.notNull(indexDirectory, "property 'indexDirectory' must be set.");
// Create the index directory here: the 'indexDirectory' property is only
// injected after construction, so a constructor-time check would always see null.
if (!indexDirectory.getFile().exists()) {
FileUtils.forceMkdir(indexDirectory.getFile());
}
directory = FSDirectory.open(indexDirectory.getFile());
indexWriter = new IndexWriter(directory, new IndexWriterConfig(Version.LUCENE_46, analyzer));
}
@Override
public void destroy() throws Exception {
IOUtils.closeQuietly(indexWriter);
IOUtils.closeQuietly(directory);
IOUtils.closeQuietly(analyzer);
}
// getter & setter
// ------------------------------------------------------------------------------------------
public void setAnalyzer(Analyzer analyzer) {
this.analyzer = analyzer;
}
public void setIndexDirectory(Resource indexDirectory) {
this.indexDirectory = indexDirectory;
}
}
and this utility to get a DirectoryReader via a static method:
import java.io.IOException;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import com.github.yingzhuo.mycar.config.SpringUtils;
public final class DirectoryReaderHolder {
private static DirectoryReader HOLDER = null;
public synchronized static DirectoryReader get() {
if (HOLDER == null) {
try {
HOLDER = DirectoryReader.open(SpringUtils.getBean(IndexWriter.class), true);
} catch (IOException e) {
throw new IllegalStateException(e);
}
}
return HOLDER;
}
public static synchronized void set(DirectoryReader directoryReader) {
if (directoryReader == null) {
throw new NullPointerException();
} else {
HOLDER = directoryReader;
}
}
}
and this bean to inject into my spring-mvc controller. In the 'create' method, I am trying to get a new reader before I create an IndexSearcher, but HOW SHOULD I HANDLE THE OLD READER?
Can I close it directly? If other threads are still using the old reader, will very bad things happen?
package com.github.yingzhuo.mycar.search;
import java.io.IOException;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
public class IndexSearcherManager {
public IndexSearcher create() {
try {
DirectoryReader oldReader = DirectoryReaderHolder.get();
DirectoryReader newReader = DirectoryReader.openIfChanged(oldReader);
if (newReader != null) {
oldReader.close(); // AM I RIGHT ???
oldReader = newReader;
}
return new IndexSearcher(oldReader);
} catch (IOException e) {
throw new IllegalStateException(e);
}
}
}
Any suggestions? Thank you.
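For what it's worth, Lucene itself ships SearcherManager, which handles exactly this reader hand-off safely: it reference-counts readers, so in-flight searches keep the old reader alive until they release it. A minimal sketch built on the IndexWriter from the factory bean above:

import java.io.IOException;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.SearcherFactory;
import org.apache.lucene.search.SearcherManager;

public class NrtSearcherHolder {

    private final SearcherManager manager;

    public NrtSearcherHolder(IndexWriter writer) throws IOException {
        // true = apply pending deletes when opening the NRT reader
        this.manager = new SearcherManager(writer, true, new SearcherFactory());
    }

    public void searchExample() throws IOException {
        manager.maybeRefresh(); // cheap no-op if nothing changed since last refresh
        IndexSearcher searcher = manager.acquire();
        try {
            // ... run queries with searcher ...
        } finally {
            manager.release(searcher); // never close the searcher/reader yourself
        }
    }
}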