Using Spring Boot Data Redis to connect to a Redis cluster - spring-boot

I used Spring Boot Data Redis (version 2.1.3) to connect to the Redis cluster. The configuration is as follows:
@Bean
@Primary
public RedisConnectionFactory myLettuceConnectionFactory(GenericObjectPoolConfig poolConfig) {
    RedisClusterConfiguration redisClusterConfiguration = new RedisClusterConfiguration();
    final List<String> nodeList = redisProperties.getCluster().getNodes();
    Set<RedisNode> nodes = new HashSet<>();
    for (String ipPort : nodeList) {
        String[] ipAndPort = ipPort.split(":");
        nodes.add(new RedisNode(ipAndPort[0].trim(), Integer.parseInt(ipAndPort[1])));
    }
    redisClusterConfiguration.setPassword(RedisPassword.of(redisProperties.getPassword()));
    redisClusterConfiguration.setClusterNodes(nodes);
    redisClusterConfiguration.setMaxRedirects(redisProperties.getCluster().getMaxRedirects());
    LettuceClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
            .commandTimeout(redisProperties.getTimeout())
            .poolConfig(poolConfig)
            .build();
    return new LettuceConnectionFactory(redisClusterConfiguration, clientConfig);
}
However, at runtime a WARN exception message keeps being logged.
This seems to be a problem with Lettuce. I found the question "How to map remote host & port to localhost using Lettuce", but I don't know how to apply that in Spring Boot Data Redis. Any solution is welcome, thank you.

I've found the answer. Define a ClientResources like this:
MappingSocketAddressResolver resolver = MappingSocketAddressResolver.create(DnsResolvers.UNRESOLVED,
        hostAndPort -> {
            if (hostAndPort.getHostText().startsWith("172.31")) {
                return HostAndPort.of(ipStr, hostAndPort.getPort());
            }
            return hostAndPort;
        });
ClientResources clientResources = ClientResources.builder()
        .socketAddressResolver(resolver)
        .build();
Then set it via the clientResources method of the LettuceClientConfiguration builder, and Lettuce works normally.
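For completeness, a minimal sketch (not spelled out in the original answer) of wiring that ClientResources into the pooling client configuration from the first snippet; ipStr is assumed to be the externally reachable address defined elsewhere in your configuration:

LettuceClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
        .commandTimeout(redisProperties.getTimeout())
        .poolConfig(poolConfig)
        .clientResources(clientResources) // the ClientResources with the custom address resolver
        .build();
LettuceConnectionFactory factory = new LettuceConnectionFactory(redisClusterConfiguration, clientConfig);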

Related

Leverage Spring Boot Redis auto-configuration logic for RedisConnectionFactory

Spring Boot auto-configures a RedisConnectionFactory if spring-data-redis is on the classpath, and the RedisConnectionFactory is initialized in LettuceConnectionConfiguration when lettuce-core is also on the classpath.
I have only one Redis store as of now, so I am leveraging Spring Boot auto-configuration.
Now I'm adding a second Redis store: one store is used as the default, and the other is used when cacheManager = "secondayCacheManager" is specified on the @Cacheable annotation, so the application should be able to cache and read from both Redis stores.
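For illustration only (the method, repository, and cache names below are assumptions, not from the original post), selecting the secondary store on a cached method would look like this:

// Hedged example: this method's results go to the secondary Redis store.
@Cacheable(value = "customer-profiles", cacheManager = "secondayCacheManager")
public CustomerProfile findProfile(String customerId) {
    return customerRepository.loadProfile(customerId);
}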
To configure both Redis stores, we have to configure both the primary and secondary RedisConnectionFactory and CacheManager using custom configuration (because Spring doesn't auto-configure RedisConnectionFactory if one already exists in any custom configuration).
This custom configuration misses a lot of the logic that happens while configuring RedisConnectionFactory in LettuceConnectionConfiguration.
The auto-configure logic in LettuceConnectionConfiguration is package-private, so it cannot be called directly from custom configuration.
We would like to leverage the auto-configure logic in LettuceConnectionConfiguration while configuring the custom RedisConnectionFactory for both the primary and secondary Redis caches.
Is there a way to achieve this?
The reason is that we would like to keep the Redis connection configuration the same as what Spring Boot auto-configuration produces.
Currently I am using the code below to configure both the primary and secondary RedisConnectionFactory with pool configuration, with some code copied from the LettuceConnectionConfiguration class.
public static LettuceConnectionFactory buildLettuceConnectionFactory(RedisProperties properties, ClientResources clientResources) {
    RedisStandaloneConfiguration standaloneConfiguration = new RedisStandaloneConfiguration(properties.getHost(), properties.getPort());
    standaloneConfiguration.setDatabase(properties.getDatabase());
    if (properties.getPassword() != null) {
        standaloneConfiguration.setPassword(RedisPassword.of(properties.getPassword()));
    }
    if (properties.getUsername() != null) {
        standaloneConfiguration.setUsername(properties.getUsername());
    }
    LettucePoolingClientConfiguration poolingClientConfiguration = LettucePoolingClientConfiguration.builder()
            .poolConfig(buildGenericObjectPoolConfig(properties))
            .shutdownTimeout(properties.getLettuce().getShutdownTimeout())
            .clientOptions(createClientOptions(properties))
            .clientResources(clientResources)
            .build();
    LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(
            standaloneConfiguration, poolingClientConfiguration);
    lettuceConnectionFactory.afterPropertiesSet();
    return lettuceConnectionFactory;
}

private static GenericObjectPoolConfig buildGenericObjectPoolConfig(RedisProperties properties) {
    RedisProperties.Pool pool = properties.getLettuce().getPool();
    GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
    if (Objects.nonNull(pool)) {
        poolConfig.setMaxIdle(pool.getMaxIdle());
        poolConfig.setMinIdle(pool.getMinIdle());
        poolConfig.setMaxTotal(pool.getMaxActive());
        poolConfig.setMaxWaitMillis(pool.getMaxWait().toMillis());
    }
    return poolConfig;
}

private static ClientOptions createClientOptions(RedisProperties properties) {
    ClientOptions.Builder builder = initializeClientOptionsBuilder(properties);
    Duration connectTimeout = properties.getConnectTimeout();
    if (connectTimeout != null) {
        builder.socketOptions(SocketOptions.builder().connectTimeout(connectTimeout).build());
    }
    return builder.timeoutOptions(TimeoutOptions.enabled()).build();
}

private static ClientOptions.Builder initializeClientOptionsBuilder(RedisProperties properties) {
    if (properties.getCluster() != null) {
        ClusterClientOptions.Builder builder = ClusterClientOptions.builder();
        RedisProperties.Lettuce.Cluster.Refresh refreshProperties = properties.getLettuce().getCluster().getRefresh();
        ClusterTopologyRefreshOptions.Builder refreshBuilder = ClusterTopologyRefreshOptions.builder()
                .dynamicRefreshSources(refreshProperties.isDynamicRefreshSources());
        if (refreshProperties.getPeriod() != null) {
            refreshBuilder.enablePeriodicRefresh(refreshProperties.getPeriod());
        }
        if (refreshProperties.isAdaptive()) {
            refreshBuilder.enableAllAdaptiveRefreshTriggers();
        }
        return builder.topologyRefreshOptions(refreshBuilder.build());
    }
    return ClientOptions.builder();
}

ListenerExecutionFailedException NullPointerException when trying to index a Kafka payload through the new Elasticsearch Java API Client

I'm migrating from the High Level REST Client (HLRC) to the new client. Things were smooth, but for some reason I cannot index a specific class/document. Here is my client implementation and index request:
@Configuration
public class ClientConfiguration {

    @Autowired
    private InternalProperties conf;

    public ElasticsearchClient sslClient() {
        CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        credentialsProvider.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials(conf.getElasticsearchUser(), conf.getElasticsearchPassword()));
        HttpHost httpHost = new HttpHost(conf.getElasticsearchAddress(), conf.getElasticsearchPort(), "https");
        RestClientBuilder restClientBuilder = RestClient.builder(httpHost);
        try {
            SSLContext sslContext = SSLContexts.custom().loadTrustMaterial(null, (x509Certificates, s) -> true).build();
            restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
                @Override
                public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
                    return httpClientBuilder.setSSLContext(sslContext)
                            .setDefaultCredentialsProvider(credentialsProvider);
                }
            });
        } catch (Exception e) {
            e.printStackTrace();
        }
        RestClient restClient = restClientBuilder.build();
        ElasticsearchTransport transport = new RestClientTransport(
                restClient, new JacksonJsonpMapper());
        ElasticsearchClient client = new ElasticsearchClient(transport);
        return client;
    }
}
@Service
public class ThisDtoIndexClass extends ConfigAndProperties {

    public ThisDtoIndexClass() {
    }

    // client is declared in the class it's extending from
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }

    @KafkaListener(topics = "esTopic")
    public void in(@Payload(required = false) customDto doc)
            throws ThisDtoIndexClassException, ElasticsearchException, IOException {
        if (doc != null && doc.getId() != null) {
            IndexRequest.Builder<customDto> indexReqBuilder = new IndexRequest.Builder<>();
            indexReqBuilder.index("index-for-this-Dto");
            indexReqBuilder.id(doc.getId());
            indexReqBuilder.document(doc);
            IndexResponse response = client.index(indexReqBuilder.build());
        } else {
            throw new ThisDtoIndexClassException("document is null");
        }
    }
}
This is all done in Spring Boot (v2.6.8) with ES 7.17.3. According to the debugger, the payload is NOT null; it even fetches the id correctly while stepping through. For some reason, it throws an org.springframework.kafka.listener.ListenerExecutionFailedException on the last line (during the .build?). Nothing gets indexed, but the response comes back 200. I'm lost on where I should be looking. I have a different class that also writes to a different index, also getting a payload from Kafka directly (all separate consumers), and that one works just fine.
I suspect it has something to do with the way my client is set up and/or the Kafka listener. Please point me in the right direction.
I solved it by deleting the default constructor. If I put it back, it takes precedence over the injecting constructor (or Spring simply never calls the injecting constructor), so my client was always null. The error message was extremely misleading, since it actually wasn't Kafka's fault!
Removing the default constructor makes Spring use the correct constructor, and I was able to index again. I assume this was a Spring Boot bean-initialization related "issue".
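Put differently, a condensed sketch of the fix (reusing the names from the question's snippet):

@Service
public class ThisDtoIndexClass extends ConfigAndProperties {

    // No default constructor any more: Spring now has to use this constructor
    // for injection, so the inherited client field is always initialized.
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }

    // ... the @KafkaListener method stays unchanged ...
}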

Spring Boot with Redis

I'm working with Spring Boot and Redis for caching. I can cache data fetched from the database (Oracle) using @Cacheable(key = "{#input,#page,#size}", value = "on_test").
But when I try to fetch the data for the key "on_test::0,0,10" with RedisTemplate, the result is empty.
Why?
Redis Config:
@Configuration
public class RedisConfig {

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration("localhost", 6379);
        redisStandaloneConfiguration.setPassword(RedisPassword.of("admin#123"));
        return new JedisConnectionFactory(redisStandaloneConfiguration);
    }

    @Bean
    public RedisTemplate<String, Objects> redisTemplate() {
        RedisTemplate<String, Objects> template = new RedisTemplate<>();
        template.setStringSerializer(new StringRedisSerializer());
        template.setValueSerializer(new StringRedisSerializer());
        template.setConnectionFactory(jedisConnectionFactory());
        return template;
    }
}

// service
@Override
@Cacheable(key = "{#input,#page,#size}", value = "on_test")
public Page<?> getAllByZikaConfirmedClinicIs(Integer input, int page, int size) {
    try {
        Pageable newPage = PageRequest.of(page, size);
        String fromCache = controlledCacheService.getFromCache();
        if (fromCache == null && input != null) {
            log.info("cache is empty, let's initialize it!!!");
            Page<DataSet> all = dataSetRepository.getAllByZikaConfirmedClinicIs(input, newPage);
            List<DataSet> d = redisTemplate.opsForHash().values("on_test::0,0,10");
            System.out.print(d);
            return all;
        }
        return null;
The whole point of using @Cacheable is that you don't need to use RedisTemplate directly. You just need to call getAllByZikaConfirmedClinicIs() (from outside of the class it is defined in) and Spring will automatically check first whether a cached result is available and return that instead of calling the method.
If that's not working, have you annotated one of your Spring Boot configuration classes with @EnableCaching to enable caching?
You might also need to set spring.cache.type=REDIS in application.properties, or spring.cache.type: REDIS in application.yml, to ensure Spring is using Redis and not some other cache provider.
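For illustration, the configuration class the answer refers to (a minimal sketch; the class name is arbitrary), to be combined with spring.cache.type=REDIS in application.properties as suggested above:

@Configuration
@EnableCaching // enables Spring's cache abstraction so @Cacheable is actually honoured
public class CachingConfig {
}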

Spring Integration - Dynamic MailReceiver configuration

I'm pretty new to Spring Integration; anyway, I'm using it in order to receive mails and process them.
I used this Spring configuration class:
@Configuration
@EnableIntegration
@PropertySource(value = { "classpath:configuration.properties" }, encoding = "UTF-8", ignoreResourceNotFound = false)
public class MailReceiverConfiguration {

    private static final Log logger = LogFactory.getLog(MailReceiverConfiguration.class);

    @Autowired
    private EmailTransformerService emailTransformerService;

    // AE configuration
    @Bean
    public MessageChannel inboundChannelAE() {
        return new DirectChannel();
    }

    @Bean(name = {"aeProps"})
    public Properties aeProps() {
        Properties javaMailPropertiesAE = new Properties();
        javaMailPropertiesAE.put("mail.store.protocol", "imap");
        javaMailPropertiesAE.put("mail.debug", Boolean.TRUE);
        javaMailPropertiesAE.put("mail.auth.debug", Boolean.TRUE);
        javaMailPropertiesAE.put("mail.smtp.socketFactory.fallback", "false");
        javaMailPropertiesAE.put("mail.imap.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
        return javaMailPropertiesAE;
    }

    @Bean(name = "mailReceiverAE")
    public MailReceiver mailReceiverAE(@Autowired MailConfigurationBean mcb, @Autowired @Qualifier("aeProps") Properties javaMailPropertiesAE) throws Exception {
        return ConfigurationUtil.getMailReceiver("imap://USERNAME:PASSWORD@MAILSERVER:PORT/INBOX", new BigDecimal(2), javaMailPropertiesAE);
    }

    @Bean
    @InboundChannelAdapter(autoStartup = "true",
            channel = "inboundChannelAE",
            poller = { @Poller(fixedRate = "${fixed.rate.ae}",
                    maxMessagesPerPoll = "${max.messages.per.poll.ae}") })
    public MailReceivingMessageSource pollForEmailAE(@Autowired MailReceiver mailReceiverAE) {
        MailReceivingMessageSource mrms = new MailReceivingMessageSource(mailReceiverAE);
        return mrms;
    }

    @Transformer(inputChannel = "inboundChannelAE", outputChannel = "transformerChannelAE")
    public MessageBean transformitAE(MimeMessage mailMessage) throws Exception {
        // email inbox administrator
        MessageBean messageBean = emailTransformerService.transformit(mailMessage);
        return messageBean;
    }

    @Splitter(inputChannel = "transformerChannelAE", outputChannel = "nullChannel")
    public List<Message<?>> splitIntoMessagesAE(final MessageBean mb) {
        final List<Message<?>> messages = new ArrayList<Message<?>>();
        for (EmailFragment emailFragment : mb.getEmailFragments()) {
            Message<?> message = MessageBuilder.withPayload(emailFragment.getData())
                    .setHeader(FileHeaders.FILENAME, emailFragment.getFilename())
                    .setHeader("directory", emailFragment.getDirectory()).build();
            messages.add(message);
        }
        return messages;
    }
}
So far so good: I start my microservice, the component listens on the specified mail server, and mails are downloaded.
Now I have this requirement: the mail server configuration (I mean the string "imap://USERNAME:PASSWORD@MAILSERVER:PORT/INBOX") must be taken from a database and be configurable. At any time a system administrator can change it, and the mail receiver must use the new configuration.
As far as I understand, I should create a new instance of MailReceiver when a new configuration is present and use it in the InboundChannelAdapter.
Is there any best practice for doing this? I found this solution: ImapMailReceiver NO STORE attempt on READ-ONLY folder (Failure) [THROTTLED].
In that solution I can inject the ThreadPoolTaskScheduler if I define it in my configuration class; I can also inject the DirectChannel, but every time I would have to create a new MailReceiver and a new ImapIdleChannelAdapter, not to mention this WARN message I get when the ImapIdleChannelAdapter starts:
java.lang.RuntimeException: No beanfactory
    at org.springframework.integration.expression.ExpressionUtils.createStandardEvaluationContext(ExpressionUtils.java:79)
    at org.springframework.integration.mail.AbstractMailReceiver.onInit(AbstractMailReceiver.java:403)
Is there a better way to satisfy my scenario?
Thank you
Angelo
The best way to do this is to use the Java DSL and dynamic flow registration.
Documentation here.
That way, you can unregister the old flow and register a new one each time the configuration changes.
It will automatically handle injecting dependencies such as the bean factory.
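A minimal sketch of that approach (not from the original answer; the IMAP URL handling, poller settings, and channel name are assumptions mirroring the question's configuration), using IntegrationFlowContext to swap the mail flow at runtime:

@Autowired
private IntegrationFlowContext flowContext;

private IntegrationFlowContext.IntegrationFlowRegistration registration;

public void applyMailConfiguration(String imapUrl) {
    // Unregister the previous flow, if one was registered.
    if (registration != null) {
        flowContext.remove(registration.getId());
    }
    IntegrationFlow flow = IntegrationFlows
            .from(Mail.imapInboundAdapter(imapUrl)
                            .javaMailProperties(p -> p.put("mail.store.protocol", "imap")),
                    e -> e.poller(Pollers.fixedRate(5000).maxMessagesPerPoll(1)))
            .channel("inboundChannelAE")
            .get();
    registration = flowContext.registration(flow).register();
}

Calling applyMailConfiguration(...) again with the new value read from the database replaces the running receiver, and the flow context takes care of wiring the bean factory into the registered components.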

Spark Streaming using Spring Boot

Spring Boot is new to me as a non-web project. Please guide me on how to code Spark Streaming in Spring Boot; I have already worked on a Java/Spark project and want to convert it into a Spring Boot non-web application. Any help or suggestions, please.
Here is my Spark config
@Bean
public SparkConf sparkConf() {
    SparkConf sparkConf = new SparkConf();
    sparkConf.set("spark.app.name", "SparkReceiver"); // The name of the application. This will appear in the UI and in log data.
    //sparkConf.set("spark.ui.port", "7077"); // Port for the application's dashboard, which shows memory and workload data.
    sparkConf.set("dynamicAllocation.enabled", "false"); // Scales the number of executors registered with this application up and down based on the workload.
    //sparkConf.set("spark.cassandra.connection.host", "localhost"); // Cassandra host address/IP.
    sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer"); // For serializing objects that will be sent over the network or need to be cached in serialized form.
    sparkConf.set("spark.driver.allowMultipleContexts", "true");
    sparkConf.setMaster("local[4]");
    return sparkConf;
}

@Bean
public JavaSparkContext javaSparkContext() {
    return new JavaSparkContext(sparkConf());
}

@Bean
public SparkSession sparkSession() {
    return SparkSession
            .builder()
            .sparkContext(javaSparkContext().sc())
            .appName("Java Spark SQL basic example")
            .getOrCreate();
}

@Bean
public JavaStreamingContext javaStreamingContext() {
    return new JavaStreamingContext(sparkConf(), new Duration(2000));
}
Here is my testing class
@Autowired
private JavaSparkContext sc;

@Autowired
private SparkSession session;

public void testMessage() throws InterruptedException {
    JavaStreamingContext jsc = new JavaStreamingContext(sc, new Duration(2000));
    Map<String, String> kafkaParams = new HashMap<String, String>();
    kafkaParams.put("zookeeper.connect", "localhost:2181"); // Make all Kafka data for this cluster appear under a particular path.
    kafkaParams.put("group.id", "testgroup"); // String that uniquely identifies the group of consumer processes to which this consumer belongs.
    kafkaParams.put("metadata.broker.list", "localhost:9092"); // The producer can find one or more brokers to determine the leader for each topic.
    kafkaParams.put("serializer.class", "kafka.serializer.StringEncoder"); // Serializer to use when preparing the message for transmission to the broker.
    kafkaParams.put("request.required.acks", "1"); // The producer requires an acknowledgement from the broker that the message was received.
    Set<String> topics = Collections.singleton("16jnfbtopic");

    // Create an input DStream for receiving data from Kafka
    JavaPairInputDStream<String, String> directKafkaStream = KafkaUtils.createDirectStream(jsc,
            String.class,
            String.class,
            StringDecoder.class,
            StringDecoder.class,
            kafkaParams, topics);

    // Create JavaDStream<String>
    JavaDStream<String> msgDataStream = directKafkaStream.map(new Function<Tuple2<String, String>, String>() {
        @Override
        public String call(Tuple2<String, String> tuple2) {
            return tuple2._2();
        }
    });

    // Create JavaRDD<Row>
    msgDataStream.foreachRDD(new VoidFunction<JavaRDD<String>>() {
        @Override
        public void call(JavaRDD<String> rdd) {
            JavaRDD<Row> rowRDD = rdd.map(new Function<String, Row>() {
                @Override
                public Row call(String msg) {
                    Row row = RowFactory.create(msg);
                    return row;
                }
            });
            // Create schema
            StructType schema = DataTypes.createStructType(new StructField[] { DataTypes.createStructField("Message", DataTypes.StringType, true) });
            Dataset<Row> msgDataFrame = session.createDataFrame(rowRDD, schema);
            msgDataFrame.show();
        }
    });
    jsc.start();
    jsc.awaitTermination();
}
When I run this app I get an error; please guide me.
Here is my error log:
Eclipse Error Log
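Not part of the original post, but a minimal sketch (class and bean names are assumptions) of how a non-web Spring Boot application could bootstrap the streaming job, which is usually the first step when converting a plain Java/Spark project:

@SpringBootApplication
public class SparkStreamingApplication {

    public static void main(String[] args) {
        // Non-web application: no embedded servlet container is started.
        new SpringApplicationBuilder(SparkStreamingApplication.class)
                .web(WebApplicationType.NONE)
                .run(args);
    }

    // Runs the streaming job once the Spring context is ready.
    @Bean
    public CommandLineRunner streamingRunner(JavaStreamingContext jsc) {
        return args -> {
            // ... build the Kafka DStream pipeline here, as in the testing class above ...
            jsc.start();
            jsc.awaitTermination();
        };
    }
}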
