Lettuce reactive connection not established - spring

I'm trying to establish a reactive connection via Lettuce.
Connection factory:
@Bean
public LettuceConnectionFactory redisConnectionFactory() {
    return new LettuceConnectionFactory();
}
Reactive Redis Template
@Bean
public ReactiveRedisTemplate<String, Object> reactiveRedisTemplate(ReactiveRedisConnectionFactory connectionFactory) {
    KryoSerializer<String> kryoSerializer = new KryoSerializer<>();
    RedisSerializationContext<String, Object> serializationContext = RedisSerializationContext
            .<String, Object>newSerializationContext(new StringRedisSerializer())
            .hashKey(new StringRedisSerializer())
            .hashValue(kryoSerializer)
            .build();
    return new ReactiveRedisTemplate<>(connectionFactory, serializationContext);
}
After debugging the code, I found that the reactive connection is not established. Does anyone have a correct configuration for a Redis connection via Lettuce?
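For what it's worth, Lettuce usually opens the connection lazily, i.e. only once the first command is sent, so one way to check whether a connection can actually be made is to force a PING after the context is up. A minimal sketch (the runner bean is made up for illustration):
@Bean
ApplicationRunner redisPingCheck(ReactiveRedisConnectionFactory factory) {
    // Expect "PONG" here if the connection can be established.
    return args -> factory.getReactiveConnection()
            .ping()
            .doOnNext(pong -> System.out.println("Redis replied: " + pong))
            .doOnError(e -> System.err.println("Redis connection failed: " + e))
            .subscribe();
}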

If you are using Spring Boot, try configuring the connection in application.yaml or application.properties. See the example below:
spring:
  redis:
    host: 127.0.0.1
    port: 6379
    timeout: 200
    lettuce:
      pool:
        max-active: 16
        max-idle: 16
        min-idle: 8
        time-between-eviction-runs: 9000
This is one way to configure Redis over Lettuce. The values above are indicative; use configuration values that match your needs.
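If you prefer to keep the configuration in Java instead of properties, roughly the same thing can be expressed by passing an explicit RedisStandaloneConfiguration to the factory. A sketch, reusing the host, port and timeout from the example above:
@Bean
public LettuceConnectionFactory redisConnectionFactory() {
    // Explicit host/port instead of relying on the no-arg defaults.
    RedisStandaloneConfiguration serverConfig = new RedisStandaloneConfiguration("127.0.0.1", 6379);
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .commandTimeout(Duration.ofMillis(200))
            .build();
    return new LettuceConnectionFactory(serverConfig, clientConfig);
}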

It seems like I am trapped in the same situation as you. I used Lettuce to connect to the Azure Redis service and the RedisTemplate rightPop function with a limited waiting time; my YAML config for Redis has timeout: 600000.
If the rightPop time-limit parameter is more than 600000 ms and the queue in the connection pool blocks, the connection with Redis will not be re-established.
Finally we changed from Lettuce to Jedis, and the error never appeared again.
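For reference, switching clients in Spring Data Redis mostly comes down to swapping the connection factory bean; note that Jedis has no reactive connection factory, so this only helps for blocking RedisTemplate usage. A rough sketch with assumed host and port:
@Bean
public JedisConnectionFactory redisConnectionFactory() {
    // Same standalone server settings, just backed by Jedis instead of Lettuce.
    RedisStandaloneConfiguration serverConfig = new RedisStandaloneConfiguration("127.0.0.1", 6379);
    return new JedisConnectionFactory(serverConfig);
}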

Related

Spring ActiveMQ use JDBC with tcp

Given that the only requirement I have is to use an instance of ActiveMQ, how would I make my ActiveMQ use a JDBC connection without creating an embedded one with VM transport?
This is my factory bean:
@Bean
public ActiveMQConnectionFactory connectionFactory() {
    logger.info("ActiveMQConnectionFactory");
    ActiveMQConnectionFactory activeMQConnectionFactory = new ActiveMQConnectionFactory(brokerUrl);
    activeMQConnectionFactory.setTrustAllPackages(true);
    RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy();
    redeliveryPolicy.setRedeliveryDelay(15000);
    redeliveryPolicy.setMaximumRedeliveries(-1);
    activeMQConnectionFactory.setRedeliveryPolicy(redeliveryPolicy);
    return activeMQConnectionFactory;
}
I have an ActiveMQ image exposed at the URL tcp://0.0.0.0:61616, but even with the JDBC adapter configured as shown below I'm not able to persist messages in the SQL Server; ActiveMQ ignores this and uses KahaDB as the default. The only way I found to use JDBC is to change from tcp to vm:localhost, but this creates an embedded ActiveMQ.
@Bean
public BrokerService broker(DataSource dataSource, ActiveMQConnectionFactory activeMQConnectionFactory) throws Exception {
    logger.info("BrokerService");
    final BrokerService broker = new BrokerService();
    JDBCPersistenceAdapter jdbc = new JDBCPersistenceAdapter(dataSource, new OpenWireFormat());
    jdbc.setUseLock(true);
    Statements statements = jdbc.getStatements();
    statements.setBinaryDataType(BINARY_DATA_TYPE);
    broker.setUseJmx(true);
    broker.setPersistent(true);
    broker.setPersistenceAdapter(jdbc);
    broker.addConnector(format("vm:(broker:(tcp://localhost:61616,network:static:%s)?persistent=true)", brokerUrl));
    logger.info("BrokerService URL: " + broker.getTransportConnectors().get(0).getConnectUri().toString());
    return broker;
}
I recommend using an XML file to configure the broker; it is easier to manage than coding up a broker.
ActiveMQ JDBC info:
ref: https://activemq.apache.org/jdbc-support
Persistence Adapter info:
ref: https://activemq.apache.org/persistence
Starting an embedded broker referencing an xml file:
ref: https://activemq.apache.org/vm-transport-reference
specifically:
vm://localhost?brokerConfig=xbean:activemq.xml
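Applied to the factory bean from the question, that could look roughly like the sketch below; it assumes an activemq.xml (declaring the jdbcPersistenceAdapter and the TCP transport connector) is available on the classpath:
@Bean
public ActiveMQConnectionFactory connectionFactory() {
    // The embedded broker is started from activemq.xml, which holds the
    // JDBC persistence and transport configuration instead of Java code.
    return new ActiveMQConnectionFactory("vm://localhost?brokerConfig=xbean:activemq.xml");
}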

Spring WebFlux SSE connection kept alive by Kubernetes NGINX Ingress after client disconnects

I have a Spring Boot service that streams updates to the client using Server-Sent Events (SSE). The endpoint to which the client connects is implemented using Spring WebFlux.
To clean up resources (delete an AMQP queue) my service needs to detect when a client closes the EventSource, i.e. terminates the connection. To do so, I register a callback via FluxSink#onDispose(Disposable). Naturally, my SSE Flux sends regular heartbeats to not only prevent the connection from timing out but also to trigger onDispose once the client has disconnected.
@Nonnull
@Override
public Flux<ServerSentEvent<?>> subscribeToNotifications(@Nonnull String queueName) {
    final var queue = createQueue(queueName);
    final var listenerContainer = createListenerContainer(queueName);
    final var notificationStream = createNotificationStream(queueName, listenerContainer);
    return notificationStream
            .mergeWith(heartbeatStream)
            .map(NotificationServiceImpl::toServerSentEvent);
}
@Nonnull
private Flux<NotificationDto> createNotificationStream(
        @Nonnull String queueName,
        @Nonnull MessageListenerContainer listenerContainer) {
    return Flux.create(emitter -> {
        listenerContainer.setupMessageListener(message -> handleAmqpMessage(message, emitter));
        emitter.onRequest(consumer -> listenerContainer.start());
        emitter.onDispose(() -> {
            final var deleted = amqpAdmin.deleteQueue(queueName);
            if (deleted) {
                LOGGER.info("Queue {} successfully deleted", queueName);
            } else {
                LOGGER.warn("Failed to delete queue {}", queueName);
            }
            listenerContainer.stop();
        });
    });
}
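The heartbeatStream referenced above is not shown in the question; it is essentially a ticking Flux mapped to a no-op notification, along these lines (a sketch; the interval and the NotificationDto.heartbeat() factory are made up):
// Emits a keep-alive element periodically so the connection stays active
// and onDispose can fire once the subscriber is gone.
private final Flux<NotificationDto> heartbeatStream =
        Flux.interval(Duration.ofSeconds(15))
                .map(tick -> NotificationDto.heartbeat());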
This works like a charm locally; the queue is deleted/messages are logged once the client disconnects.
However, when deploying this service to my Kubernetes cluster, onDispose is never called. The SSE stream still works flawlessly, i.e. the client receives all data from the server and the connection is kept alive by the heartbeat.
I'm using an NGINX Ingress Controller to expose my service, and it seems as if the connection between NGINX and my service is kept alive even after the client disconnects, causing onDispose to never be called. Hence I tried setting the upstream keep-alive connections to 0, but that didn't solve the problem - the service is never notified that the client has closed the connection:
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.6
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.4
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
  http-snippet: |
    server{
      listen 2443;
      return 308 https://$host$request_uri;
    }
  proxy-real-ip-cidr: 192.168.0.0/16
  use-forwarded-headers: 'true'
  upstream-keepalive-connections: '0' # added this line
What am I missing?
Kubernetes version: 1.21
NGINX Ingress Controller version: 1.0.4 (YAML) with TLS termination on load balancer
Spring Boot version: 2.5.4

Spring Boot app cannot connect to Redis Replica in Docker

I am experiencing a weird issue with Redis connectivity in Docker.
I have a simple Spring Boot application with a Master-Replica configuration, as well as a docker-compose config which I use to start the Redis Master and the Redis Replica.
If I start Redis via docker-compose and the Spring Boot app as a plain Java process outside of Docker, everything works fine. It can successfully connect to both Master and Replica via localhost.
If I launch the Spring Boot app as a Docker container alongside the Redis containers, it can successfully connect to the Master, but not to the Replica. That means I can write to and read from the Master node, but when I try to read from the Replica I get the following error:
redis-sample-app_1 | Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: redis-replica-a/172.31.0.2:7001
redis-sample-app_1 | Caused by: java.net.ConnectException: Connection refused
redis-sample-app_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_302]
In the redis.conf I've changed the following to bind it to all network interfaces:
bind * -::*
protected-mode no
docker-compose.yml
version: "3"
services:
redis-master:
image: redis:alpine
command: redis-server --include /usr/local/etc/redis/redis.conf
volumes:
- ./conf/redis-master.conf:/usr/local/etc/redis/redis.conf
ports:
- "6379:6379"
redis-replica-a:
image: redis:alpine
command: redis-server --include /usr/local/etc/redis/redis.conf
volumes:
- ./conf/redis-replica.conf:/usr/local/etc/redis/redis.conf
ports:
- "7001:6379"
redis-sample-app:
image: docker.io/library/redis-sample:0.0.1-SNAPSHOT
environment:
- SPRING_REDIS_HOST=redis-master
- SPRING_REDIS_PORT=6379
- SPRING_REDIS_REPLICAS=redis-replica-a:7001
ports:
- "9080:8080"
depends_on:
- redis-master
- redis-replica-a
application.yml
spring:
  redis:
    port: 6379
    host: localhost
    replicas: localhost:7001
RedisConfig.java
@Configuration
class RedisConfig {

    private static final Logger LOG = LoggerFactory.getLogger(RedisConfig.class);

    @Value("${spring.redis.replicas:}")
    private String replicasProperty;

    private final RedisProperties redisProperties;

    public RedisConfig(RedisProperties redisProperties) {
        this.redisProperties = redisProperties;
    }

    @Bean
    public StringRedisTemplate masterReplicaRedisTemplate(LettuceConnectionFactory connectionFactory) {
        return new StringRedisTemplate(connectionFactory);
    }

    @Bean
    public LettuceConnectionFactory masterReplicaLettuceConnectionFactory(LettuceClientConfiguration lettuceConfig) {
        LOG.info("Master: {}:{}", redisProperties.getHost(), redisProperties.getPort());
        LOG.info("Replica property: {}", replicasProperty);
        RedisStaticMasterReplicaConfiguration configuration = new RedisStaticMasterReplicaConfiguration(redisProperties.getHost(), redisProperties.getPort());
        if (StringUtils.hasText(replicasProperty)) {
            List<RedisURI> replicas = Arrays.stream(this.replicasProperty.split(",")).map(this::toRedisURI).collect(Collectors.toList());
            LOG.info("Replica nodes: {}", replicas);
            replicas.forEach(replica -> configuration.addNode(replica.getHost(), replica.getPort()));
        }
        return new LettuceConnectionFactory(configuration, lettuceConfig);
    }

    @Scope("prototype")
    @Bean(destroyMethod = "shutdown")
    ClientResources clientResources() {
        return DefaultClientResources.create();
    }

    @Scope("prototype")
    @Bean
    LettuceClientConfiguration lettuceConfig(ClientResources dcr) {
        ClientOptions options = ClientOptions.builder()
                .timeoutOptions(TimeoutOptions.builder().fixedTimeout(Duration.of(5, ChronoUnit.SECONDS)).build())
                .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
                .autoReconnect(true)
                .build();
        return LettuceClientConfiguration.builder()
                .readFrom(ReadFrom.REPLICA_PREFERRED)
                .clientOptions(options)
                .clientResources(dcr)
                .build();
    }

    @Bean
    StringRedisTemplate redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        return new StringRedisTemplate(redisConnectionFactory);
    }

    private RedisURI toRedisURI(String url) {
        String[] split = url.split(":");
        String host = split[0];
        int port;
        if (split.length > 1) {
            port = Integer.parseInt(split[1]);
        } else {
            port = 6379;
        }
        return RedisURI.create(host, port);
    }
}
Please advise how to continue with troubleshooting.
When running everything (Redis, the replica and the Spring app) inside the Docker network, you should be using port 6379 instead of 7001.
Port 7001 can be used to connect to the container from outside, but here you are trying to connect from container to container.
So change your environment variable to:
SPRING_REDIS_REPLICAS=redis-replica-a:6379

Spring Boot JMS DefaultListenerContainer occasionally drops connection and is not auto-recovered with Tibco EMS

The issue is similar to the one mentioned at Spring JMS Consumers to a TIBCO EMS Server expire on their own, and we have to restart our Spring Boot application to re-establish the connection.
Below is the code snippet we are using for the listener configuration:
public JmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    factory.setMaxMessagesPerTask(5);
    factory.setConcurrency("5");
    return factory;
}
And the connection factory:
@Bean
public ConnectionFactory connectionFactory() {
    ConnectionFactory connectionFactory = null;
    Tibjms.setPingInterval(10);
    try {
        TibjmsConnectionFactory tibjmsConnectionFactory = new TibjmsConnectionFactory(
                environment.getProperty("url"));
        // few more statements to set other properties
        connectionFactory = tibjmsConnectionFactory;
    } catch (Exception ex) {
    }
    return connectionFactory;
}
The issue is observed during VPN failovers; we have an active and a failover VPN connection. When the VPN switches, netstat at the application end shows the connection as established, but netstat at the EMS end indicates the connection is terminated or not found after a few minutes, i.e. there is no listener at the EMS end.
We are using the DefaultListenerContainer factory, which is supposed to poll and refresh the connection if it is terminated, but it is unable to do so and we have to restart the server.
We suspect that, due to some configuration issue at the VPN end, the DefaultListenerContainer is not able to detect that the connection has been terminated and therefore cannot refresh the JMS connection.
Please let me know if there are any other parameters or properties that can help the DefaultListenerContainer identify such scenarios.
If you look at the TIBCO EMS documentation: https://docs.tibco.com/pub/ems/8.5.1/doc/html/api/javadoc/com/tibco/tibjms/TibjmsConnectionFactory.html
you can see that there are parameters to manage reconnections:
setConnAttemptCount(int attempts)
setConnAttemptDelay(int delay)
setConnAttemptTimeout(int timeout)
setReconnAttemptCount(int attempts)
setReconnAttemptDelay(int delay)
setReconnAttemptTimeout(int timeout)
As an example you can use the following values (delay and timeout are in milliseconds):
setConnAttemptCount(60)
setConnAttemptDelay(2000)
setConnAttemptTimeout(1000)
setReconnAttemptCount(120)
setReconnAttemptDelay(2000)
setReconnAttemptTimeout(1000)
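Applied to the connection factory bean from the question, that could look roughly like this (a sketch using the indicative values above):
TibjmsConnectionFactory tibjmsConnectionFactory = new TibjmsConnectionFactory(
        environment.getProperty("url"));
// Initial connection attempts (delay and timeout in milliseconds).
tibjmsConnectionFactory.setConnAttemptCount(60);
tibjmsConnectionFactory.setConnAttemptDelay(2000);
tibjmsConnectionFactory.setConnAttemptTimeout(1000);
// Reconnection attempts after an established connection is lost.
tibjmsConnectionFactory.setReconnAttemptCount(120);
tibjmsConnectionFactory.setReconnAttemptDelay(2000);
tibjmsConnectionFactory.setReconnAttemptTimeout(1000);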
You can also define reconnection parameters in the Connection Factory definition, for example:
[QueueConnectionFactory]
type = queue
url = tcp://serveur1:7222,tcp://serveur2:7222
connect_attempt_count = 60
connect_attempt_delay = 2000
connect_attempt_timeout = 1000
reconnect_attempt_count = 120
reconnect_attempt_delay = 2000
reconnect_attempt_timeout= 1000
You may adjust the values of these parameters to cope with network issues that last a long time.
Note also that, for the EMS client library to detect the loss of the connection to the EMS server and trigger the reconnection mechanism, you need the following parameters in the EMS tibemsd.conf file (durations in seconds here):
client_heartbeat_server = 20
server_timeout_client_connection = 90
server_heartbeat_client = 20
client_timeout_server_connection = 90
The above should resolve your issue, but I recommend testing in order to adjust the values of the reconnection parameters.

How to use JedisConfig pool efficiently without increasing number of connections more than maxtotal?

The Jedis pool is not working as expected. I have set the maximum number of active connections to 10, but it allows even more than 10 connections.
I have overridden the getConnection() method from RedisConnectionFactory. This method has been called almost 30 times to get a connection.
I have configured the Jedis pool config as shown below.
Can someone please help me understand why it is creating more connections than maxTotal? And can someone please also help me with closing the Jedis connection pool?
@Configuration
public class RedisConfiguration {

    @Bean
    public RedisTenantDataFactory redisTenantDataFactory() {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxIdle(1);
        poolConfig.setMaxTotal(10);
        poolConfig.setBlockWhenExhausted(true);
        poolConfig.setMaxWaitMillis(10);
        JedisConnectionFactory jedisConnectionFactory = new JedisConnectionFactory(poolConfig);
        jedisConnectionFactory.setHostName(redisHost);
        jedisConnectionFactory.setUsePool(true);
        jedisConnectionFactory.setPort(Integer.valueOf(redisPort));
    }
#####
    @Bean
    public RedisTemplate<String, Object> redisTemplate(@Autowired RedisConnectionFactory redisConnectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(redisConnectionFactory);
        template.afterPropertiesSet();
        return template;
    }
}
I have overridden getConnection() method from RedisConnectionFactory. This method has been called almost for 30 times for getting the connection.
It is probably a misunderstanding of the connection pool behaviour; I'm guessing, since I don't have the details of how you are using the pool in your application.
So your pool is configured as follows:
...
poolConfig.setMaxIdle(1);
poolConfig.setMaxTotal(10);
poolConfig.setBlockWhenExhausted(true)
...
This means, as you expect, that you will not have more than 10 active connections from this specific pool to Redis.
You can check the number of clients (open connections) from Redis itself using RedisInsight or the command CLIENT LIST; you will see that you do not have more than 10 connections coming from this JVM.
The fact that you see many calls to getConnection() is simply because your application calls it each time a connection is needed.
This does NOT mean "open a new connection"; it means "give me a connection from the pool", and your configuration defines the behaviour, as follows:
poolConfig.setMaxIdle(1) => at most 1 idle connection is kept open and available for your application when it is not in use. It is important to choose a good number here, since creating a new connection takes time and resources (1 is probably too low in a normal application).
poolConfig.setMaxTotal(10) => the pool will not have more than 10 connections open at the same time. So you MUST define what happens when all 10 are in use and your app needs one more. This is where the next setting comes in:
poolConfig.setBlockWhenExhausted(true) => if your application is already using all 10 "active" connections and it calls getConnection(), the call will "block" until one of the 10 connections is returned to the pool.
So "blocking" is probably not a very good idea... (but once again it depends on your application)
Maybe you are wondering why your application calls getConnection() 30 times, and why it does not stop/block at 10...
That is because your code is good ;). What I mean is that your application:
1- takes one active connection from the pool (Jedis jedis = pool.getResource();)
2- uses the jedis connection as much as needed
3- closes the connection with jedis.close() (this does not necessarily close the real connection; it returns the connection to the pool, and the pool can reuse it or close it, depending on the application/configuration)
Does it make sense?
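For completeness, the pool used in the code below would be a plain JedisPool built from the same configuration (a sketch; host and port are assumptions):
JedisPoolConfig poolConfig = new JedisPoolConfig();
poolConfig.setMaxIdle(1);
poolConfig.setMaxTotal(10);
poolConfig.setBlockWhenExhausted(true);
poolConfig.setMaxWaitMillis(10);
// Assumed host/port for the sketch.
JedisPool pool = new JedisPool(poolConfig, "localhost", 6379);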
Usually you will work with the following code:
// Jedis implements Closeable, so the jedis instance is auto-closed after the last statement.
try (Jedis jedis = pool.getResource()) {
    // ... do stuff here ... for example:
    jedis.set("foo", "bar");
    String foobar = jedis.get("foo");
    jedis.zadd("sose", 0, "car");
    jedis.zadd("sose", 0, "bike");
    Set<String> sose = jedis.zrange("sose", 0, -1);
}
// ... when closing your application:
pool.close();
You can find more information about JedisPool and Apache Commons Pool here:
Getting-started
Apache Commons Pool
