Spring Boot app cannot connect to Redis Replica in Docker - spring-boot

I am experiencing a weird issue with Redis connectivity in Docker.
I have a simple Spring Boot application with a Master-Replica configuration, as well as a docker-compose config which I use to start the Redis Master and the Redis Replica.
If I start Redis via docker-compose and run the Spring Boot app as a plain Java process outside of Docker, everything works fine: it can successfully connect to both Master and Replica via localhost.
If I launch the Spring Boot app as a Docker container alongside the Redis containers, it can successfully connect to Master, but not to Replica. That means I can write to and read from the Master node, but when I try to read from the Replica I get the following error:
redis-sample-app_1 | Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: redis-replica-a/172.31.0.2:7001
redis-sample-app_1 | Caused by: java.net.ConnectException: Connection refused
redis-sample-app_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_302]
In the redis.conf I've changed the following to bind it to all network interfaces:
bind * -::*
protected-mode no
docker-compose.yml
version: "3"
services:
  redis-master:
    image: redis:alpine
    command: redis-server --include /usr/local/etc/redis/redis.conf
    volumes:
      - ./conf/redis-master.conf:/usr/local/etc/redis/redis.conf
    ports:
      - "6379:6379"
  redis-replica-a:
    image: redis:alpine
    command: redis-server --include /usr/local/etc/redis/redis.conf
    volumes:
      - ./conf/redis-replica.conf:/usr/local/etc/redis/redis.conf
    ports:
      - "7001:6379"
  redis-sample-app:
    image: docker.io/library/redis-sample:0.0.1-SNAPSHOT
    environment:
      - SPRING_REDIS_HOST=redis-master
      - SPRING_REDIS_PORT=6379
      - SPRING_REDIS_REPLICAS=redis-replica-a:7001
    ports:
      - "9080:8080"
    depends_on:
      - redis-master
      - redis-replica-a
application.yml
spring:
  redis:
    port: 6379
    host: localhost
    replicas: localhost:7001
RedisConfig.java
@Configuration
class RedisConfig {

    private static final Logger LOG = LoggerFactory.getLogger(RedisConfig.class);

    @Value("${spring.redis.replicas:}")
    private String replicasProperty;

    private final RedisProperties redisProperties;

    public RedisConfig(RedisProperties redisProperties) {
        this.redisProperties = redisProperties;
    }

    @Bean
    public StringRedisTemplate masterReplicaRedisTemplate(LettuceConnectionFactory connectionFactory) {
        return new StringRedisTemplate(connectionFactory);
    }

    @Bean
    public LettuceConnectionFactory masterReplicaLettuceConnectionFactory(LettuceClientConfiguration lettuceConfig) {
        LOG.info("Master: {}:{}", redisProperties.getHost(), redisProperties.getPort());
        LOG.info("Replica property: {}", replicasProperty);
        RedisStaticMasterReplicaConfiguration configuration =
                new RedisStaticMasterReplicaConfiguration(redisProperties.getHost(), redisProperties.getPort());
        if (StringUtils.hasText(replicasProperty)) {
            List<RedisURI> replicas = Arrays.stream(this.replicasProperty.split(","))
                    .map(this::toRedisURI)
                    .collect(Collectors.toList());
            LOG.info("Replica nodes: {}", replicas);
            replicas.forEach(replica -> configuration.addNode(replica.getHost(), replica.getPort()));
        }
        return new LettuceConnectionFactory(configuration, lettuceConfig);
    }

    @Scope("prototype")
    @Bean(destroyMethod = "shutdown")
    ClientResources clientResources() {
        return DefaultClientResources.create();
    }

    @Scope("prototype")
    @Bean
    LettuceClientConfiguration lettuceConfig(ClientResources dcr) {
        ClientOptions options = ClientOptions.builder()
                .timeoutOptions(TimeoutOptions.builder().fixedTimeout(Duration.of(5, ChronoUnit.SECONDS)).build())
                .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
                .autoReconnect(true)
                .build();
        return LettuceClientConfiguration.builder()
                .readFrom(ReadFrom.REPLICA_PREFERRED)
                .clientOptions(options)
                .clientResources(dcr)
                .build();
    }

    @Bean
    StringRedisTemplate redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        return new StringRedisTemplate(redisConnectionFactory);
    }

    private RedisURI toRedisURI(String url) {
        String[] split = url.split(":");
        String host = split[0];
        int port;
        if (split.length > 1) {
            port = Integer.parseInt(split[1]);
        } else {
            port = 6379;
        }
        return RedisURI.create(host, port);
    }
}
Please advise how to continue with troubleshooting.

When running everything (Redis master, replica and the Spring app) inside the Docker network you should use port 6379 instead of 7001.
Port 7001 is only published so that you can reach the replica container from outside Docker; here you are connecting from container to container, so the internal port 6379 applies.
So change your environment variable to
SPRING_REDIS_REPLICAS=redis-replica-a:6379
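The distinction is visible in the port mapping of the compose file from the question; the annotated excerpt below only illustrates which address works from where, it is not a config change:
  redis-replica-a:
    ports:
      - "7001:6379"   # host port 7001 -> container port 6379
    # from the host machine:     localhost:7001
    # from another container:    redis-replica-a:6379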

Related

Why am I getting a connection error between Azure Redis and Spring Boot?

I want to connect to Azure Redis from my Spring Boot application, but I am getting a connection error: io.lettuce.core.RedisException: Cannot obtain initial Redis Cluster topology. The detailed message shows Cannot retrieve cluster partitions from [rediss://********************************************@myurl.redis.cache.windows.net:6380?timeout=3s]
I used the following code to establish the connection.
In pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
Then in the application.yml:
spring:
  cache:
    type: redis
  redis:
    ssl: false
    host: qortex-platform-dev-backoffice-portal.redis.cache.windows.net
    port: 6380
    password: 8RP9lcB7fbiX0ceCwNyF8dOe9pJ33w6YbAzCa
The configuration looks like this:
@Bean
public RedisClusterConfiguration defaultRedisConfig() {
    RedisClusterConfiguration configuration = new RedisClusterConfiguration();
    configuration.addClusterNode(new RedisNode("myurl.redis.cache.windows.net", 6380));
    configuration.setPassword(RedisPassword.of(redisPassword));
    return configuration;
}

@Bean
@Primary
public RedisConnectionFactory redisConnectionFactory(RedisClusterConfiguration defaultRedisConfig) {
    ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()
            .enablePeriodicRefresh(Duration.ofMinutes(10L))
            .enableAllAdaptiveRefreshTriggers()
            .build();
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .useSsl().and()
            .commandTimeout(Duration.ofMillis(3000))
            .clientOptions(ClusterClientOptions.builder().topologyRefreshOptions(topologyRefreshOptions).build())
            .build();
    return new LettuceConnectionFactory(defaultRedisConfig, clientConfig);
}

@Bean
public RedisTemplate<String, SessionPayload> template(RedisConnectionFactory redisConnectionFactory) {
    RedisTemplate<String, SessionPayload> redisTemplate = new RedisTemplate<>();
    redisTemplate.setConnectionFactory(redisConnectionFactory);
    return redisTemplate;
}
In the logic:
SessionPayload sessionPayloadFetched = (SessionPayload) template.opsForHash().get(HASH_KEY, sessionPayload.getUserId());
Why am I getting this error?
Make sure you have installed Redis Cluster locally on your machine:
$ wget http://download.redis.io/releases/redis-5.0.5.tar.gz
$ tar xzf redis-5.0.5.tar.gz
$ cd redis-5.0.5
$ make    # the compiled binaries are now available in the src directory
$ src/redis-server
This should solve the error.
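If it helps, you can also check that the local server is actually reachable before pointing the application at it (a quick sanity check, assuming the default port 6379):
$ src/redis-cli ping
PONG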

Spring Kafka: receive and forward to another broker

I'm using spring-kafka to receive and send messages.
What I want to do is read messages from Kafka broker X, apply some enhancements, and send them to another Kafka broker Y.
Here are my beans and configuration:
@Bean
public ProducerFactory<String, AuditLog> forwarderKafkaProducerFactory() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9093");
    configs.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configs.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return new DefaultKafkaProducerFactory<>(configs);
}

@Bean
public KafkaTemplate<String, AuditLog> forwarderKafkaClient() {
    return new KafkaTemplate<>(forwarderKafkaProducerFactory());
}

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, AuditLog>> kafkaListenerContainerFactoryV2() {
    ConcurrentKafkaListenerContainerFactory<String, AuditLog> factory = new ConcurrentKafkaListenerContainerFactory<>();
    Map<String, Object> configs = new HashMap<>();
    configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
    configs.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    configs.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    configs.put(ConsumerConfig.GROUP_ID_CONFIG, "receiver-sender");
    configs.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    ConsumerFactory<String, AuditLog> consumerFactory = new DefaultKafkaConsumerFactory<>(
            configs, new StringDeserializer(), new JsonDeserializer<>(AuditLog.class));
    factory.setConsumerFactory(consumerFactory);
    return factory;
}
As you can see, one broker runs on 9092 and the other one runs on 9093.
The receive/forward logic looks like this:
@KafkaListener(topics = "audit_log", containerFactory = "kafkaListenerContainerFactoryV2")
public void listen(@Payload AuditLog payload) {
    log.info("Audit log [{}]", payload);
    if (!payload.isForwarded()) {
        String key = "some-value";
        String system = String.join("/", key, payload.getSystem());
        payload.setSystem(system);
        payload.setForwarded(true);
        template.send("audit_log", key, payload);
    }
}
The template used in the snippet above is configured correctly; I can confirm this by inspecting ((DefaultKafkaProducerFactory) template.getProducerFactory()).getConfigurationProperties().
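For example, something along these lines (a small sketch of that check; the injected template field and the logger are assumptions based on the listener above):

// log the effective producer configuration of the injected forwarder template
Map<String, Object> producerConfig =
        ((DefaultKafkaProducerFactory<String, AuditLog>) template.getProducerFactory()).getConfigurationProperties();
// should print 127.0.0.1:9093 for the forwarder template
log.info("Forwarder bootstrap servers: {}", producerConfig.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG));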
With this configuration I can receive messages from 9092, but I can't send to 9093; the template always sends to 9092.
Thanks.
The problem was that my Kafka setup was incorrect.
I use wurstmeister/kafka in my current environment.
The docker-compose file looks like this:
zookeeper:
  image: wurstmeister/zookeeper:3.4.6
kafka:
  image: wurstmeister/kafka:2.12-2.5.0
  ports:
    - "9092:9092"
  environment:
    KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
    KAFKA_ADVERTISED_HOST_NAME: "127.0.0.1"
  depends_on:
    - zookeeper
Then I just added a second Kafka:
zookeeper2:
  image: wurstmeister/zookeeper:3.4.6
kafka2:
  image: wurstmeister/kafka:2.12-2.5.0
  ports:
    - "9093:9092"
  environment:
    KAFKA_ZOOKEEPER_CONNECT: "zookeeper2:2181"
    KAFKA_ADVERTISED_HOST_NAME: "127.0.0.1"
  depends_on:
    - zookeeper2
But the registry owners warn (I only saw this warning now, while writing this answer):
modify the KAFKA_ADVERTISED_HOST_NAME in docker-compose.yml to match your docker host IP (Note: Do not use localhost or 127.0.0.1 as the host ip if you want to run multiple brokers.)
So I tried the Bitnami Zookeeper/Kafka images instead:
version: "2"
services:
  zookeeper1:
    image: docker.io/bitnami/zookeeper:3.7
    ports:
      - "2001:2181"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka1:
    image: docker.io/bitnami/kafka:3
    ports:
      - "9201:9201"
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper1:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
      - KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9201
      - KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka1:9092,EXTERNAL://localhost:9201
      - KAFKA_INTER_BROKER_LISTENER_NAME=CLIENT
    depends_on:
      - zookeeper1
  zookeeper2:
    image: docker.io/bitnami/zookeeper:3.7
    ports:
      - "2002:2181"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka2:
    image: docker.io/bitnami/kafka:3
    ports:
      - "9202:9202"
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper2:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
      - KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9202
      - KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka2:9092,EXTERNAL://localhost:9202
      - KAFKA_INTER_BROKER_LISTENER_NAME=CLIENT
    depends_on:
      - zookeeper2
The code in question runs correctly with this setup.
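Presumably the bootstrap servers in the Spring configuration from the question then point at the EXTERNAL listener ports of the two brokers rather than 9092/9093 (a sketch based on the advertised listeners above, not part of the original answer):

// consumer factory: read from kafka1 via its EXTERNAL listener
configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9201");
// forwarder producer factory: write to kafka2 via its EXTERNAL listener
configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9202");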

Spring Boot is not able to read docker-compose environment variables

I am having an annoying problem here. Let me explain...
I am trying to read some environment variables, defined in my docker-compose file, in my application.properties.
Although the environment variables are created properly in the containers (I can see them when I inspect the containers), they are not being read by the Spring Boot application.
It seems as if these docker-compose environment variables are simply not accessible.
As a test, I am trying to read an environment variable directly in my Spring application, and it comes back as null:
@Slf4j
@SpringBootApplication(exclude = R2dbcAutoConfiguration.class)
public class AppDriver {

    public static void main(String[] args) {
        System.out.println("container variables >>>>> " + System.getenv("MASTER_URL"));
        SpringApplication.run(AppDriver.class, args);
    }
}
The log output is:
container variables >>>>> null
Below is my docker-compose file (I do not have any Spring Boot service in the docker-compose file, for now):
version: "3.5"
services:
  tenant1:
    image: postgres
    ports:
      - "5432:5432"
    restart: always
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: tenant1
      POSTGRES_USER: user
    volumes:
      - ./data/tenant1:/var/lib/postgresql
  mysql:
    image: mysql:5.7
    ports:
      - "3306:3306"
    environment:
      MASTER_URL: r2dbc:mysql://user:password@localhost/master
      MYSQL_ROOT_PASSWORD: mysecret
      MYSQL_USER: user
      MYSQL_PASSWORD: password
      MYSQL_DATABASE: master
My application.properties:
logging.level.com.example=DEBUG
logging.level.io.r2dbc=DEBUG
master.url = ${MASTER_URL}
And finally, my Java file:
@Configuration
//@ConfigurationProperties(prefix = "master")
@EnableR2dbcRepositories(entityOperationsRef = "masterEntityTemplate")
public class MasterConfig {

    @Value("${master.url}")
    private String url;

    @Bean
    @Qualifier(value = "masterConnectionFactory")
    public ConnectionFactory masterConnectionFactory() {
        System.out.println(">>>>>>>>>>>" + url);
        return ConnectionFactories.get(url);
        //.get("r2dbc:mysql://user:password@localhost/master");
    }
}
This is the final error I am getting:
Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'MASTER_URL' in value "${MASTER_URL}"
Does someone know how to solve this problem?
Thanks a lot

Spring data cassandra - error while opening new channel

I have a problem with the Cassandra connection in spring-data. When Cassandra is running locally I have no problem connecting, but when I run my spring-boot app in k8s with an external Cassandra I am stuck on this WARN:
2020-07-24 10:26:32.398 WARN 6 --- [ s0-admin-0] c.d.o.d.internal.core.pool.ChannelPool : [s0|/127.0.0.1:9042] Error while opening new channel (ConnectionInitException: [s0|connecting...] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.7.2, CLIENT_ID=9679ee85-ff39-45b6-8573-62a8d827ec9e}): failed to send request (java.nio.channels.ClosedChannelException))
I don't understand why the log shows [s0|/127.0.0.1:9042] instead of the IP of my contact points.
Spring configuration:
spring:
  data:
    cassandra:
      keyspace-name: event_store
      local-datacenter: datacenter1
      contact-points: host1:9042,host2:9042
Also, this WARN does not prevent spring-boot from starting, but if I run a query in a service I get this error:
{ error: "Query; CQL [com.datastax.oss.driver.internal.core.cql.DefaultSimpleStatement#9463dccc]; No node was available to execute the query; nested exception is com.datastax.oss.driver.api.core.NoNodeAvailableException: No node was available to execute the query" }
Option 1: Try your yml file like this (have you tried using IP addresses as contact points?):
spring:
  data:
    cassandra:
      keyspace-name: event_store
      local-datacenter: datacenter1
      port: 9042
      contact-points: host1,host2
      username: cassandra
      password: cassandra
Option 2: Create new properties in your yml and then a configuration class:
cassandra:
  database:
    keyspace-name: event_store
    contact-points: host1, host2
    port: 9042
    username: cassandra
    password: cassandra
@Configuration
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Value("${cassandra.database.keyspace-name}")
    private String keySpace;

    @Value("${cassandra.database.contact-points}")
    private String contactPoints;

    @Value("${cassandra.database.port}")
    private int port;

    @Value("${cassandra.database.username}")
    private String userName;

    @Value("${cassandra.database.password}")
    private String password;

    @Override
    protected String getKeyspaceName() {
        return keySpace;
    }

    @Bean
    public CassandraMappingContext cassandraMapping() throws ClassNotFoundException {
        CassandraMappingContext context = new CassandraMappingContext();
        context.setUserTypeResolver(new SimpleUserTypeResolver(cluster().getObject(), keySpace));
        return context;
    }

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = super.cluster();
        cluster.setUsername(userName);
        cluster.setPassword(password);
        cluster.setContactPoints(contactPoints);
        cluster.setPort(port);
        return cluster;
    }

    @Override
    protected boolean getMetricsEnabled() {
        return false;
    }
}

Docker Swarm, Spring Boot and Eureka service discoverer not working

We are currently working on swarmifying our Spring Boot microservice back-end, which uses a Eureka service discoverer. The first problem was making sure the service discoverer doesn't pick the ingress IP address but instead the IP address from the overlay network. After some searching I found a post that suggests the following Eureka client configuration:
@Configuration
@EnableConfigurationProperties
public class EurekaClientConfig {

    private ConfigurableEnvironment env;

    public EurekaClientConfig(final ConfigurableEnvironment env) {
        this.env = env;
    }

    @Bean
    @Primary
    public EurekaInstanceConfigBean eurekaInstanceConfigBean(final InetUtils inetUtils) throws IOException {
        final String hostName = System.getenv("HOSTNAME");
        String hostAddress = null;
        final Enumeration<NetworkInterface> networkInterfaces = NetworkInterface.getNetworkInterfaces();
        for (NetworkInterface netInt : Collections.list(networkInterfaces)) {
            for (InetAddress inetAddress : Collections.list(netInt.getInetAddresses())) {
                if (hostName.equals(inetAddress.getHostName())) {
                    hostAddress = inetAddress.getHostAddress();
                    System.out.printf("Inet used: %s", netInt.getName());
                }
                System.out.printf("Inet %s: %s / %s\n", netInt.getName(), inetAddress.getHostName(), inetAddress.getHostAddress());
            }
        }
        if (hostAddress == null) {
            throw new UnknownHostException("Cannot find ip address for hostname: " + hostName);
        }
        final int nonSecurePort = Integer.valueOf(env.getProperty("server.port", env.getProperty("port", "8080")));
        final EurekaInstanceConfigBean instance = new EurekaInstanceConfigBean(inetUtils);
        instance.setHostname(hostName);
        instance.setIpAddress(hostAddress);
        instance.setNonSecurePort(nonSecurePort);
        System.out.println(instance);
        return instance;
    }
}
After deploying the new discoverer I got the correct result, and the service discoverer had the correct overlay IP address.
To understand the next step, here is some information about the environment we run this Docker swarm on. We currently have two droplets, one for development and the other for production. At the moment we are only working on the development server to swarmify it; production hasn't been touched in months.
The next step is to deploy a discovery-client Spring Boot application that connects to the correct service discoverer and also gets the overlay IP address instead of the ingress one. But when I build the application it always connects to our production service discoverer outside the Docker swarm, on the other droplet. I can see the application being deployed on the swarm, but looking at the Eureka dashboard of the production server I can see that it registers there.
The second problem is that the application also has the EurekaClientConfig you see above, but it is ignored. Even the log statements within the method are not printed when starting up the application.
Here is the configuration from the discovery-client application:
eureka:
  client:
    serviceUrl:
      defaultZone: service-discovery_service:8761/eureka
    enabled: false
  instance:
    instance-id: ${spring.application.name}:${random.value}
    prefer-ip-address: true
spring:
  application:
    name: account-service
I assume that you can use defaultZone to point at the correct service discoverer, but I may be wrong.
Just don't use a Eureka service discoverer; use something else like Traefik instead. It's a much easier solution.
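For illustration only, a minimal sketch of what a Traefik reverse proxy could look like in a swarm stack (the image tag, service names, router rule and port are assumptions, not tested against the setup above):

reverse-proxy:
  image: traefik:v2.5
  command:
    - "--providers.docker.swarmMode=true"
    - "--providers.docker.exposedByDefault=false"
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
account-service:
  image: account-service:latest
  deploy:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.account.rule=PathPrefix(`/account`)"
      - "traefik.http.services.account.loadbalancer.server.port=8080"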
