Spring Boot is not able to read docker-compose environment variables - spring-boot

I am having an annoying problem here. Let me explain...
I am trying to read some environment variables from my docker-compose file in my application.properties.
Although the environment variables are created properly in the containers (I can see them when I inspect the containers), they are not being read by the Spring Boot application.
It is as if these docker-compose environment variables are not accessible.
As a test, I am trying to read one of the environment variables in my Spring application, and it comes back as null:
@Slf4j
@SpringBootApplication(exclude = R2dbcAutoConfiguration.class)
public class AppDriver {
    public static void main(String[] args) {
        System.out.println("container variables >>>>> " + System.getenv("MASTER_URL"));
        SpringApplication.run(AppDriver.class, args);
    }
}
The log is:
container variables >>>>> null
Below is my docker-compose file (I do not have any Spring Boot service in the docker-compose file for now):
version: "3.5"
services:
tenant1:
image: postgres
ports:
- "5432:5432"
restart: always
environment:
POSTGRES_PASSWORD: password
POSTGRES_DB: tenant1
POSTGRES_USER: user
volumes:
- ./data/tenant1:/var/lib/postgresql
mysql:
image: mysql:5.7
ports:
- "3306:3306"
environment:
MASTER_URL: r2dbc:mysql://user:password#localhost/master
MYSQL_ROOT_PASSWORD: mysecret
MYSQL_USER: user
MYSQL_PASSWORD: password
MYSQL_DATABASE: master
My application.properties:
logging.level.com.example=DEBUG
logging.level.io.r2dbc=DEBUG
master.url = ${MASTER_URL}
And finally, my Java file:
@Configuration
//@ConfigurationProperties(prefix = "master")
@EnableR2dbcRepositories(entityOperationsRef = "masterEntityTemplate")
public class MasterConfig {

    @Value("${master.url}")
    private String url;

    @Bean
    @Qualifier(value = "masterConnectionFactory")
    public ConnectionFactory masterConnectionFactory() {
        System.out.println(">>>>>>>>>>>" + url);
        return ConnectionFactories.get(url);
        //.get("r2dbc:mysql://user:password@localhost/master");
    }
}
This is the final error I am getting:
Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'MASTER_URL' in value "${MASTER_URL}"
Does anyone know how to solve this problem?
Thanks a lot
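Note that System.getenv only sees variables of the JVM's own process: a variable defined under the mysql service exists inside that container, not in the environment of a Spring Boot app started on the host. A minimal sketch of one way the variable would reach the application, assuming the app is eventually added as a compose service (the service name and image are illustrative):
  app:
    image: my-spring-app:latest        # illustrative image name
    environment:
      # same variable the mysql service defines above; System.getenv("MASTER_URL") and
      # ${MASTER_URL} in application.properties then resolve inside this container
      # (inside the compose network the mysql service name replaces localhost)
      MASTER_URL: r2dbc:mysql://user:password@mysql:3306/master
    depends_on:
      - mysql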

Related

Spring Kafka receive and forward to another broker

I'm using spring-kafka to receive and send messages.
What I want to do is read some messages from Kafka broker X, do some enhancements, and send them to another Kafka broker Y.
Here are my beans and configurations.
@Bean
public ProducerFactory<String, AuditLog> forwarderKafkaProducerFactory() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9093");
    configs.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configs.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return new DefaultKafkaProducerFactory<>(configs);
}

@Bean
public KafkaTemplate<String, AuditLog> forwarderKafkaClient() {
    return new KafkaTemplate<>(forwarderKafkaProducerFactory());
}

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, AuditLog>> kafkaListenerContainerFactoryV2() {
    ConcurrentKafkaListenerContainerFactory<String, AuditLog> factory = new ConcurrentKafkaListenerContainerFactory<>();
    Map<String, Object> configs = new HashMap<>();
    configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
    configs.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    configs.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    configs.put(ConsumerConfig.GROUP_ID_CONFIG, "receiver-sender");
    configs.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    ConsumerFactory<String, AuditLog> consumerFactory = new DefaultKafkaConsumerFactory<>(
            configs, new StringDeserializer(), new JsonDeserializer<>(AuditLog.class));
    factory.setConsumerFactory(consumerFactory);
    return factory;
}
As you can see, one broker runs on 9092 and the other one runs on 9093.
The receive/forward logic is as follows.
@KafkaListener(topics = "audit_log", containerFactory = "kafkaListenerContainerFactoryV2")
public void listen(@Payload AuditLog payload) {
    log.info("Audit log [{}]", payload);
    if (!payload.isForwarded()) {
        String key = "some-value";
        String system = String.join("/", key, payload.getSystem());
        payload.setSystem(system);
        payload.setForwarded(true);
        template.send("audit_log", key, payload);
    }
}
The configuration of the template used above is correct (it is the one from the code snippet); I can confirm this by looking at ((DefaultKafkaProducerFactory) template.getProducerFactory()).getConfigurationProperties().
With this configuration I can receive messages from 9092, but I can't send to 9093. The template always sends to 9092.
Thanks.
The problem was that the Kafka setup was incorrect.
I use wurstmeister/kafka in my current environment.
The docker-compose file looked like this:
zookeeper:
  image: wurstmeister/zookeeper:3.4.6
kafka:
  image: wurstmeister/kafka:2.12-2.5.0
  ports:
    - "9092:9092"
  environment:
    KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
    KAFKA_ADVERTISED_HOST_NAME: "127.0.0.1"
  depends_on:
    - zookeeper
Then I just added a second Kafka:
zookeeper2:
  image: wurstmeister/zookeeper:3.4.6
kafka2:
  image: wurstmeister/kafka:2.12-2.5.0
  ports:
    - "9093:9092"
  environment:
    KAFKA_ZOOKEEPER_CONNECT: "zookeeper2:2181"
    KAFKA_ADVERTISED_HOST_NAME: "127.0.0.1"
  depends_on:
    - zookeeper2
But the registry owners warn against this :) I only saw this warning while writing this answer:
modify the KAFKA_ADVERTISED_HOST_NAME in docker-compose.yml to match your docker host IP (Note: Do not use localhost or 127.0.0.1 as the host ip if you want to run multiple brokers.)
I then tried the Bitnami ZooKeeper/Kafka images:
version: "2"
services:
zookeeper1:
image: docker.io/bitnami/zookeeper:3.7
ports:
- "2001:2181"
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka1:
image: docker.io/bitnami/kafka:3
ports:
- "9201:9201"
environment:
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper1:2181
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
- KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9201
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka1:9092,EXTERNAL://localhost:9201
- KAFKA_INTER_BROKER_LISTENER_NAME=CLIENT
depends_on:
- zookeeper1
zookeeper2:
image: docker.io/bitnami/zookeeper:3.7
ports:
- "2002:2181"
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka2:
image: docker.io/bitnami/kafka:3
ports:
- "9202:9202"
environment:
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper2:2181
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
- KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9202
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka2:9092,EXTERNAL://localhost:9202
- KAFKA_INTER_BROKER_LISTENER_NAME=CLIENT
depends_on:
- zookeeper2
The code in question runs correctly with this setup.
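With this setup the bean definitions from the question would presumably point at the two external listeners instead of 9092/9093; only the bootstrap-server lines change, e.g.:
// consumer factory: read from the first broker's EXTERNAL listener (kafka1 above)
configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9201");
// producer factory: forward to the second broker's EXTERNAL listener (kafka2 above)
configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9202");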

Spring Boot app cannot connect to Redis Replica in Docker

I am experiencing a weird issue with Redis connectivity in Docker.
I have a simple Spring Boot application with a Master-Replica configuration.
I also have a docker-compose config which I use to start the Redis Master and the Redis Replica.
If I start Redis via docker-compose and the Spring Boot app as a simple Java process outside of Docker, everything works fine. It can successfully connect to both the Master and the Replica via localhost.
If I launch the Spring Boot app as a Docker container alongside the Redis containers, it can successfully connect to the Master but not to the Replica. That means I can write to and read from the Master node, but when I try to read from the Replica I get the following error:
redis-sample-app_1 | Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: redis-replica-a/172.31.0.2:7001
redis-sample-app_1 | Caused by: java.net.ConnectException: Connection refused
redis-sample-app_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_302]
In the redis.conf I've changed the following to bind it to all network interfaces:
bind * -::*
protected-mode no
docker-compose.yml
version: "3"
services:
redis-master:
image: redis:alpine
command: redis-server --include /usr/local/etc/redis/redis.conf
volumes:
- ./conf/redis-master.conf:/usr/local/etc/redis/redis.conf
ports:
- "6379:6379"
redis-replica-a:
image: redis:alpine
command: redis-server --include /usr/local/etc/redis/redis.conf
volumes:
- ./conf/redis-replica.conf:/usr/local/etc/redis/redis.conf
ports:
- "7001:6379"
redis-sample-app:
image: docker.io/library/redis-sample:0.0.1-SNAPSHOT
environment:
- SPRING_REDIS_HOST=redis-master
- SPRING_REDIS_PORT=6379
- SPRING_REDIS_REPLICAS=redis-replica-a:7001
ports:
- "9080:8080"
depends_on:
- redis-master
- redis-replica-a
application.yml
spring:
  redis:
    port: 6379
    host: localhost
    replicas: localhost:7001
RedisConfig.java
@Configuration
class RedisConfig {

    private static final Logger LOG = LoggerFactory.getLogger(RedisConfig.class);

    @Value("${spring.redis.replicas:}")
    private String replicasProperty;

    private final RedisProperties redisProperties;

    public RedisConfig(RedisProperties redisProperties) {
        this.redisProperties = redisProperties;
    }

    @Bean
    public StringRedisTemplate masterReplicaRedisTemplate(LettuceConnectionFactory connectionFactory) {
        return new StringRedisTemplate(connectionFactory);
    }

    @Bean
    public LettuceConnectionFactory masterReplicaLettuceConnectionFactory(LettuceClientConfiguration lettuceConfig) {
        LOG.info("Master: {}:{}", redisProperties.getHost(), redisProperties.getPort());
        LOG.info("Replica property: {}", replicasProperty);
        RedisStaticMasterReplicaConfiguration configuration =
                new RedisStaticMasterReplicaConfiguration(redisProperties.getHost(), redisProperties.getPort());
        if (StringUtils.hasText(replicasProperty)) {
            List<RedisURI> replicas = Arrays.stream(this.replicasProperty.split(","))
                    .map(this::toRedisURI)
                    .collect(Collectors.toList());
            LOG.info("Replica nodes: {}", replicas);
            replicas.forEach(replica -> configuration.addNode(replica.getHost(), replica.getPort()));
        }
        return new LettuceConnectionFactory(configuration, lettuceConfig);
    }

    @Scope("prototype")
    @Bean(destroyMethod = "shutdown")
    ClientResources clientResources() {
        return DefaultClientResources.create();
    }

    @Scope("prototype")
    @Bean
    LettuceClientConfiguration lettuceConfig(ClientResources dcr) {
        ClientOptions options = ClientOptions.builder()
                .timeoutOptions(TimeoutOptions.builder().fixedTimeout(Duration.of(5, ChronoUnit.SECONDS)).build())
                .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
                .autoReconnect(true)
                .build();
        return LettuceClientConfiguration.builder()
                .readFrom(ReadFrom.REPLICA_PREFERRED)
                .clientOptions(options)
                .clientResources(dcr)
                .build();
    }

    @Bean
    StringRedisTemplate redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        return new StringRedisTemplate(redisConnectionFactory);
    }

    private RedisURI toRedisURI(String url) {
        String[] split = url.split(":");
        String host = split[0];
        int port;
        if (split.length > 1) {
            port = Integer.parseInt(split[1]);
        } else {
            port = 6379;
        }
        return RedisURI.create(host, port);
    }
}
Please advise how to continue with troubleshooting.
When running everything (Redis, the replica and the Spring app) inside the Docker network, you should be using port 6379 instead of 7001.
Port 7001 can be used to connect to the container from outside. But here you are trying to connect from container to container.
So change your environment variable to:
SPRING_REDIS_REPLICAS=redis-replica-a:6379
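In the docker-compose file above that would look like this (only the replicas entry changes):
  redis-sample-app:
    environment:
      - SPRING_REDIS_HOST=redis-master
      - SPRING_REDIS_PORT=6379
      # from inside the network, address the replica by service name and container port,
      # not by the published host port 7001
      - SPRING_REDIS_REPLICAS=redis-replica-a:6379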

Speed up YAML file processing in Spring Boot

I'm trying to read a YAML properties file using the @PropertySource annotation in Spring Boot.
The configuration.yaml file has about 7.5K lines of data in the format below:
# Configuration for privilege management
---
role-configuration:
  roles:
    - name: Agent
      groups:
        - name: privilege-group
group-configuration:
  groups:
    - name: privilege-group
privilege-configuration:
  privileges:
    - name: admin-dashboard-view
      description: View Admin dashboard
      groups:
        - name: privilege-group
    - name: admin-dashboard-edit
      description: Edit Admin dashboard
      groups:
        - name: privilege-group
    ...
The configuration bean and the YAML-specific PropertySourceFactory have been implemented by following this link:
PrivilegeProvider.java
@Configuration
@ConfigurationProperties(prefix = "privilege-configuration")
@PropertySource(value = "classpath:security/configuration.yaml", factory = YamlPropertySourceFactory.class)
@Data // lombok annotation for generating getter/setter and other helper functions
public class PrivilegeProvider {
    private List<Privilege> privileges;
}
YamlPropertySourceFactory.java
public class YamlPropertySourceFactory implements PropertySourceFactory {

    @Override
    public PropertySource<?> createPropertySource(String name, EncodedResource encodedResource) throws IOException {
        YamlPropertiesFactoryBean factory = new YamlPropertiesFactoryBean();
        factory.setResources(encodedResource.getResource());
        Properties properties = factory.getObject();
        return new PropertiesPropertySource(encodedResource.getResource().getFilename(), properties);
    }
}
All the data is loaded successfully from the YAML file, but it takes ~5-7 minutes to load, which is a lot of time given the file size.
Can this be optimized? Or is there any other way in which I can implement the same thing?
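One alternative that may be worth measuring is to skip the Properties conversion and relaxed binding and read only the needed section directly with SnakeYAML (which is already on the Spring Boot classpath). This is only a sketch with an illustrative class name; whether it is actually faster would need to be benchmarked:
import org.springframework.core.io.ClassPathResource;
import org.yaml.snakeyaml.Yaml;

import java.io.IOException;
import java.io.InputStream;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Hypothetical loader: reads the YAML once and extracts only the
// privilege-configuration section instead of flattening the whole file into Properties.
public class PrivilegeYamlLoader {

    @SuppressWarnings("unchecked")
    public List<Map<String, Object>> loadPrivileges() throws IOException {
        try (InputStream in = new ClassPathResource("security/configuration.yaml").getInputStream()) {
            for (Object document : new Yaml().loadAll(in)) {   // loadAll handles the leading "---"
                if (!(document instanceof Map)) {
                    continue;                                   // skip empty documents, if any
                }
                Map<String, Object> root = (Map<String, Object>) document;
                Map<String, Object> section = (Map<String, Object>) root.get("privilege-configuration");
                if (section != null) {
                    return (List<Map<String, Object>>) section.get("privileges");
                }
            }
            return Collections.emptyList();
        }
    }
}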

Docker Swarm, Spring Boot and Eureka service discoverer not working

I am currently working on Swarmifying our Spring Boot microservice back-end with a Eureka service discoverer. The first problem was making sure the service discoverer doesn't pick the ingress IP address but instead the IP address from the overlay network. After some searching I found a post that suggests the following Eureka client configuration:
@Configuration
@EnableConfigurationProperties
public class EurekaClientConfig {

    private ConfigurableEnvironment env;

    public EurekaClientConfig(final ConfigurableEnvironment env) {
        this.env = env;
    }

    @Bean
    @Primary
    public EurekaInstanceConfigBean eurekaInstanceConfigBean(final InetUtils inetUtils) throws IOException {
        final String hostName = System.getenv("HOSTNAME");
        String hostAddress = null;
        final Enumeration<NetworkInterface> networkInterfaces = NetworkInterface.getNetworkInterfaces();
        for (NetworkInterface netInt : Collections.list(networkInterfaces)) {
            for (InetAddress inetAddress : Collections.list(netInt.getInetAddresses())) {
                if (hostName.equals(inetAddress.getHostName())) {
                    hostAddress = inetAddress.getHostAddress();
                    System.out.printf("Inet used: %s", netInt.getName());
                }
                System.out.printf("Inet %s: %s / %s\n", netInt.getName(), inetAddress.getHostName(), inetAddress.getHostAddress());
            }
        }
        if (hostAddress == null) {
            throw new UnknownHostException("Cannot find ip address for hostname: " + hostName);
        }
        final int nonSecurePort = Integer.valueOf(env.getProperty("server.port", env.getProperty("port", "8080")));
        final EurekaInstanceConfigBean instance = new EurekaInstanceConfigBean(inetUtils);
        instance.setHostname(hostName);
        instance.setIpAddress(hostAddress);
        instance.setNonSecurePort(nonSecurePort);
        System.out.println(instance);
        return instance;
    }
}
After deploying the new discoverer I got the correct result, and the service discoverer had the correct overlay IP address.
To understand the next step, here is some information about the environment we run this Docker Swarm on. We currently have 2 droplets, one for development and the other for production. At the moment we are only working on the development server to Swarmify it. Production hasn't been touched in months.
The next step is to deploy a Discovery Client Spring Boot application that connects to the correct service discoverer and also has the overlay IP address instead of the ingress one. But when I build the application it always connects to our production service discoverer outside the Docker Swarm, on the other droplet. I can see the application being deployed on the swarm, but looking at the Eureka dashboard on the production server I can see that it connects there.
The second problem is that the application also has the EurekaClient config you see above, but it is ignored. Even the logging inside the method is not executed when starting up the application.
Here is the configuration from the Discovery Client application:
eureka:
  client:
    serviceUrl:
      defaultZone: service-discovery_service:8761/eureka
    enabled: false
  instance:
    instance-id: ${spring.application.name}:${random.value}
    prefer-ip-address: true
spring:
  application:
    name: account-service
I assume that you can use defaultZone to point at the correct service discoverer, but I could be wrong.
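For reference, the defaultZone value is normally a full URL (scheme, host, port and the /eureka/ path), so the entry above would usually look something like this:
eureka:
  client:
    serviceUrl:
      defaultZone: http://service-discovery_service:8761/eureka/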
Just don't use a Eureka service discoverer but something else like Traefik. Much easier solution.

Spring Boot not registering on Prometheus endpoint

I am trying to configure Prometheus and Grafana with Spring Boot.
@Configuration
@EnableSpringBootMetricsCollector
public class MetricsConfiguration {

    /**
     * Register common tag "application" instead of "job". This application tag is
     * needed for the Grafana dashboard.
     *
     * @return registry with registered tags.
     */
    @Value("${spring.application.name}")
    private String applicationName;

    @Value("${spring.profiles.active}")
    private String environment;

    @Bean
    public MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
        return registry -> {
            registry.config().commonTags("application", applicationName, "environment", environment)
                    .meterFilter(getDefualtConfig());
        };
    }

    private MeterFilter getDefualtConfig() {
        return new MeterFilter() {
            @Override
            public DistributionStatisticConfig configure(Meter.Id id, DistributionStatisticConfig config) {
                return DistributionStatisticConfig.builder().percentilesHistogram(true).percentiles(0.95, 0.99).build()
                        .merge(config);
            }
        };
    }
}
While running the application I am able to see the metrics on the localhost:8080/prometheus URL.
But I am not able to see the same on the localhost:9090/metrics URL, which is the Prometheus URL.
I have added the configuration in prometheus.yml and restarted Prometheus.
- job_name: 'my-api'
  scrape_interval: 10s
  metrics_path: '/prometheus'
  target_groups:
    - targets: ['localhost:8080']
After spending 2 hours I found the solution.
We were using basic auth for all the health endpoints as well.
The issue was that I was not setting up basic auth in my prometheus.yml:
- job_name: 'my-api'
  scrape_interval: 10s
  metrics_path: '/prometheus'
  target_groups:
    - targets: ['localhost:8080']
  basic_auth:
    username: test
    password: test
