Spring Boot docker-compose Redis: Cannot get Jedis connection - spring-boot

This may seem like a duplicate of docker-compose with springboot and redis and docker-compose redis connection issue, but the solutions proposed there do not work in my case.
I am using Spring Boot, and the other services are unable to connect to the Redis service.
Below is my docker-compose file:
version: "3.7"
services:
...
users:
build: ./users
ports:
- "8081:8081"
networks:
- jiji-microservices-network
depends_on:
- registry
- gateway
- redis_cache
- postgresDB
- rabbitmq
links:
- redis_cache
- postgresDB
- rabbitmq
environment:
# SPRING_CACHE_TYPE: redis
# SPRING_REDIS_HOST: redis_cache
# SPRING_REDIS_PORT: 6379
# SPRING_REDIS_PASSWORD:
SPRING_DATASOURCE_URL: jdbc:postgresql://postgresDB:5432/jiji_users
SPRING_DATASOURCE_USERNAME: postgres
SPRING_DATASOURCE_PASSWORD: postgres
SPRING_JPA_HIBERNATE_DDL_AUTO: update
EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://registry:8090/eureka
postgresDB:
image: postgres
restart: unless-stopped
ports:
- "5432:5432"
networks:
- jiji-microservices-network
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
POSTGRES_DB: jiji_users
redis_cache:
image: redis:latest
restart: on-failure
command: ["redis-server","--bind","redis_cache","--port","6379"]
ports:
- "6379:6379"
networks:
- jiji-microservices-network
rabbitmq:
image: rabbitmq:management
ports:
- "5672:5672"
- "15672:15672"
networks:
- jiji-microservices-network
networks:
jiji-microservices-network:
driver: bridge
Below is my application.yml file:
...
  cache:
    type: redis
  redis:
    host: redis_cache
    port: 6379
    # cache-null-values: true
    time-to-live: 2592000 # 30 days
...
The error message I am getting:
users_1 | 2022-03-28 02:53:40.417 INFO 1 --- [nio-8081-exec-4] c.o.u.s.CustomAuthorizationFilter : /api/v1/users/getCode
users_1 | Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
docker ps shows that all the containers are up and running and that redis is listening on port 6379.
PS: It works fine without docker.

The issue was related to my RedisConfiguration. I fixed it by adding the Redis host and port to the JedisConnectionFactory bean.
@Configuration
@Slf4j
public class RedisConfiguration {

    @Value("${spring.redis.host}")
    private String REDIS_HOST;

    @Value("${spring.redis.port}")
    private Integer REDIS_PORT;

    @Bean
    public JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration config = new RedisStandaloneConfiguration(REDIS_HOST, REDIS_PORT);
        return new JedisConnectionFactory(config);
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate() {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setEnableTransactionSupport(true);
        template.setConnectionFactory(jedisConnectionFactory());
        return template;
    }
}
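As a quick way to confirm the fix (a hedged sketch, not part of the original answer), one can write a key at startup and fail fast if the pool cannot reach the redis_cache container; the bean name and key below are made up for illustration:
// Hypothetical smoke test: writes one key at startup and throws if the
// connection pool cannot reach Redis. Bean name and key are illustrative only.
@Bean
public ApplicationRunner redisSmokeTest(RedisTemplate<String, Object> template) {
    return args -> template.opsForValue().set("startup-check", "ok");
}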

Related

Connecting the test class in the Spring Boot project to the in-memory database

application.properties:
server.port=8081
spring.jpa.hibernate.ddl-auto=update
spring.datasource.url=jdbc:mysql://localhost:3306/familybudgetapp
spring.datasource.username=root
spring.datasource.password=12345
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
test application.properties:
server.port=8081
spring.jpa.hibernate.ddl-auto=update
spring.datasource.url=jdbc:mysql://localhost:3306/familybudgetapp
spring.datasource.username=root
spring.datasource.password=
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
./.env:
MYSQLDB_USER=root
MYSQLDB_ROOT_PASSWORD=12345
MYSQLDB_DATABASE=familybudgetapp
MYSQLDB_LOCAL_PORT=3307
MYSQLDB_DOCKER_PORT=3306
SPRING_LOCAL_PORT=8081
SPRING_DOCKER_PORT=8081
Dockerfile:
FROM openjdk:17-jdk-slim
EXPOSE 8081
ARG JAR_FILE=target/family-budget-app-0.0.1-SNAPSHOT.jar
ADD ${JAR_FILE} family-budget-app.jar
ENTRYPOINT ["java","-jar","family-budget-app.jar"]
docker-compose.yml:
version: "3.8"
services:
mysqldb:
image: mysql
restart: unless-stopped
env_file: ./.env
environment:
- MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
- MYSQL_DATABASE=$MYSQLDB_DATABASE
ports:
- ${MYSQLDB_LOCAL_PORT}:${MYSQLDB_DOCKER_PORT}
volumes:
- db:/var/lib/mysql
web_app:
depends_on:
- mysqldb
build:
context: .
dockerfile: Dockerfile
restart: on-failure
env_file: ./.env
environment:
SPRING_APPLICATION_JSON: '{
"spring.datasource.url" : "jdbc:mysql://mysqldb:$MYSQLDB_DOCKER_PORT/$MYSQLDB_DATABASE?allowPublicKeyRetrieval=true&useSSL=false",
"spring.datasource.username" : "$MYSQLDB_USER",
"spring.datasource.password" : "$MYSQLDB_ROOT_PASSWORD",
"spring.jpa.properties.hibernate.dialect" : "org.hibernate.dialect.MySQLDialect",
"spring.jpa.hibernate.ddl-auto" : "update"
}'
volumes:
- .m2:/root/.m2
stdin_open: true
tty: true
volumes:
db:
Repository test file:
@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
class SpendingRepositoryTest {

    @Autowired
    SpendingRepository spendingRepository;

    @Test
    @Sql({"/data.sql"})
    void findMostSpendingDetailsByDate_GivenDate_ReturnSpendings() throws ParseException {
        Date startDate = new SimpleDateFormat("yyyy-MM-dd").parse("2001-06-01");
        Date endDate = new SimpleDateFormat("yyyy-MM-dd").parse("2001-06-30");
        List<Spending> spendings = spendingRepository.findMostSpendingDetailsByDate(startDate, endDate);
        assertEquals(5, spendings.size());
        assertEquals("tahafurkanunsal", spendings.get(0).getUser().getUsername());
    }
}
I have a Spring Boot + MySQL project and I want it to run with Docker. The repository test file in my project tests against a real database, but the Docker build is supposed to run without MySQL and Maven installed, so my test class throws a database connection error while Docker builds the image. I therefore want to point my test class at an in-memory database so the Docker build works, but I searched and couldn't find a way.
I tried to configure an H2 database but failed; I guess I need some help.
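A commonly suggested direction (a sketch, assuming com.h2database:h2 is added as a test-scoped dependency): drop the Replace.NONE override so @DataJpaTest can substitute an embedded database for MySQL during the build:
// Hedged sketch: without @AutoConfigureTestDatabase(replace = Replace.NONE),
// @DataJpaTest replaces the MySQL datasource with an embedded one (H2 here),
// so the test no longer needs a live database while Docker builds the image.
@DataJpaTest
class SpendingRepositoryTest {

    @Autowired
    SpendingRepository spendingRepository;

    // ... same @Test methods as above, now running against in-memory H2
}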

Spring Boot cannot connect to Elasticsearch image on server

I have a Spring Boot project that uses Elasticsearch, and I run Elasticsearch from a Docker image.
I think the backend code cannot connect to Elasticsearch; when I run production on the server I get the following error:
Caused by: org.springframework.data.elasticsearch.ElasticsearchException: Error while for indexExists request: org.elasticsearch.action.admin.indices.get.GetIndexRequest#446b74b9
at org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate.indexExists(ElasticsearchRestTemplate.java:842)
at org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate.createIndexIfNotCreated(ElasticsearchRestTemplate.java:1219)
at org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate.createIndex(ElasticsearchRestTemplate.java:251)
at org.springframework.data.elasticsearch.repository.support.AbstractElasticsearchRepository.createIndex(AbstractElasticsearchRepository.java:99)
at org.springframework.data.elasticsearch.repository.support.AbstractElasticsearchRepository.<init>(AbstractElasticsearchRepository.java:89)
at org.springframework.data.elasticsearch.repository.support.SimpleElasticsearchRepository.<init>(SimpleElasticsearchRepository.java:39)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.base/java.lang.reflect.Constructor.newInstance(Unknown Source)
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:204)
... 69 common frames omitted
Caused by: java.net.ConnectException: Timeout connecting to [/206.189.178.228:9200]
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:959)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:233)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1764)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1734)
at org.elasticsearch.client.IndicesClient.exists(IndicesClient.java:1109)
at org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate.indexExists(ElasticsearchRestTemplate.java:840)
... 79 common frames omitted
Caused by: java.net.ConnectException: Timeout connecting to [/206.189.178.228:9200]
at org.apache.http.nio.pool.RouteSpecificPool.timeout(RouteSpecificPool.java:169)
at org.apache.http.nio.pool.AbstractNIOConnPool.requestTimeout(AbstractNIOConnPool.java:632)
at org.apache.http.nio.pool.AbstractNIOConnPool$InternalSessionRequestCallback.timeout(AbstractNIOConnPool.java:898)
at org.apache.http.impl.nio.reactor.SessionRequestImpl.timeout(SessionRequestImpl.java:198)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processTimeouts(DefaultConnectingIOReactor.java:213)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:158)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:351)
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:221)
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
at java.base/java.lang.Thread.run(Unknown Source)
This is elasticSearch.yml:
version: '2'
services:
  searchservice-elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.0
    # volumes:
    #   - ~/volumes/jhipster/SearchService/elasticsearch/:/usr/share/elasticsearch/data/
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - 'ES_JAVA_OPTS=-Xms1024m -Xmx1024m'
      - 'discovery.type=single-node'
This is searchService.yml:
version: '2'
services:
  food-search-service:
    image: altshiftcreative/food-app-search-service:v1.6
    environment:
      # - _JAVA_OPTIONS=-Xmx512m -Xms256m
      - SPRING_PROFILES_ACTIVE=prod,swagger
      - MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED=false
      - EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/eureka
      - SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/config
      - SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_OIDC_ISSUER_URI=https://shopbia.shop/auth/realms/jhipster
      - SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_OIDC_CLIENT_ID=internal
      - SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_OIDC_CLIENT_SECRET=internal
      - SPRING_DATA_JEST_URI=http://206.189.178.228:9200
      - SPRING_ELASTICSEARCH_REST_URIS=http://0.0.0.0:9200
      - xpack.security.http.ssl.enabled=false
      - elasticsearch.host=http://206.189.178.228:9200
      # - JHIPSTER_SLEEP=30 # gives time for other services to boot before the application
      - KAFKA_BOOTSTRAPSERVERS=kafka:9092
    ports:
      - 8088:8088
    networks:
      - food_default
networks:
  food_default:
    external: true
And this is the Elasticsearch configuration inside my project:
public class ElasticsearchConfiguration extends AbstractElasticsearchConfiguration {

    @Override
    @Bean
    public RestHighLevelClient elasticsearchClient() {
        final ClientConfiguration clientConfiguration =
            ClientConfiguration
                .builder()
                .connectedTo("206.189.178.228:9200")
                .build();
        return RestClients.create(clientConfiguration).rest();
    }
}
And this is application-prod.yml:
spring:
  data:
    jest:
      uri: http://206.189.178.228:9200
  elasticsearch:
    rest:
      uris: http://206.189.178.228:9200
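One detail worth noting (an observation, not a confirmed fix): the Java configuration hardcodes the IP, while the compose file sets SPRING_ELASTICSEARCH_REST_URIS to http://0.0.0.0:9200, so the two can disagree per environment. A sketch of reading the endpoint from configuration instead; the property key mirrors application-prod.yml above, and the default value is an assumption:
// Hedged sketch: inject the Elasticsearch endpoint instead of hardcoding it.
// The property key mirrors application-prod.yml; the default is made up.
@Configuration
public class ElasticsearchConfiguration extends AbstractElasticsearchConfiguration {

    @Value("${spring.elasticsearch.rest.uris:localhost:9200}")
    private String elasticsearchUri;

    @Override
    @Bean
    public RestHighLevelClient elasticsearchClient() {
        // ClientConfiguration expects "host:port" without a scheme
        String hostAndPort = elasticsearchUri.replaceFirst("^https?://", "");
        return RestClients.create(
                ClientConfiguration.builder().connectedTo(hostAndPort).build()).rest();
    }
}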

Kafka SASL_PLAINTEXT / SCRAM fails in Spring Boot consumer

I am trying Kafka authentication with SASL_PLAINTEXT / SCRAM, but authentication fails in Spring Boot.
When I switch to SASL_PLAINTEXT / PLAIN it works, but with SCRAM authentication fails for both SHA-512 and SHA-256.
I have tried many different things, but it's not working. How can I fix it?
Broker log:
broker1 | [2020-12-31 02:57:37,831] INFO [SocketServer brokerId=1] Failed authentication with /172.29.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512) (org.apache.kafka.common.network.Selector)
broker2 | [2020-12-31 02:57:37,891] INFO [SocketServer brokerId=2] Failed authentication with /172.29.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512) (org.apache.kafka.common.network.Selector)
Spring Boot log:
2020-12-31 11:57:37.438 INFO 82416 --- [ restartedMain] o.a.k.c.s.authenticator.AbstractLogin : Successfully logged in.
2020-12-31 11:57:37.497 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.6.0
2020-12-31 11:57:37.499 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 62abe01bee039651
2020-12-31 11:57:37.499 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1609383457495
2020-12-31 11:57:37.502 INFO 82416 --- [ restartedMain] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Subscribed to topic(s): test
2020-12-31 11:57:37.508 INFO 82416 --- [ restartedMain] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService
2020-12-31 11:57:37.528 INFO 82416 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2020-12-31 11:57:37.546 INFO 82416 --- [ restartedMain] i.m.k.p.KafkaProducerScramApplication : Started KafkaProducerScramApplication in 2.325 seconds (JVM running for 3.263)
2020-12-31 11:57:37.833 INFO 82416 --- [ntainer#0-0-C-1] o.apache.kafka.common.network.Selector : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Failed authentication with localhost/127.0.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512)
2020-12-31 11:57:37.836 ERROR 82416 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Connection to node -1 (localhost/127.0.0.1:9091) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512
2020-12-31 11:57:37.837 WARN 82416 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Bootstrap broker localhost:9091 (id: -1 rack: null) disconnected
2020-12-31 11:57:37.842 ERROR 82416 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Consumer exception
java.lang.IllegalStateException: This error handler cannot process 'org.apache.kafka.common.errors.SaslAuthenticationException's; no record information is available
at org.springframework.kafka.listener.SeekUtils.seekOrRecover(SeekUtils.java:151) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.SeekToCurrentErrorHandler.handle(SeekToCurrentErrorHandler.java:113) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.handleConsumerException(KafkaMessageListenerContainer.java:1425) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1122) ~[spring-kafka-2.6.4.jar:2.6.4]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[na:na]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512
My docker-compose.yml:
...
  zookeeper3:
    image: confluentinc/cp-zookeeper:6.0.1
    hostname: zookeeper3
    container_name: zookeeper3
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper3:2183
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://zookeeper:2183
      ZOOKEEPER_CLIENT_PORT: 2183
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 3
      KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/secrets/sasl/zookeeper_jaas.conf \
        -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider \
        -Dzookeeper.authProvider.2=org.apache.zookeeper.server.auth.DigestAuthenticationProvider \
        -Dquorum.auth.enableSasl=true \
        -Dquorum.auth.learnerRequireSasl=true \
        -Dquorum.auth.serverRequireSasl=true \
        -Dquorum.auth.learner.saslLoginContext=QuorumLearner \
        -Dquorum.auth.server.saslLoginContext=QuorumServer \
        -Dquorum.cnxn.threads.size=20 \
        -DrequireClientAuthScheme=sasl"
    volumes:
      - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
  broker1:
    image: confluentinc/cp-kafka:6.0.1
    hostname: broker1
    container_name: broker1
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - "9091:9091"
      - "9101:9101"
      - "29091:29091"
    expose:
      - "29090"
    environment:
      KAFKA_OPTS: "-Dzookeeper.sasl.client=true -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf"
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper1:2181,zookeeper2:2182,zookeeper3:2183'
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT,SASL_PLAINHOST:SASL_PLAINTEXT
      KAFKA_LISTENERS: INSIDE://:29090,OUTSIDE://:29091,SASL_PLAINHOST://:9091
      KAFKA_ADVERTISED_LISTENERS: INSIDE://broker1:29090,OUTSIDE://localhost:29091,SASL_PLAINHOST://localhost:9091
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_SECURITY_INTER_BROKER_PROTOCAL: SASL_PLAINTEXT
      KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAINTEXT
    volumes:
      - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
  broker2:
    image: confluentinc/cp-kafka:6.0.1
    hostname: broker2
    container_name: broker2
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - "9092:9092"
      - "9102:9102"
      - "29092:29092"
    expose:
      - "29090"
    environment:
      KAFKA_OPTS: "-Dzookeeper.sasl.client=true -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf"
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper1:2181,zookeeper2:2182,zookeeper3:2183'
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT,SASL_PLAINHOST:SASL_PLAINTEXT
      KAFKA_LISTENERS: INSIDE://:29090,OUTSIDE://:29092,SASL_PLAINHOST://:9092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://broker2:29090,OUTSIDE://localhost:29092,SASL_PLAINHOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9102
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_SECURITY_INTER_BROKER_PROTOCAL: SASL_PLAINTEXT
      KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAINTEXT
    volumes:
      - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
kafka_server_jaas.conf:
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="admin"
  password="password"
  user_admin="password"
  user_client="password";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="password";
};
KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="client"
  password="password";
};
zookeeper_jaas.conf:
Server {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  user_admin="password";
};
QuorumServer {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_admin="password";
};
QuorumLearner {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="admin"
  password="password";
};
ConsumerConfig.java:
private static final String BOOTSTRAP_ADDRESS = "localhost:9091,localhost:9092";
private static final String JAAS_TEMPLATE =
        "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"%s\" password=\"%s\";";

public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    String jaasCfg = String.format(JAAS_TEMPLATE, "client", "password");
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_ADDRESS);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "1000");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "Test-Consumer");
    props.put("sasl.jaas.config", jaasCfg);
    props.put("sasl.mechanism", "SCRAM-SHA-512");
    props.put("security.protocol", "SASL_PLAINTEXT");
    return props;
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
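For completeness, a listener that would run on the factory above might look like this (a hypothetical sketch; the topic "test" and group "Test-Consumer" are taken from the logs and config in the question):
// Hypothetical listener wired to the kafkaListenerContainerFactory above.
// Topic "test" and group "Test-Consumer" come from the question's logs/config.
@KafkaListener(topics = "test", groupId = "Test-Consumer")
public void listen(String message) {
    System.out.println("Received: " + message);
}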
Solved. The problem was that I hadn't added the user credentials to ZooKeeper. Adding this service fixed it:
zookeeper-add-kafka-users:
  image: confluentinc/cp-kafka:6.0.1
  container_name: "zookeeper-add-kafka-users"
  depends_on:
    - zookeeper1
    - zookeeper2
    - zookeeper3
  command: "bash -c 'echo Waiting for Zookeeper to be ready... && \
    cub zk-ready zookeeper1:2181 120 && \
    cub zk-ready zookeeper2:2182 120 && \
    cub zk-ready zookeeper3:2183 120 && \
    kafka-configs --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name admin && \
    kafka-configs --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name client '"
  environment:
    KAFKA_BROKER_ID: ignored
    KAFKA_ZOOKEEPER_CONNECT: ignored
    KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf
  volumes:
    - /home/mysend/dev/docker/kafka/sasl:/etc/kafka/secrets/sasl
If you are not using Docker, you can add the SCRAM credentials with this command:
bin/kafka-configs --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin

Pass URL from K8s ConfigMap to Spring Boot Application

I'm quite new to Kubernetes and started using Minikube to experiment with K8s.
Explanation
This is the (very simple) scenario I want to realize:
PrintClient provides an HTTP endpoint: <url>/<stringToPrint>
PrintClient forwards <stringToPrint> to PrintServer
PrintServer just prints <stringToPrint> to the console
The scenario is meant as a proof of concept and will be the basis for an evaluation at the company I'm working for; it therefore needs to accomplish the following tasks:
both applications run inside the Minikube cluster
PrintClient should have an ExternalService
PrintServer should have an InternalService
the URL for PrintServer should come from a ConfigMap using PrintServer's InternalService
Code
Here are the relevant files for my scenario.
First, the code of both applications:
PrintClient
PrintClientApplication.java
@SpringBootApplication
public class PrintClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(PrintClientApplication.class, args);
    }

    @Bean
    public RestTemplate getRestTemplate(RestTemplateBuilder builder) {
        return builder.build();
    }
}
Api.java
@RestController
public class Api {

    @Autowired
    private RestTemplate rs;

    @Value("${PRINT_SERVER_URL}")
    private String printServerUrl;

    @GetMapping("/{string}")
    public void print(@PathVariable("string") String string) {
        System.out.println(printServerUrl);
        rs.getForObject(printServerUrl + string, String.class);
    }
}
application.properties
PRINT_SERVER_URL=thisIsJustAPlaceholder
Dockerfile
FROM java:8-jdk-alpine
COPY /target/print_client-0.0.1-SNAPSHOT.jar /usr/app/
ENV PRINT_SERVER_URL=blah
WORKDIR /usr/app/
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "print_client-0.0.1-SNAPSHOT.jar", "PRINT_SERVER_URL=${PRINT_SERVER_URL}"]
deployment+service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: print-client-deployment
spec:
  selector:
    matchLabels:
      app: print-client-label
  replicas: 1
  template:
    metadata:
      labels:
        app: print-client-label
    spec:
      containers:
        - name: print-client
          image: patrickshaikh/print_client:1
          ports:
            - containerPort: 8080
          env:
            - name: PRINT_SERVER_URL
              valueFrom:
                configMapKeyRef:
                  name: print-client-configmap
                  key: print_server_url
---
apiVersion: v1
kind: Service
metadata:
  name: print-client-service
spec:
  selector:
    app: print-client-label
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30001
configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: print-client-configmap
data:
  print_server_url: print-server-service
PrintServer
Api.java
@RestController
public class Api {

    @GetMapping("/{string}")
    public void hello(@PathVariable("string") String string) {
        System.out.println(string);
    }
}
deployment+service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: print-server-deployment
spec:
  selector:
    matchLabels:
      app: print-server
  replicas: 1
  template:
    metadata:
      labels:
        app: print-server
    spec:
      containers:
        - name: print-server
          image: patrickshaikh/print_server:1
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: print-server-service
spec:
  selector:
    app: print-server
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30000
(It is an ExternalService for testing purposes, but it will be internal later.)
I already checked that CoreDNS is running.
Misbehaviour
I used the name of the print-server-service in the ConfigMap, but instead of resolving it to a URL before passing the value to the container on initialization, it is treated as a plain string (during a test iteration I printed the URL to the console as well).
Using the IP of PrintServer's InternalService results in a 404, while using the IP of its ExternalService works flawlessly.
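For context, a ConfigMap value is passed to the container verbatim; nothing in Kubernetes resolves it to an IP at injection time. Cluster DNS resolves the service name only when the HTTP request is made, so the value typically needs to be a complete URL. A minimal sketch of how the injected value is consumed, assuming the Service definitions above (the URL shape with scheme and port 8080 is an assumption):
// Sketch: the env var arrives as plain text, e.g. "http://print-server-service:8080/".
// Cluster DNS resolves "print-server-service" only when the request is sent.
@Value("${PRINT_SERVER_URL}")
private String printServerUrl;

public void forward(String string) {
    // DNS lookup of the service name happens here, inside the cluster
    rs.getForObject(printServerUrl + string, String.class);
}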
Questions
Why can't it resolve the reference to the print-server-service to its internal IP?
What would be the proper way of passing URLs to a container using service names?
Is there anything else in terms of "best practice" in my scenario I could adapt?
Thanks in advance

Redis & Spring Boot integration with K8S error

I have the following Dockerfile:
FROM openjdk:8-jdk-alpine
ENV PORT 8094
EXPOSE 8094
RUN mkdir -p /app/
COPY build/libs/fqdn-cache-service.jar /app/fqdn-cache-service.jar
WORKDIR /build
ENTRYPOINT [ "sh" "-c", "java -jar /app/fqdn-cache-service.jar" ]
docker-compose.yaml file:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: fqdn-cache-service
    ports:
      - "8094:8094"
    links:
      - "db:redis"
  db:
    image: "redis:alpine"
    # hostname: redis
    ports:
      - "6378:6378"
deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fqdn-cache-service
spec:
  selector:
    matchLabels:
      run: spike
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        run: spike
    spec:
      containers:
        - name: fqdn-cache-service
          imagePullPolicy: Never
          image: fqdn-cache-service:latest
          ports:
            - containerPort: 8094
              protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      run: spike
  replicas: 1
  template:
    metadata:
      labels:
        run: spike
    spec:
      hostname: redis
      containers:
        - name: redis
          image: redis:alpine
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: fqdn-cache-service
  labels:
    run: spike
spec:
  type: NodePort
  ports:
    - port: 8094
      nodePort: 30001
  selector:
    run: spike
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    run: spike
    app: redis
spec:
  type: NodePort
  ports:
    - port: 6379
      nodePort: 30002
  selector:
    run: spike
The cluster IP is 127.0.0.1.
I'm using microk8s on Ubuntu.
If I issue a GET by ID (127.0.0.1/webapi/users/1), I get the error:
Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
The same code works as a regular Java application with Redis, and as dockerized Spring Boot with Redis.
Any idea why this happens?
This is the Spring Boot configuration:
@Configuration
public class ApplicationConfig {

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.setHostName("127.0.0.1");
        factory.setPort(30001);
        factory.setUsePool(true);
        return factory;
    }

    @Bean
    RedisTemplate redisTemplate() {
        RedisTemplate<String, FqdnMapping> redisTemplate = new RedisTemplate<String, FqdnMapping>();
        redisTemplate.setConnectionFactory(jedisConnectionFactory());
        return redisTemplate;
    }
}
The issue also happens if the hostname is localhost and/or the port is 6379...
Thanks!
When you're running in a container, 127.0.0.1 usually refers to the container itself, not to the host the container is running on. If you're trying to connect to a service, try using its name and port: "redis" on port 6379 and "fqdn-cache-service" on 8094.
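Following that advice, the connection factory from the question could be pointed at the Service name instead (a sketch; the names come from the manifests above):
// Hedged sketch: connect to the "redis" Service by name on its service port.
// Inside the cluster there is no need to go through the NodePorts (30001/30002).
@Bean
JedisConnectionFactory jedisConnectionFactory() {
    RedisStandaloneConfiguration config = new RedisStandaloneConfiguration("redis", 6379);
    return new JedisConnectionFactory(config);
}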
