Spring Boot cannot connect to Elasticsearch image on server - spring-boot

I have a Spring Boot project that uses Elasticsearch, and I run Elasticsearch from a Docker image.
I think the backend code cannot connect to Elasticsearch: when I run the production build on the server, I get the following error:
Caused by: org.springframework.data.elasticsearch.ElasticsearchException: Error while for indexExists request: org.elasticsearch.action.admin.indices.get.GetIndexRequest#446b74b9
at org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate.indexExists(ElasticsearchRestTemplate.java:842)
at org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate.createIndexIfNotCreated(ElasticsearchRestTemplate.java:1219)
at org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate.createIndex(ElasticsearchRestTemplate.java:251)
at org.springframework.data.elasticsearch.repository.support.AbstractElasticsearchRepository.createIndex(AbstractElasticsearchRepository.java:99)
at org.springframework.data.elasticsearch.repository.support.AbstractElasticsearchRepository.<init>(AbstractElasticsearchRepository.java:89)
at org.springframework.data.elasticsearch.repository.support.SimpleElasticsearchRepository.<init>(SimpleElasticsearchRepository.java:39)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.base/java.lang.reflect.Constructor.newInstance(Unknown Source)
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:204)
... 69 common frames omitted
Caused by: java.net.ConnectException: Timeout connecting to [/206.189.178.228:9200]
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:959)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:233)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1764)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1734)
at org.elasticsearch.client.IndicesClient.exists(IndicesClient.java:1109)
at org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate.indexExists(ElasticsearchRestTemplate.java:840)
... 79 common frames omitted
Caused by: java.net.ConnectException: Timeout connecting to [/206.189.178.228:9200]
at org.apache.http.nio.pool.RouteSpecificPool.timeout(RouteSpecificPool.java:169)
at org.apache.http.nio.pool.AbstractNIOConnPool.requestTimeout(AbstractNIOConnPool.java:632)
at org.apache.http.nio.pool.AbstractNIOConnPool$InternalSessionRequestCallback.timeout(AbstractNIOConnPool.java:898)
at org.apache.http.impl.nio.reactor.SessionRequestImpl.timeout(SessionRequestImpl.java:198)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processTimeouts(DefaultConnectingIOReactor.java:213)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:158)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:351)
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:221)
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
at java.base/java.lang.Thread.run(Unknown Source)
This is elasticSearch.yml:
version: '2'
services:
  searchservice-elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.0
    # volumes:
    #   - ~/volumes/jhipster/SearchService/elasticsearch/:/usr/share/elasticsearch/data/
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - 'ES_JAVA_OPTS=-Xms1024m -Xmx1024m'
      - 'discovery.type=single-node'
This is searchService.yml:
version: '2'
services:
  food-search-service:
    image: altshiftcreative/food-app-search-service:v1.6
    environment:
      # - _JAVA_OPTIONS=-Xmx512m -Xms256m
      - SPRING_PROFILES_ACTIVE=prod,swagger
      - MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED=false
      - EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/eureka
      - SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/config
      - SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_OIDC_ISSUER_URI=https://shopbia.shop/auth/realms/jhipster
      - SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_OIDC_CLIENT_ID=internal
      - SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_OIDC_CLIENT_SECRET=internal
      - SPRING_DATA_JEST_URI=http://206.189.178.228:9200
      - SPRING_ELASTICSEARCH_REST_URIS=http://0.0.0.0:9200
      - xpack.security.http.ssl.enabled=false
      - elasticsearch.host=http://206.189.178.228:9200
      # - JHIPSTER_SLEEP=30 # gives time for other services to boot before the application
      - KAFKA_BOOTSTRAPSERVERS=kafka:9092
    ports:
      - 8088:8088
    networks:
      - food_default
networks:
  food_default:
    external: true
And this is the Elasticsearch configuration inside my project:
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.context.annotation.Bean;
import org.springframework.data.elasticsearch.client.ClientConfiguration;
import org.springframework.data.elasticsearch.client.RestClients;
import org.springframework.data.elasticsearch.config.AbstractElasticsearchConfiguration;

public class ElasticsearchConfiguration extends AbstractElasticsearchConfiguration {

    @Override
    @Bean
    public RestHighLevelClient elasticsearchClient() {
        final ClientConfiguration clientConfiguration =
            ClientConfiguration
                .builder()
                .connectedTo("206.189.178.228:9200")
                .build();
        return RestClients.create(clientConfiguration).rest();
    }
}
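Since the failure is a connect timeout, it may also be worth ruling out an overly tight client timeout while debugging. A minimal sketch, assuming Spring Data Elasticsearch 3.2+ where the ClientConfiguration builder exposes explicit timeouts (the values here are illustrative, not from the original post):

import java.time.Duration;

// Hypothetical variant of the bean above with more generous timeouts.
final ClientConfiguration clientConfiguration =
    ClientConfiguration
        .builder()
        .connectedTo("206.189.178.228:9200")
        .withConnectTimeout(Duration.ofSeconds(10))
        .withSocketTimeout(Duration.ofSeconds(5))
        .build();

If the timeout persists regardless of the values, the port is most likely unreachable from where the service runs (firewall rules, or Elasticsearch bound to a different interface); a plain curl http://206.189.178.228:9200 from inside the food-search-service container would confirm that.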
And this is application-prod.yml:
spring:
  data:
    jest:
      uri: http://206.189.178.228:9200
  elasticsearch:
    rest:
      uris: http://206.189.178.228:9200

Related

Spring Boot docker-compose Redis: Cannot get Jedis connection

This may seem like a duplicate of "docker-compose with springboot and redis" and "docker-compose redis connection issue", but the solutions proposed there are not working in my case.
I am using Spring Boot, and the other services are unable to connect to the redis service.
Below is my docker compose file:
version: "3.7"
services:
...
users:
build: ./users
ports:
- "8081:8081"
networks:
- jiji-microservices-network
depends_on:
- registry
- gateway
- redis_cache
- postgresDB
- rabbitmq
links:
- redis_cache
- postgresDB
- rabbitmq
environment:
# SPRING_CACHE_TYPE: redis
# SPRING_REDIS_HOST: redis_cache
# SPRING_REDIS_PORT: 6379
# SPRING_REDIS_PASSWORD:
SPRING_DATASOURCE_URL: jdbc:postgresql://postgresDB:5432/jiji_users
SPRING_DATASOURCE_USERNAME: postgres
SPRING_DATASOURCE_PASSWORD: postgres
SPRING_JPA_HIBERNATE_DDL_AUTO: update
EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://registry:8090/eureka
postgresDB:
image: postgres
restart: unless-stopped
ports:
- "5432:5432"
networks:
- jiji-microservices-network
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
POSTGRES_DB: jiji_users
redis_cache:
image: redis:latest
restart: on-failure
command: ["redis-server","--bind","redis_cache","--port","6379"]
ports:
- "6379:6379"
networks:
- jiji-microservices-network
rabbitmq:
image: rabbitmq:management
ports:
- "5672:5672"
- "15672:15672"
networks:
- jiji-microservices-network
networks:
jiji-microservices-network:
driver: bridge
Below is my application.yml file:
...
cache:
  type: redis
redis:
  host: redis_cache
  port: 6379
  # cache-null-values: true
  time-to-live: 2592000 # 30 days
...
The error message I am getting:
users_1 | 2022-03-28 02:53:40.417 INFO 1 --- [nio-8081-exec-4] c.o.u.s.CustomAuthorizationFilter : /api/v1/users/getCode
users_1 | Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
docker ps shows that all the containers are up and running and that redis is listening on port 6379.
PS: It works fine without docker.
The issue was related to my Redis configuration. Fixed by adding the Redis host and port to the JedisConnectionFactory bean:
@Configuration
@Slf4j
public class RedisConfiguration {

    @Value("${spring.redis.host}")
    private String REDIS_HOST;

    @Value("${spring.redis.port}")
    private Integer REDIS_PORT;

    @Bean
    public JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration config = new RedisStandaloneConfiguration(REDIS_HOST, REDIS_PORT);
        return new JedisConnectionFactory(config);
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate() {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setEnableTransactionSupport(true);
        template.setConnectionFactory(jedisConnectionFactory());
        return template;
    }
}
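For a quick sanity check once the factory points at the right host, the template can be exercised directly; a minimal sketch (the class and key names here are made up for illustration, not from the original post):

// Illustrative smoke test for the RedisTemplate bean above.
@Component
public class RedisSmokeTest {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    public void run() {
        redisTemplate.opsForValue().set("greeting", "hello");        // SET greeting hello
        Object value = redisTemplate.opsForValue().get("greeting");  // GET greeting
    }
}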

Unable to connect to Redis; nested exception is io.lettuce.core.RedisConnectionException:

I'm new to Docker and I'm trying to run redis-server and my Spring Boot app, each in its own container.
I was able to hit Redis (running in a Docker container) just fine when I started the Spring Boot app locally, but when I put the Spring Boot app in a Docker container as well, I can no longer connect to Redis and I get:
Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect to 0.0.0.0:6397] with root cause
urlshortner |
urlshortner | java.net.ConnectException: Connection refused
urlshortner | at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:na]
urlshortner | at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779) ~[na:na]
urlshortner | at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330) ~[netty-transport-4.1.45.Final.jar!/:4.1.45.Final]
urlshortner | at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) ~[netty-transport-4.1.45.Final.jar!/:4.1.45.Final]
urlshortner | at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702) ~[netty-transport-4.1.45.Final.jar!/:4.1.45.Final]
urlshortner | at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) ~[netty-transport-4.1.45.Final.jar!/:4.1.45.Final]
urlshortner | at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) ~[netty-transport-4.1.45.Final.jar!/:4.1.45.Final]
urlshortner | at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-transport-4.1.45.Final.jar!/:4.1.45.Final]
urlshortner | at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.45.Final.jar!/:4.1.45.Final]
urlshortner | at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.45.Final.jar!/:4.1.45.Final]
urlshortner | at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.45.Final.jar!/:4.1.45.Final]
urlshortner | at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
I have tried the following: used docker-compose to get them onto the same network.
My docker-compose:
version: '3'
services:
  app:
    container_name: urlshortner
    image: docker-urlshortner:v1
    build: .
    links:
      - redis
    ports:
      - "10095:10095"
    volumes:
      - ~/docker/redis:/urlshortner/logs
  redis:
    container_name: myredis
    image: redis:v1
    build: ./redis
    hostname: localhost
    ports:
      - "6379:6379"
Dockerfile to start the Spring Boot app:
FROM adoptopenjdk/openjdk11
VOLUME /urlshortner
ARG JAR_FILE=target/Urlshortning-0.0.1-SNAPSHOT.jar
ADD ${JAR_FILE} urlshortning.jar
EXPOSE 10095
ENTRYPOINT ["java", "-jar", "/urlshortning.jar"]
Dockerfile for running Redis:
FROM redis
COPY redis.conf /redis/redis.conf
CMD [ "redis-server", "/redis/redis.conf" ]
I commented out "bind 127.0.0.1" and added "bind 0.0.0.0" in redis.conf, but I'm still getting the same error.
My Redis config in the Java app:
@Configuration
public class RedisConfig {

    private final String url;
    private final int port;
    private final String password;

    @Autowired
    private ObjectMapper objectMapper;

    public RedisConfig(@Value("${spring.redis.host}") String url, @Value("${spring.redis.port}") int port,
                       @Value("${spring.redis.password}") String password) {
        this.url = url;
        this.port = port;
        this.password = password;
    }

    /**
     * Redis configuration
     *
     * @return redisStandaloneConfiguration
     */
    @Bean
    public RedisStandaloneConfiguration redisStandaloneConfiguration() {
        RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration(url, port);
        redisStandaloneConfiguration.setPassword(password);
        return redisStandaloneConfiguration;
    }

    /**
     * Client options: reject requests while Redis is in a disconnected state,
     * and reconnect automatically when the Redis server comes back up
     *
     * @return client options
     */
    @Bean
    public ClientOptions clientOptions() {
        return ClientOptions.builder().disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
                .autoReconnect(true).build();
    }

    /**
     * Create a LettuceConnection with the Redis configuration and client options
     *
     * @param redisStandaloneConfiguration redisStandaloneConfiguration
     * @return RedisConnectionFactory
     */
    @Bean
    public RedisConnectionFactory connectionFactory(RedisStandaloneConfiguration redisStandaloneConfiguration) {
        LettuceClientConfiguration configuration = LettuceClientConfiguration.builder().clientOptions(clientOptions())
                .build();
        return new LettuceConnectionFactory(redisStandaloneConfiguration, configuration);
    }

    // Setting up the redis template object.
    @SuppressWarnings({ "rawtypes", "unchecked" })
    @Bean
    @ConditionalOnMissingBean(name = "redisTemplate")
    @Primary
    public RedisTemplate<String, Url> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Url.class);
        jackson2JsonRedisSerializer.setObjectMapper(objectMapper);
        RedisTemplate<String, Url> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(redisConnectionFactory);
        redisTemplate.setKeySerializer(new StringRedisSerializer());
        redisTemplate.setValueSerializer(jackson2JsonRedisSerializer);
        return redisTemplate;
    }
}
And my application.properties:
server.port=10095
redis.ttl=86400
spring.redis.host=localhost
spring.redis.port=6379
spring.redis.password=redisdb
I have also tried changing spring.redis.host from localhost to 0.0.0.0 and to 127.0.0.1, but when I hit the app from Postman I still get the error.
Both containers, Redis and the Spring Boot app, are running.
How do I resolve this problem? Any help is appreciated, thanks.
I changed the following, which made it work:
In application.properties:
spring.redis.host=localhost
to
spring.redis.host=redis
and in the docker-compose entry for redis:
hostname: localhost
to
hostname: redis
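Put together, the working pairing looks roughly like this (trimmed to the relevant lines; inside a compose network, containers reach each other by service/host name, never by localhost):

# application.properties
spring.redis.host=redis
spring.redis.port=6379

# docker-compose.yml (redis service)
redis:
  container_name: myredis
  hostname: redis
  ports:
    - "6379:6379"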

Kafka SASL_PLAINTEXT / SCRAM fails in Spring Boot consumer

I'm trying Kafka authentication with SASL_PLAINTEXT / SCRAM, but authentication fails in Spring Boot.
If I switch to SASL_PLAINTEXT / PLAIN it works, but with SCRAM authentication fails for both SHA-512 and SHA-256.
I've tried many different things, but nothing works. How can I fix it?
Broker log:
broker1 | [2020-12-31 02:57:37,831] INFO [SocketServer brokerId=1] Failed authentication with /172.29.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512) (org.apache.kafka.common.network.Selector)
broker2 | [2020-12-31 02:57:37,891] INFO [SocketServer brokerId=2] Failed authentication with /172.29.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512) (org.apache.kafka.common.network.Selector)
Spring Boot log:
2020-12-31 11:57:37.438 INFO 82416 --- [ restartedMain] o.a.k.c.s.authenticator.AbstractLogin : Successfully logged in.
2020-12-31 11:57:37.497 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.6.0
2020-12-31 11:57:37.499 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 62abe01bee039651
2020-12-31 11:57:37.499 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1609383457495
2020-12-31 11:57:37.502 INFO 82416 --- [ restartedMain] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Subscribed to topic(s): test
2020-12-31 11:57:37.508 INFO 82416 --- [ restartedMain] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService
2020-12-31 11:57:37.528 INFO 82416 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2020-12-31 11:57:37.546 INFO 82416 --- [ restartedMain] i.m.k.p.KafkaProducerScramApplication : Started KafkaProducerScramApplication in 2.325 seconds (JVM running for 3.263)
2020-12-31 11:57:37.833 INFO 82416 --- [ntainer#0-0-C-1] o.apache.kafka.common.network.Selector : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Failed authentication with localhost/127.0.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512)
2020-12-31 11:57:37.836 ERROR 82416 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Connection to node -1 (localhost/127.0.0.1:9091) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512
2020-12-31 11:57:37.837 WARN 82416 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Bootstrap broker localhost:9091 (id: -1 rack: null) disconnected
2020-12-31 11:57:37.842 ERROR 82416 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Consumer exception
java.lang.IllegalStateException: This error handler cannot process 'org.apache.kafka.common.errors.SaslAuthenticationException's; no record information is available
at org.springframework.kafka.listener.SeekUtils.seekOrRecover(SeekUtils.java:151) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.SeekToCurrentErrorHandler.handle(SeekToCurrentErrorHandler.java:113) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.handleConsumerException(KafkaMessageListenerContainer.java:1425) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1122) ~[spring-kafka-2.6.4.jar:2.6.4]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[na:na]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512
My docker-compose.yml:
...
...
  zookeeper3:
    image: confluentinc/cp-zookeeper:6.0.1
    hostname: zookeeper3
    container_name: zookeeper3
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper3:2183
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://zookeeper:2183
      ZOOKEEPER_CLIENT_PORT: 2183
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 3
      KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/secrets/sasl/zookeeper_jaas.conf \
        -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider \
        -Dzookeeper.authProvider.2=org.apache.zookeeper.server.auth.DigestAuthenticationProvider \
        -Dquorum.auth.enableSasl=true \
        -Dquorum.auth.learnerRequireSasl=true \
        -Dquorum.auth.serverRequireSasl=true \
        -Dquorum.auth.learner.saslLoginContext=QuorumLearner \
        -Dquorum.auth.server.saslLoginContext=QuorumServer \
        -Dquorum.cnxn.threads.size=20 \
        -DrequireClientAuthScheme=sasl"
    volumes:
      - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
  broker1:
    image: confluentinc/cp-kafka:6.0.1
    hostname: broker1
    container_name: broker1
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - "9091:9091"
      - "9101:9101"
      - "29091:29091"
    expose:
      - "29090"
    environment:
      KAFKA_OPTS: "-Dzookeeper.sasl.client=true -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf"
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper1:2181,zookeeper2:2182,zookeeper3:2183'
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT,SASL_PLAINHOST:SASL_PLAINTEXT
      KAFKA_LISTENERS: INSIDE://:29090,OUTSIDE://:29091,SASL_PLAINHOST://:9091
      KAFKA_ADVERTISED_LISTENERS: INSIDE://broker1:29090,OUTSIDE://localhost:29091,SASL_PLAINHOST://localhost:9091
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_SECURITY_INTER_BROKER_PROTOCAL: SASL_PLAINTEXT
      KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAINTEXT
    volumes:
      - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
  broker2:
    image: confluentinc/cp-kafka:6.0.1
    hostname: broker2
    container_name: broker2
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - "9092:9092"
      - "9102:9102"
      - "29092:29092"
    expose:
      - "29090"
    environment:
      KAFKA_OPTS: "-Dzookeeper.sasl.client=true -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf"
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper1:2181,zookeeper2:2182,zookeeper3:2183'
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT,SASL_PLAINHOST:SASL_PLAINTEXT
      KAFKA_LISTENERS: INSIDE://:29090,OUTSIDE://:29092,SASL_PLAINHOST://:9092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://broker2:29090,OUTSIDE://localhost:29092,SASL_PLAINHOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9102
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_SECURITY_INTER_BROKER_PROTOCAL: SASL_PLAINTEXT
      KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAINTEXT
    volumes:
      - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
kafka_server_jaas.conf:
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="admin"
  password="password"
  user_admin="password"
  user_client="password";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="password";
};
KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="client"
  password="password";
};
zookeeper_jaas.conf:
Server {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  user_admin="password";
};
QuorumServer {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_admin="password";
};
QuorumLearner {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="admin"
  password="password";
};
ConsumerConfig.java:
private static final String BOOTSTRAP_ADDRESS = "localhost:9091,localhost:9092";
private static final String JAAS_TEMPLATE = "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"%s\" password=\"%s\";";

public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    String jaasCfg = String.format(JAAS_TEMPLATE, "client", "password");
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_ADDRESS);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "1000");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "Test-Consumer");
    props.put("sasl.jaas.config", jaasCfg);
    props.put("sasl.mechanism", "SCRAM-SHA-512");
    props.put("security.protocol", "SASL_PLAINTEXT");
    return props;
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
Solved. It was because I hadn't added the user information in ZooKeeper.
Add this service:
zookeeper-add-kafka-users:
  image: confluentinc/cp-kafka:6.0.1
  container_name: "zookeeper-add-kafka-users"
  depends_on:
    - zookeeper1
    - zookeeper2
    - zookeeper3
  command: "bash -c 'echo Waiting for Zookeeper to be ready... && \
    cub zk-ready zookeeper1:2181 120 && \
    cub zk-ready zookeeper2:2182 120 && \
    cub zk-ready zookeeper3:2183 120 && \
    kafka-configs --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name admin && \
    kafka-configs --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name client '"
  environment:
    KAFKA_BROKER_ID: ignored
    KAFKA_ZOOKEEPER_CONNECT: ignored
    KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf
  volumes:
    - /home/mysend/dev/docker/kafka/sasl:/etc/kafka/secrets/sasl
If you don't use Docker, you can run the command directly:
bin/kafka-configs --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
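To verify the credentials actually landed in ZooKeeper, kafka-configs can also read them back with --describe (a sketch mirroring the command above; host and entity names are the ones from this example):

bin/kafka-configs --zookeeper localhost:2181 --describe --entity-type users --entity-name admin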

Kubernetes: Tomcat throws Exception when enabling proxy protocol

I am kind of lost right now. I set up a Kubernetes cluster, deployed a Spring Boot API and a LoadBalancer, which worked fine. Now I want to enable the proxy protocol on the LoadBalancer to preserve the real client IP, but once I do this my Spring Boot API always responds with a 400 Bad Request and an IllegalArgumentException is thrown.
Here is the short stack trace (I masked the IP addresses):
2020-09-29 20:05:58.382 INFO 1 --- [nio-8080-exec-1] o.apache.coyote.http11.Http11Processor : Error parsing HTTP request header
Note: further occurrences of HTTP request parsing errors will be logged at DEBUG level.
java.lang.IllegalArgumentException: Invalid character found in the HTTP protocol [255.255.255.253 255.255.255.254]
at org.apache.coyote.http11.Http11InputBuffer.parseRequestLine(Http11InputBuffer.java:560) ~[tomcat-embed-core-9.0.37.jar!/:9.0.37]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:260) ~[tomcat-embed-core-9.0.37.jar!/:9.0.37]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) ~[tomcat-embed-core-9.0.37.jar!/:9.0.37]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868) ~[tomcat-embed-core-9.0.37.jar!/:9.0.37]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1589) ~[tomcat-embed-core-9.0.37.jar!/:9.0.37]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat-embed-core-9.0.37.jar!/:9.0.37]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[na:na]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-9.0.37.jar!/:9.0.37]
at java.base/java.lang.Thread.run(Unknown Source) ~[na:na]
I am using the hcloud-cloud-controller-manager from Hetzner.
Here is my LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  labels:
    service: auth-service
  name: auth-service-service
  annotations:
    load-balancer.hetzner.cloud/name: "lb-backend"
    load-balancer.hetzner.cloud/health-check-port: "80"
    load-balancer.hetzner.cloud/uses-proxyprotocol: "true"
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
  selector:
    service: auth-service
  externalTrafficPolicy: Local
  type: LoadBalancer
Here is my Spring Config:
spring:
  datasource:
    platform: postgres
    url: ${DATABASE_CS}
    username: ${DATABASE_USERNAME}
    password: ${DATABASE_PASSWORD}
    driver-class-name: org.postgresql.Driver
  flyway:
    schemas: authservice
  jpa:
    show-sql: false
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQLDialect
        jdbc:
          lob:
            non_contextual_creation: true
    hibernate:
      ddl-auto: validate
security:
  jwt:
    secret-key: ${JWT_SECRET_KEY}
    expires: ${JWT_EXPIRES:300000}
mail:
  from: ${MAIL_FROM}
  fromName: ${MAIL_FROM_NAME}
  smtp:
    host: ${SMTP_HOST}
    username: ${SMTP_USERNAME}
    password: ${SMTP_PASSWORD}
    port: ${SMTP_PORT:25}
mjml:
  app-id: ${MJML_APP_ID}
  app-secret: ${MJML_SECRET_KEY}
stripe:
  keys:
    secret: ${STRIPE_SECRET_KEY}
    public: ${STRIPE_PUBLIC_KEY}
server:
  forward-headers-strategy: native
As you may have noticed I already tried to enable the forward-headers based on this issue.
Thanks for your help!
You enabled the PROXY protocol, which is not HTTP but a different protocol for tunneling TCP connections to downstream servers while keeping as much connection information as possible.
https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt
I am quite sure you want to disable this:
load-balancer.hetzner.cloud/uses-proxyprotocol: "true"
and instead rely on the forwarded headers to set the remote address to the correct value for the client.
To be honest, I am not aware that Tomcat supports the PROXY protocol. (Edit: Currently it does not, see https://bz.apache.org/bugzilla/show_bug.cgi?id=57830)
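Concretely, that would mean dropping the annotation from the Service manifest; a sketch based on the manifest above (the server.forward-headers-strategy: native setting already in the Spring config then takes care of X-Forwarded-For):

metadata:
  annotations:
    load-balancer.hetzner.cloud/name: "lb-backend"
    load-balancer.hetzner.cloud/health-check-port: "80"
    # load-balancer.hetzner.cloud/uses-proxyprotocol: "true"  # removed: Tomcat does not speak the PROXY protocol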

Redis & Spring Boot integration with K8S error

I have the following Dockerfile:
FROM openjdk:8-jdk-alpine
ENV PORT 8094
EXPOSE 8094
RUN mkdir -p /app/
COPY build/libs/fqdn-cache-service.jar /app/fqdn-cache-service.jar
WORKDIR /build
ENTRYPOINT [ "sh" "-c", "java -jar /app/fqdn-cache-service.jar" ]
docker-compose.yaml file:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: fqdn-cache-service
    ports:
      - "8094:8094"
    links:
      - "db:redis"
  db:
    image: "redis:alpine"
    # hostname: redis
    ports:
      - "6378:6378"
deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fqdn-cache-service
spec:
  selector:
    matchLabels:
      run: spike
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        run: spike
    spec:
      containers:
        - name: fqdn-cache-service
          imagePullPolicy: Never
          image: fqdn-cache-service:latest
          ports:
            - containerPort: 8094
              protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      run: spike
  replicas: 1
  template:
    metadata:
      labels:
        run: spike
    spec:
      hostname: redis
      containers:
        - name: redis
          image: redis:alpine
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: fqdn-cache-service
  labels:
    run: spike
spec:
  type: NodePort
  ports:
    - port: 8094
      nodePort: 30001
  selector:
    run: spike
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    run: spike
    app: redis
spec:
  type: NodePort
  ports:
    - port: 6379
      nodePort: 30002
  selector:
    run: spike
The cluster-info IP is 127.0.0.1, and I'm using microk8s on Ubuntu.
If I issue a GET by ID (127.0.0.1/webapi/users/1) I get the error:
Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
It works in a regular Java application with Redis, and in dockerized Spring Boot with Redis.
Any idea why this happens?
This is the Spring Boot configuration:
@Configuration
public class ApplicationConfig {

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.setHostName("127.0.0.1");
        factory.setPort(30001);
        factory.setUsePool(true);
        return factory;
    }

    @Bean
    RedisTemplate redisTemplate() {
        RedisTemplate<String, FqdnMapping> redisTemplate = new RedisTemplate<String, FqdnMapping>();
        redisTemplate.setConnectionFactory(jedisConnectionFactory());
        return redisTemplate;
    }
}
The issue also happens if the hostname is localhost and/or the port is 6379...
Thanks!
When you're running in a container, 127.0.0.1 usually refers to the container itself, not to the host the container is running on. If you're trying to connect to a service, try using its name and port: "redis" on port 6379 and "fqdn-cache-service" on 8094.
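Applied to the configuration above, that means pointing the factory at the redis Service's in-cluster DNS name and container port instead of 127.0.0.1 and the NodePort; a sketch reusing the bean from the question:

@Bean
JedisConnectionFactory jedisConnectionFactory() {
    JedisConnectionFactory factory = new JedisConnectionFactory();
    factory.setHostName("redis"); // the Kubernetes Service name, resolvable inside the cluster
    factory.setPort(6379);        // the Service port, not the NodePort
    factory.setUsePool(true);
    return factory;
}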
