Spring data cassandra - error while opening new channel - spring-boot

I have a problem with Cassandra's connection with Spring Data. When Cassandra is running locally I have no problem connecting, however when I run my Spring Boot app in k8s with an external Cassandra I am stuck on this WARN:
2020-07-24 10:26:32.398 WARN 6 --- [ s0-admin-0] c.d.o.d.internal.core.pool.ChannelPool : [s0|/127.0.0.1:9042] Error while opening new channel (ConnectionInitException: [s0|connecting...] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.7.2, CLIENT_ID=9679ee85-ff39-45b6-8573-62a8d827ec9e}): failed to send request (java.nio.channels.ClosedChannelException))
I don't understand why in the log I have [s0|/127.0.0.1:9042] instead of the IP of my contact points.
Spring configuration:
spring:
  data:
    cassandra:
      keyspace-name: event_store
      local-datacenter: datacenter1
      contact-points: host1:9042,host2:9042
Also, this WARN does not prevent Spring Boot from starting; however, if I execute a query in a service I get this error:
{ error: "Query; CQL [com.datastax.oss.driver.internal.core.cql.DefaultSimpleStatement@9463dccc]; No node was available to execute the query; nested exception is com.datastax.oss.driver.api.core.NoNodeAvailableException: No node was available to execute the query" }

Option 1: test your yml file like this. (Have you tried with the IP addresses?)
spring:
  data:
    cassandra:
      keyspace-name: event_store
      local-datacenter: datacenter1
      port: 9042
      contact-points: host1,host2
      username: cassandra
      password: cassandra
Option 2: Create new properties in your yml and then a configuration class:
cassandra:
  database:
    keyspace-name: event_store
    contact-points: host1, host2
    port: 9042
    username: cassandra
    password: cassandra
@Configuration
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Value("${cassandra.database.keyspace-name}")
    private String keySpace;

    @Value("${cassandra.database.contact-points}")
    private String contactPoints;

    @Value("${cassandra.database.port}")
    private int port;

    @Value("${cassandra.database.username}")
    private String userName;

    @Value("${cassandra.database.password}")
    private String password;

    @Override
    protected String getKeyspaceName() {
        return keySpace;
    }

    @Bean
    public CassandraMappingContext cassandraMapping() throws ClassNotFoundException {
        CassandraMappingContext context = new CassandraMappingContext();
        context.setUserTypeResolver(new SimpleUserTypeResolver(cluster().getObject(), keySpace));
        return context;
    }

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = super.cluster();
        cluster.setUsername(userName);
        cluster.setPassword(password);
        cluster.setContactPoints(contactPoints);
        cluster.setPort(port);
        return cluster;
    }

    @Override
    protected boolean getMetricsEnabled() {
        return false;
    }
}
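Note that the question's log shows DataStax driver 4.7.2, where the Cluster-based configuration above no longer exists. As a minimal sketch, assuming Spring Boot 2.3+ (which auto-configures the 4.x driver) and using the host names, datacenter, and keyspace from the question as placeholders, the contact points can also be forced programmatically:

import java.net.InetSocketAddress;

import org.springframework.boot.autoconfigure.cassandra.CqlSessionBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch only: when no contact points are applied, the 4.x driver falls back to
// 127.0.0.1:9042, which matches the [s0|/127.0.0.1:9042] seen in the WARN above.
@Configuration
public class CassandraSessionConfig {

    @Bean
    public CqlSessionBuilderCustomizer sessionBuilderCustomizer() {
        return builder -> builder
                .addContactPoint(new InetSocketAddress("host1", 9042)) // placeholder hosts
                .addContactPoint(new InetSocketAddress("host2", 9042))
                .withLocalDatacenter("datacenter1")
                .withKeyspace("event_store");
    }
}

If the spring.data.cassandra.* properties from the question are being picked up correctly, this customizer should not be necessary.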

Related

Connecting to Cassandra Cluster from Springboot application throws "AllNodesFailedException: Could not reach any contact point" Exception

The Spring Boot application is failing to start up because it cannot connect to the Cassandra contact points. The same configuration works with a localhost Cassandra setup, but not with the actual Cassandra cluster. The configuration class is given below.
@Configuration
@EnableCassandraRepositories(basePackages = { "xyz.abc" })
public class CassandraConfiguration extends AbstractCassandraConfiguration {

    @Value("${cassandra.contactpoints}")
    private String contactPoints;

    @Value("${cassandra.port}")
    private int port;

    @Value("${cassandra.keyspace}")
    private String keySpace;

    @Value("${cassandra.schema-action}")
    private String schemaAction;

    @Override
    protected String getKeyspaceName() {
        return keySpace;
    }

    @Override
    protected String getContactPoints() {
        return contactPoints;
    }

    @Override
    protected int getPort() {
        return port;
    }

    @Override
    public SchemaAction getSchemaAction() {
        return SchemaAction.valueOf(schemaAction);
    }
}
The exception message you posted indicates that your application is not able to reach any of the nodes in your cluster.
You haven't provided details of your environment or how you've configured your cluster but the exception mostly relates to networking. You need to make sure that there is network connectivity between your app instances and all the [local] nodes of the Cassandra cluster you're connecting to. Check that there is no firewall blocking access to the CQL client port (default is 9042) on the nodes.
Also check that the Cassandra nodes are configured to listen for client connections on a publicly-reachable interface -- rpc_address in cassandra.yaml should not be set to a private address.
Use Linux utilities such as nc, telnet, netstat and lsof to assist with your troubleshooting. Cheers!
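If those utilities are not available inside the application container, a rough equivalent in Java (a sketch only; host1/host2 stand in for your actual contact points) is a plain TCP connect against the CQL port:

import java.net.InetSocketAddress;
import java.net.Socket;

// Minimal reachability probe: attempts a plain TCP connect to each contact point's CQL port.
public class CqlPortProbe {
    public static void main(String[] args) {
        String[] hosts = {"host1", "host2"}; // replace with your contact points
        int port = 9042;                     // default CQL client port
        for (String host : hosts) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 3000);
                System.out.println(host + ":" + port + " is reachable");
            } catch (Exception e) {
                System.out.println(host + ":" + port + " is NOT reachable: " + e);
            }
        }
    }
}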

Spring Boot app cannot connect to Redis Replica in Docker

I am experiencing a weird issue with Redis connectivity in Docker.
I have a simple Spring Boot application with a Master-Replica configuration, as well as a docker-compose config which I use to start the Redis Master and Redis Replica.
If I start Redis via docker-compose and the Spring Boot app as a simple Java process outside of Docker, everything works fine. It can successfully connect to both Master and Replica via localhost.
If I launch the Spring Boot app as a Docker container alongside the Redis containers, it can successfully connect to the Master but not the Replica. This means I can write to and read from the Master node, but when I try to read from the Replica I get the following error:
redis-sample-app_1 | Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: redis-replica-a/172.31.0.2:7001
redis-sample-app_1 | Caused by: java.net.ConnectException: Connection refused
redis-sample-app_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_302]
In the redis.conf I've changed the following to bind it to all network interfaces:
bind * -::*
protected-mode no
docker-compose.yml
version: "3"
services:
  redis-master:
    image: redis:alpine
    command: redis-server --include /usr/local/etc/redis/redis.conf
    volumes:
      - ./conf/redis-master.conf:/usr/local/etc/redis/redis.conf
    ports:
      - "6379:6379"
  redis-replica-a:
    image: redis:alpine
    command: redis-server --include /usr/local/etc/redis/redis.conf
    volumes:
      - ./conf/redis-replica.conf:/usr/local/etc/redis/redis.conf
    ports:
      - "7001:6379"
  redis-sample-app:
    image: docker.io/library/redis-sample:0.0.1-SNAPSHOT
    environment:
      - SPRING_REDIS_HOST=redis-master
      - SPRING_REDIS_PORT=6379
      - SPRING_REDIS_REPLICAS=redis-replica-a:7001
    ports:
      - "9080:8080"
    depends_on:
      - redis-master
      - redis-replica-a
application.yml
spring:
  redis:
    port: 6379
    host: localhost
    replicas: localhost:7001
RedisConfig.java
@Configuration
class RedisConfig {

    private static final Logger LOG = LoggerFactory.getLogger(RedisConfig.class);

    @Value("${spring.redis.replicas:}")
    private String replicasProperty;

    private final RedisProperties redisProperties;

    public RedisConfig(RedisProperties redisProperties) {
        this.redisProperties = redisProperties;
    }

    @Bean
    public StringRedisTemplate masterReplicaRedisTemplate(LettuceConnectionFactory connectionFactory) {
        return new StringRedisTemplate(connectionFactory);
    }

    @Bean
    public LettuceConnectionFactory masterReplicaLettuceConnectionFactory(LettuceClientConfiguration lettuceConfig) {
        LOG.info("Master: {}:{}", redisProperties.getHost(), redisProperties.getPort());
        LOG.info("Replica property: {}", replicasProperty);
        RedisStaticMasterReplicaConfiguration configuration =
                new RedisStaticMasterReplicaConfiguration(redisProperties.getHost(), redisProperties.getPort());
        if (StringUtils.hasText(replicasProperty)) {
            List<RedisURI> replicas = Arrays.stream(this.replicasProperty.split(",")).map(this::toRedisURI).collect(Collectors.toList());
            LOG.info("Replica nodes: {}", replicas);
            replicas.forEach(replica -> configuration.addNode(replica.getHost(), replica.getPort()));
        }
        return new LettuceConnectionFactory(configuration, lettuceConfig);
    }

    @Scope("prototype")
    @Bean(destroyMethod = "shutdown")
    ClientResources clientResources() {
        return DefaultClientResources.create();
    }

    @Scope("prototype")
    @Bean
    LettuceClientConfiguration lettuceConfig(ClientResources dcr) {
        ClientOptions options = ClientOptions.builder()
                .timeoutOptions(TimeoutOptions.builder().fixedTimeout(Duration.of(5, ChronoUnit.SECONDS)).build())
                .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
                .autoReconnect(true)
                .build();
        return LettuceClientConfiguration.builder()
                .readFrom(ReadFrom.REPLICA_PREFERRED)
                .clientOptions(options)
                .clientResources(dcr)
                .build();
    }

    @Bean
    StringRedisTemplate redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        return new StringRedisTemplate(redisConnectionFactory);
    }

    private RedisURI toRedisURI(String url) {
        String[] split = url.split(":");
        String host = split[0];
        int port;
        if (split.length > 1) {
            port = Integer.parseInt(split[1]);
        } else {
            port = 6379;
        }
        return RedisURI.create(host, port);
    }
}
Please advise how to continue with troubleshooting.
When running everything (Redis master, replica, and the Spring app) inside the Docker network, you should use port 6379 instead of 7001.
Port 7001 is the published port for connecting to the replica from outside Docker; here you are connecting from container to container, so the container port applies.
So change your environment variable to
SPRING_REDIS_REPLICAS=redis-replica-a:6379
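For illustration only (not part of the original answer), this is roughly what the static master/replica configuration resolves to inside the compose network once the container port is used; the class name DockerNetworkRedisConfig is a placeholder:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStaticMasterReplicaConfiguration;

// Sketch: inside the Docker network both nodes are reached on the container port 6379;
// the published host port 7001 only matters when connecting from outside Docker.
@Configuration
class DockerNetworkRedisConfig {

    @Bean
    RedisStaticMasterReplicaConfiguration masterReplicaConfiguration() {
        RedisStaticMasterReplicaConfiguration configuration =
                new RedisStaticMasterReplicaConfiguration("redis-master", 6379);
        configuration.addNode("redis-replica-a", 6379); // compose service name + container port
        return configuration;
    }
}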

Spring cloud stream Confluent KStream Avro Consume

I'm trying to consume Confluent Avro messages from a Kafka topic as a KStream with Spring Boot 2.0.
I was able to consume the message as a MessageChannel, but not as a KStream.
@Input(ORGANIZATION)
KStream<String, Organization> organizationMessageChannel();

@StreamListener
public void processOrganization(@Input(KstreamBinding.ORGANIZATION) KStream<String, Organization> organization) {
    log.info("Organization Received:" + organization);
}
Exception:
Exception in thread "pcs-7bb7b444-044d-41bb-945d-450c902337ff-StreamThread-3" org.apache.kafka.streams.errors.StreamsException: stream-thread [pcs-7bb7b444-044d-41bb-945d-450c902337ff-StreamThread-3] Failed to rebalance.
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:860)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:808)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
Caused by: org.apache.kafka.streams.errors.StreamsException: Failed to configure value serde class io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
    at org.apache.kafka.streams.StreamsConfig.defaultValueSerde(StreamsConfig.java:859)
    at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.<init>(AbstractProcessorContext.java:59)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.<init>(ProcessorContextImpl.java:42)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:134)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:404)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:365)
    at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:350)
    at org.apache.kafka.streams.processor.internals.TaskManager.addStreamTasks(TaskManager.java:137)
    at org.apache.kafka.streams.processor.internals.TaskManager.createTasks(TaskManager.java:88)
    at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:259)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:264)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:367)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:316)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:295)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1146)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1111)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:851)
    ... 3 more
Caused by: io.confluent.common.config.ConfigException: Missing required configuration "schema.registry.url" which has no default value.
    at io.confluent.common.config.ConfigDef.parse(ConfigDef.java:243)
    at io.confluent.common.config.AbstractConfig.<init>(AbstractConfig.java:78)
    at io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig.<init>(AbstractKafkaAvroSerDeConfig.java:61)
    at io.confluent.kafka.serializers.KafkaAvroSerializerConfig.<init>(KafkaAvroSerializerConfig.java:32)
    at io.confluent.kafka.serializers.KafkaAvroSerializer.configure(KafkaAvroSerializer.java:48)
    at io.confluent.kafka.streams.serdes.avro.SpecificAvroSerializer.configure(SpecificAvroSerializer.java:58)
    at io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde.configure(SpecificAvroSerde.java:107)
    at org.apache.kafka.streams.StreamsConfig.defaultValueSerde(StreamsConfig.java:855)
    ... 19 more
Based on the error, I think I'm missing the schema.registry.url configuration for Confluent.
I had a quick look at the sample here.
I'm a bit lost on how to do the same with Spring Cloud Stream using the StreamListener.
Does this need to be a separate configuration, or is there a way to configure the schema.registry.url that Confluent is looking for in application.yml itself?
Here is the code repo: https://github.com/naveenpop/springboot-kstream-confluent
Organization.avsc
{
  "namespace": "com.test.demo.avro",
  "type": "record",
  "name": "Organization",
  "fields": [
    {
      "name": "orgId",
      "type": "string",
      "default": "null"
    },
    {
      "name": "orgName",
      "type": "string",
      "default": "null"
    },
    {
      "name": "orgType",
      "type": "string",
      "default": "null"
    },
    {
      "name": "parentOrgId",
      "type": "string",
      "default": "null"
    }
  ]
}
DemokstreamApplication.java
@SpringBootApplication
@EnableSchemaRegistryClient
@Slf4j
public class DemokstreamApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemokstreamApplication.class, args);
    }

    @Component
    public static class organizationProducer implements ApplicationRunner {

        @Autowired
        private KafkaProducer kafkaProducer;

        @Override
        public void run(ApplicationArguments args) throws Exception {
            log.info("Starting: Run method");

            List<String> names = Arrays.asList("blue", "red", "green", "black", "white");
            List<String> pages = Arrays.asList("whiskey", "wine", "rum", "jin", "beer");

            Runnable runnable = () -> {
                String rPage = pages.get(new Random().nextInt(pages.size()));
                String rName = names.get(new Random().nextInt(names.size()));
                try {
                    this.kafkaProducer.produceOrganization(rPage, rName, "PARENT", "111");
                } catch (Exception e) {
                    log.info("Exception :" + e);
                }
            };
            Executors.newScheduledThreadPool(1).scheduleAtFixedRate(runnable, 1, 1, TimeUnit.SECONDS);
        }
    }
}
KafkaConfig.java
@Configuration
public class KafkaConfig {

    @Value("${spring.cloud.stream.schemaRegistryClient.endpoint}")
    private String endpoint;

    @Bean
    public SchemaRegistryClient confluentSchemaRegistryClient() {
        ConfluentSchemaRegistryClient client = new ConfluentSchemaRegistryClient();
        client.setEndpoint(endpoint);
        return client;
    }
}
KafkaConsumer.java
@Slf4j
@EnableBinding(KstreamBinding.class)
public class KafkaConsumer {

    @StreamListener
    public void processOrganization(@Input(KstreamBinding.ORGANIZATION_INPUT) KStream<String, Organization> organization) {
        organization.foreach((s, organization1) -> log.info("KStream Organization Received:" + organization1));
    }
}
KafkaProducer.java
@EnableBinding(KstreamBinding.class)
public class KafkaProducer {

    @Autowired
    private KstreamBinding kstreamBinding;

    public void produceOrganization(String orgId, String orgName, String orgType, String parentOrgId) {
        try {
            Organization organization = Organization.newBuilder()
                    .setOrgId(orgId)
                    .setOrgName(orgName)
                    .setOrgType(orgType)
                    .setParentOrgId(parentOrgId)
                    .build();

            kstreamBinding.organizationOutputMessageChannel()
                    .send(MessageBuilder.withPayload(organization)
                            .setHeader(KafkaHeaders.MESSAGE_KEY, orgName)
                            .build());
        } catch (Exception e) {
            log.error("Failed to produce Organization Message:" + e);
        }
    }
}
KstreamBinding.java
public interface KstreamBinding {

    String ORGANIZATION_INPUT = "organizationInput";
    String ORGANIZATION_OUTPUT = "organizationOutput";

    @Input(ORGANIZATION_INPUT)
    KStream<String, Organization> organizationInputMessageChannel();

    @Output(ORGANIZATION_OUTPUT)
    MessageChannel organizationOutputMessageChannel();
}
Update 1:
I applied the suggestion from dturanski here and the error vanished. However, I am still not able to consume the message as KStream<String, Organization>, and there is no error in the console.
Update 2:
I applied the suggestion from sobychacko here and the message is now consumed, but with empty values in the object.
I've made a commit to the GitHub sample to produce the message from Spring Boot itself and I am still getting empty values.
Thanks for your time on this issue.
The following implementation will not do what you are intending:
@StreamListener
public void processOrganization(@Input(KstreamBinding.ORGANIZATION) KStream<String, Organization> organization) {
    log.info("Organization Received:" + organization);
}
That log statement is only invoked once, at the bootstrap phase. In order for this to work, you need to invoke some operations on the received KStream and provide the logic there. For example, the following works, where I provide a lambda expression on the foreach method call:
@StreamListener
public void processOrganization(@Input(KstreamBinding.ORGANIZATION) KStream<String, Organization> organization) {
    organization.foreach((s, organization1) -> log.info("Organization Received:" + organization1));
}
You also have an issue in the configuration where you are wrongly assigning the Avro Serde for keys, when the key is actually a String. Change it like this:
default:
  key:
    serde: org.apache.kafka.common.serialization.Serdes$StringSerde
  value:
    serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
With these changes, I get the logging statement each time I send something to the topic. However, there is a problem in your sending Groovy script: I am not getting any actual data from your Organization domain object, but I will let you figure that out.
Update on the issue with the empty Organization domain object
This happens because you have a mix of serialization strategies going on. You are using Spring Cloud Stream's Avro message converters on the producer side, but on the Kafka Streams processor you are using the Confluent Avro Serdes. I just tried with Confluent's serializers all the way from the producer to the processor and I was able to see the Organization domain object on the outbound. Here is the modified configuration to make the serialization consistent.
spring:
  application:
    name: kstream
  cloud:
    stream:
      schemaRegistryClient:
        endpoint: http://localhost:8081
      schema:
        avro:
          schema-locations: classpath:avro/Organization.avsc
      bindings:
        organizationInput:
          destination: organization-updates
          group: demokstream.org
          consumer:
            useNativeDecoding: true
        organizationOutput:
          destination: organization-updates
          producer:
            useNativeEncoding: true
      kafka:
        bindings:
          organizationOutput:
            producer:
              configuration:
                key.serializer: org.apache.kafka.common.serialization.StringSerializer
                value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
                schema.registry.url: http://localhost:8081
        streams:
          binder:
            brokers: localhost
            configuration:
              schema.registry.url: http://localhost:8081
              commit:
                interval:
                  ms: 1000
              default:
                key:
                  serde: org.apache.kafka.common.serialization.Serdes$StringSerde
                value:
                  serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
You can also remove the KafkaConfig class as well as the EnableSchemaRegistryClient annotation from the main application class.
Try spring.cloud.stream.kafka.streams.binder.configuration.schema.registry.url: ...

Spring Cloud Stream Kafka Channel Not Working in Spring Boot Application

I have been attempting to get an inbound SubscribableChannel and outbound MessageChannel working in my Spring Boot application.
I have successfully set up the Kafka channel and tested it.
Furthermore, I have created a basic Spring Boot application that tests adding things to and receiving things from the channel.
The issue I am having is that when I put the equivalent code in the application it belongs in, the messages never seem to get sent or received. By debugging, it's hard to ascertain what's going on, but the only thing that looks different to me is the channel name. In the working implementation the channel name is like application.channel; in the non-working app it's localhost:8080/channel.
I was wondering if there is some Spring Boot configuration blocking or altering the creation of the channels into a different channel source?
Anyone had any similar issues?
application.yml
spring:
  datasource:
    url: jdbc:h2:mem:dpemail;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
    platform: h2
    username: hello
    password:
    driverClassName: org.h2.Driver
  jpa:
    properties:
      hibernate:
        show_sql: true
        use_sql_comments: true
        format_sql: true
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
      bindings:
        email-in:
          destination: email
          contentType: application/json
        email-out:
          destination: email
          contentType: application/json
Email
public class Email {

    private long timestamp;
    private String message;

    public long getTimestamp() {
        return timestamp;
    }

    public void setTimestamp(long timestamp) {
        this.timestamp = timestamp;
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}
Binding Config
@EnableBinding(EmailQueues.class)
public class EmailQueueConfiguration {
}
Interface
public interface EmailQueues {

    String INPUT = "email-in";
    String OUTPUT = "email-out";

    @Input(INPUT)
    SubscribableChannel inboundEmails();

    @Output(OUTPUT)
    MessageChannel outboundEmails();
}
Controller
@RestController
@RequestMapping("/queue")
public class EmailQueueController {

    private EmailQueues emailQueues;

    @Autowired
    public EmailQueueController(EmailQueues emailQueues) {
        this.emailQueues = emailQueues;
    }

    @RequestMapping(value = "sendEmail", method = POST)
    @ResponseStatus(ACCEPTED)
    public void sendToQueue() {
        MessageChannel messageChannel = emailQueues.outboundEmails();
        Email email = new Email();
        email.setMessage("hello world: " + System.currentTimeMillis());
        email.setTimestamp(System.currentTimeMillis());
        messageChannel.send(MessageBuilder.withPayload(email).setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON).build());
    }

    @StreamListener(EmailQueues.INPUT)
    public void handleEmail(@Payload Email email) {
        System.out.println("received: " + email.getMessage());
    }
}
I'm not sure if one of the inherited configuration projects using Spring Cloud or Spring Cloud Sleuth might be preventing it from working, but even when I remove them it still doesn't work. Unlike in my application that does work with the above code, I never see the ConsumerConfig being configured, e.g.:
o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
    auto.commit.interval.ms = 100
    auto.offset.reset = latest
    bootstrap.servers = [localhost:9092]
    check.crcs = true
    client.id = consumer-2
    connections.max.idle.ms = 540000
    enable.auto.commit = false
    exclude.internal.topics = true
(This configuration is what I see in my basic Spring Boot application when running the above code, and there the code works, writing to and reading from the Kafka channel.)
I assume there is some other Spring Boot configuration from one of the libraries I'm using that creates a different type of channel; I just cannot find what that configuration is.
What you posted contains a lot of unrelated configuration, so it is hard to determine if anything gets in the way. Also, when you say "..it appears that the messages never get sent or received..", are there any exceptions in the logs? Please also state the version of Kafka you're using as well as the Spring Cloud Stream version.
Now, I did try to reproduce it based on your code (after cleaning up a bit to only leave relevant parts) and was able to successfully send/receive.
My Kafka version is 0.11 and Spring Cloud Stream 2.0.0.
Here is the relevant code:
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
      bindings:
        email-in:
          destination: email
        email-out:
          destination: email
@SpringBootApplication
@EnableBinding(KafkaQuestionSoApplication.EmailQueues.class)
public class KafkaQuestionSoApplication {

    public static void main(String[] args) {
        SpringApplication.run(KafkaQuestionSoApplication.class, args);
    }

    @Bean
    public ApplicationRunner runner(EmailQueues emailQueues) {
        return new ApplicationRunner() {
            @Override
            public void run(ApplicationArguments args) throws Exception {
                emailQueues.outboundEmails().send(new GenericMessage<String>("Hello"));
            }
        };
    }

    @StreamListener(EmailQueues.INPUT)
    public void handleEmail(String payload) {
        System.out.println("received: " + payload);
    }

    public interface EmailQueues {

        String INPUT = "email-in";
        String OUTPUT = "email-out";

        @Input(INPUT)
        SubscribableChannel inboundEmails();

        @Output(OUTPUT)
        MessageChannel outboundEmails();
    }
}
Okay, so after a lot of debugging... I discovered that something is creating a Test Support Binder (how, I don't know yet), which is obviously used so that messages are not added to a real channel.
After adding
@SpringBootApplication(exclude = TestSupportBinderAutoConfiguration.class)
the Kafka channel configuration works and messages are being added. It would be interesting to know what on earth is setting up this test support binder... I'll find that sucker eventually.
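A sketch of the exclusion in context; the class name EmailApplication is a placeholder, and the note about spring-cloud-stream-test-support is an assumption rather than something confirmed in the thread:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.test.binder.TestSupportBinderAutoConfiguration;

// Assumption: the TestSupportBinder is typically auto-configured when the
// spring-cloud-stream-test-support artifact ends up on the runtime classpath
// instead of test scope, so moving that dependency is an alternative to this exclusion.
@SpringBootApplication(exclude = TestSupportBinderAutoConfiguration.class)
public class EmailApplication {

    public static void main(String[] args) {
        SpringApplication.run(EmailApplication.class, args);
    }
}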

Spring Boot elasticsearch produce warnings: Transport response handler not found of id

I set up a web app with org.springframework.boot:spring-boot-starter-data-elasticsearch. Everything works well - I can populate indexes with my standalone Elasticsearch 5. But I continue to receive some weird warnings:
2018-05-08 03:07:57.940 WARN 32053 --- [ient_boss][T#7]] o.e.transport.TransportService : Transport response handler not found of id [5]
2018-05-08 03:08:02.949 WARN 32053 --- [ient_boss][T#8]] o.e.transport.TransportService : Transport response handler not found of id [7]
2018-05-08 03:08:07.958 WARN 32053 --- [ient_boss][T#1]] o.e.transport.TransportService : Transport response handler not found of id [9]
2018-05-08 03:08:12.970 WARN 32053 --- [ient_boss][T#2]] o.e.transport.TransportService : Transport response handler not found of id [11]
...
Simple app to reproduce:
@SpringBootApplication
public class App {

    @Configuration
    @EnableElasticsearchRepositories(basePackages = "com.test")
    public class EsConfig {

        @Value("${elasticsearch.host}")
        private String esHost;

        @Value("${elasticsearch.port}")
        private int esPort;

        @Value("${elasticsearch.clustername}")
        private String esClusterName;

        @Bean
        public TransportClient client() throws Exception {
            Settings esSettings = Settings.builder().put("cluster.name", esClusterName).build();
            InetSocketTransportAddress socketAddress = new InetSocketTransportAddress(
                    InetAddress.getByName(esHost), esPort);
            return new PreBuiltTransportClient(esSettings).addTransportAddress(socketAddress);
        }

        @Bean
        public ElasticsearchOperations elasticsearchTemplate(Client client) throws Exception {
            return new ElasticsearchTemplate(client);
        }
    }

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}
My compose file for ES
version: "2.3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.8
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - xpack.security.enabled=false
      - ES_JAVA_OPTS=-Xms700m -Xmx700m
      - PATH_LOGS="/tmp/el-log"
      - cluster.name=dou
    cpu_shares: 1024
    mem_limit: 1024MB
As a transitive project dependency I have org.elasticsearch.client:transport:5.6.8, so the versions of the ES instance and the client library look the same.
So, what does this warning mean and how should we deal with it?
