Spring Cloud Messaging Source is not sending messages to Kafka broker - spring-boot

I am following the 'Spring Microservices In Action' book, with some small deviations from the format chosen by the author. Namely, I am using Kotlin and Gradle rather than Java and Maven. Other than that, I am mostly following the code as presented.
In the chapter on Messaging I am running into a problem - I cannot publish a message using the Source class I am autowiring into my SimpleSourceBean.
I know the general setup is OK, as the Kafka topic is created, and on application startup I see the corresponding log messages. I've tried autowiring the source explicitly in the class body as well as in the constructor, but had no success in either case.
Application class
@SpringBootApplication
@EnableEurekaClient
@EnableBinding(Source::class)
@EnableCircuitBreaker
class OrganizationServiceApplication {
    @Bean
    @LoadBalanced
    fun getRestTemplate(): RestTemplate {
        val restTemplate = RestTemplate()
        val interceptors = restTemplate.interceptors
        interceptors.add(UserContextInterceptor())
        restTemplate.interceptors = interceptors
        return restTemplate
    }
}

fun main(args: Array<String>) {
    runApplication<OrganizationServiceApplication>(*args)
}
This is the SimpleSourceBean implementation:
@Component
class SimpleSourceBean {
    @Autowired
    lateinit var source: Source

    val logger = LoggerFactory.getLogger(this.javaClass)

    fun publishOrgChange(action: String, orgId: String) {
        logger.debug("Sending Kafka message $action for Organization $orgId on source ${source}")
        val change = OrganizationChangeModel(
            OrganizationChangeModel::class.java.typeName,
            action,
            orgId,
            UserContext.correlationId!!)
        logger.debug("change message: $change")
        source.output()
            .send(MessageBuilder
                .withPayload(change)
                .build())
        logger.debug("Sent Kafka message $action for Organization $orgId successfully")
    }
}
and this is the Service class that uses the SimpleSourceBean to send the message to Kafka:
@Component
class OrganizationService {
    @Autowired
    lateinit var organizationRepository: OrganizationRepository

    @Autowired
    lateinit var simpleSourceBean: SimpleSourceBean

    val logger = LoggerFactory.getLogger(OrganizationService::class.java)

    // some omissions for brevity

    @HystrixCommand(
        fallbackMethod = "fallbackUpdate",
        commandKey = "updateOrganizationCommandKey",
        threadPoolKey = "updateOrganizationThreadPool")
    fun updateOrganization(organizationId: String, organization: Organization): Organization {
        val updatedOrg = organizationRepository.save(organization)
        simpleSourceBean.publishOrgChange("UPDATE", organizationId)
        return updatedOrg
    }

    private fun fallbackUpdate(organizationId: String, organization: Organization) =
        Organization(id = "000-000-00", name = "update not saved", contactEmail = "", contactName = "", contactPhone = "")

    @HystrixCommand
    fun saveOrganization(organization: Organization): Organization {
        val orgToSave = organization.copy(id = UUID.randomUUID().toString())
        val savedOrg = organizationRepository.save(orgToSave)
        simpleSourceBean.publishOrgChange("SAVE", savedOrg.id)
        return savedOrg
    }
}
The log messages
organizationservice_1 | 2019-08-23 23:15:33.939 DEBUG 18 --- [ionThreadPool-2] S.O.events.source.SimpleSourceBean : Sending Kafka message UPDATE for Organization e254f8c-c442-4ebe-a82a-e2fc1d1ff78a on source null
organizationservice_1 | 2019-08-23 23:15:33.940 DEBUG 18 --- [ionThreadPool-2] S.O.events.source.SimpleSourceBean : change message: OrganizationChangeModel(type=SpringMicroservicesInAction.OrganizationService.events.source.OrganizationChangeModel, action=UPDATE, organizationId=e254f8c-c442-4ebe-a82a-e2fc1d1ff78a, correlationId=c84d288f-bfd6-4217-9026-8a45eab058e1)
organizationservice_1 | 2019-08-23 23:15:33.941 DEBUG 18 --- [ionThreadPool-2] o.s.c.s.m.DirectWithAttributesChannel : preSend on channel 'output', message: GenericMessage [payload=OrganizationChangeModel(type=SpringMicroservicesInAction.OrganizationService.events.source.OrganizationChangeModel, action=UPDATE, organizationId=e254f8c-c442-4ebe-a82a-e2fc1d1ff78a, correlationId=c84d288f-bfd6-4217-9026-8a45eab058e1), headers={id=05799740-f8cf-85f8-54f8-74fce2679909, timestamp=1566602133941}]
organizationservice_1 | 2019-08-23 23:15:33.945 DEBUG 18 --- [ionThreadPool-2] tractMessageChannelBinder$SendingHandler : org.springframework.cloud.stream.binder.AbstractMessageChannelBinder$SendingHandler#38675bb5 received message: GenericMessage [payload=byte[224], headers={contentType=application/json, id=64e1e8f1-45f4-b5e6-91d7-c2df28b3d6cc, timestamp=1566602133943}]
organizationservice_1 | 2019-08-23 23:15:33.946 DEBUG 18 --- [ionThreadPool-2] nder$ProducerConfigurationMessageHandler : org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder$ProducerConfigurationMessageHandler#763a88a received message: GenericMessage [payload=byte[224], headers={contentType=application/json, id=7be5d188-5309-cba9-8297-74431c410152, timestamp=1566602133945}]
There are no further messages logged, which includes the final DEBUG log statement of the SimpleSourceBean.
Checking inside the Kafka container if there are any messages on the 'orgChangeTopic' topic, it comes up empty:
root@99442804288f:/opt/kafka_2.11-0.10.1.0/bin# ./kafka-console-consumer.sh --from-beginning --topic orgChangeTopic --bootstrap-server 0.0.0.0:9092
Processed a total of 0 messages
Any pointer to where my problem might lie is greatly appreciated.
Edit:
Adding the application.yml:
spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orgChangeTopic
          content-type: application/json
      kafka:
        binder:
          zkNodes: "http://kafkaserver:2181"
          brokers: "http://kafkaserver:9092"
# omitting some irrelevant config
logging:
  level:
    org.apache.kafka: DEBUG
    org.springframework.cloud: DEBUG
    org.springframework.web: WARN
    springmicroservicesinaction.organizationservice: DEBUG
excerpt of the build.gradle file with relevant dependencies:
dependencies {
    // kotlin, spring boot, etc
    implementation("org.springframework.cloud:spring-cloud-stream:2.2.0.RELEASE")
    implementation("org.springframework.cloud:spring-cloud-starter-stream-kafka:2.2.0.RELEASE")
}

You need to show your application properties as well. Your kafka version is very old; 0.10.x.x doesn't support headers. What version of spring-cloud-stream are you using? Modern versions require a Kafka that supports headers (0.11 or preferably later - the current release is 2.3), unless you set the headerMode to none.
That said, I would expect to see an error message if we try to send headers to a version that doesn't support them.
implementation("org.springframework.cloud:spring-cloud-stream:2.2.0.RELEASE")
Also note that with modern versions, you no longer need
zkNodes: "http://kafkaserver:2181"
The kafka-clients version used by 2.2.0 supports topic provisioning via the Kafka broker directly and we no longer need to connect to zookeeper.
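As a sketch, the headerMode override would slot into the question's application.yml roughly like this (assuming the output binding from the question; headerMode under producer is the Spring Cloud Stream 2.x property, and with the 2.2.0 binder the zkNodes entry can simply be dropped):

spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orgChangeTopic
          content-type: application/json
          producer:
            headerMode: none   # don't embed headers; needed for pre-0.11 brokers
      kafka:
        binder:
          brokers: "kafkaserver:9092"   # plain host:port, no zookeeper connection needed

Alternatively, moving to a 0.11+ broker image avoids the override entirely.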

Related

How to set up Spring Kafka test using EmbeddedKafkaRule/ EmbeddedKafka to fix TopicExistsException Intermittent Error?

I have been having issues with testing my Kafka consumer and producer. The integration tests fail intermittently with TopicExistsException.
This is what my current test class, UserEventListenerTest, looks like for one of the consumers:
@SpringBootTest(properties = ["application.kafka.user-event-topic=user-event-topic-UserEventListenerTest",
    "application.kafka.bootstrap=localhost:2345"])
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class UserEventListenerTest {
    private val logger: Logger = LoggerFactory.getLogger(javaClass)

    @Value("\${application.kafka.user-event-topic}")
    private lateinit var userEventTopic: String

    @Autowired
    private lateinit var kafkaConfigProperties: KafkaConfigProperties

    private lateinit var embeddedKafka: EmbeddedKafkaRule
    private lateinit var sender: KafkaSender<String, UserEvent>
    private lateinit var receiver: KafkaReceiver<String, UserEvent>

    @BeforeAll
    fun setup() {
        embeddedKafka = EmbeddedKafkaRule(1, false, userEventTopic)
        embeddedKafka.kafkaPorts(kafkaConfigProperties.bootstrap.substringAfterLast(":").toInt())
        embeddedKafka.before()

        val producerProps: HashMap<String, Any> = hashMapOf(
            ProducerConfig.BOOTSTRAP_SERVERS_CONFIG to kafkaConfigProperties.bootstrap,
            ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG to "org.apache.kafka.common.serialization.StringSerializer",
            ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG to "com.project.userservice.config.MockAvroSerializer"
        )
        val senderOptions = SenderOptions.create<String, UserEvent>(producerProps)
        sender = KafkaSender.create(senderOptions)

        val consumerProps: HashMap<String, Any> = hashMapOf(
            ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG to kafkaConfigProperties.bootstrap,
            ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG to "org.apache.kafka.common.serialization.StringDeserializer",
            ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG to kafkaConfigProperties.deserializer,
            ConsumerConfig.AUTO_OFFSET_RESET_CONFIG to "earliest",
            "schema.registry.url" to kafkaConfigProperties.schemaRegistry,
            ConsumerConfig.GROUP_ID_CONFIG to "test-consumer"
        )
        val receiverOptions = ReceiverOptions.create<String, UserEvent>(consumerProps)
            .subscription(Collections.singleton("some-topic-after-UserEvent"))
        receiver = KafkaReceiver.create(receiverOptions)
    }

    // Some tests
    // Not shown as they are irrelevant
    ...
}
The UserEventListener class consumes a message from user-event-topic-UserEventListenerTest and publishes a message to some-topic-after-UserEvent.
As you can see from the setup, I have a test producer that publishes a message to user-event-topic-UserEventListenerTest so that I can test whether UserEventListener consumes it, and a test consumer that consumes from some-topic-after-UserEvent so that I can see whether UserEventListener publishes a message there after processing the record.
The KafkaConfigProperties class is as follows.
@Component
@ConfigurationProperties(prefix = "application.kafka")
data class KafkaConfigProperties(
    var bootstrap: String = "",
    var schemaRegistry: String = "",
    var deserializer: String = "",
    var userEventTopic: String = ""
)
And the application.yml looks like this.
application:
  kafka:
    user-event-topic: "platform.user-events.v1"
    bootstrap: "localhost:9092"
    schema-registry: "http://localhost:8081"
    deserializer: com.project.userservice.config.MockAvroDeserializer
Error logs
com.project.userservice.user.UserEventListenerTest > initializationError FAILED
kafka.common.KafkaException:
at org.springframework.kafka.test.EmbeddedKafkaBroker.createTopics(EmbeddedKafkaBroker.java:354)
at org.springframework.kafka.test.EmbeddedKafkaBroker.lambda$createKafkaTopics$4(EmbeddedKafkaBroker.java:341)
at org.springframework.kafka.test.EmbeddedKafkaBroker.doWithAdmin(EmbeddedKafkaBroker.java:368)
at org.springframework.kafka.test.EmbeddedKafkaBroker.createKafkaTopics(EmbeddedKafkaBroker.java:340)
at org.springframework.kafka.test.EmbeddedKafkaBroker.afterPropertiesSet(EmbeddedKafkaBroker.java:284)
at org.springframework.kafka.test.rule.EmbeddedKafkaRule.before(EmbeddedKafkaRule.java:114)
at com.project.userservice.user.UserEventListenerTest.setup(UserEventListenerTest.kt:62)
Caused by:
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TopicExistsException: Topic 'user-event-topic-UserEventListenerTest' already exists.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
at org.springframework.kafka.test.EmbeddedKafkaBroker.createTopics(EmbeddedKafkaBroker.java:351)
... 6 more
Caused by:
org.apache.kafka.common.errors.TopicExistsException: Topic 'user-event-topic-UserEventListenerTest' already exists.
What I have tried:
Use a different bootstrap server address in each test by specifying the bootstrap configuration, e.g. @SpringBootTest(properties = ["application.kafka.bootstrap=localhost:2345"])
Use different topic names in each test by overwriting the topic configuration via @SpringBootTest, just like the bootstrap server overwrite in the previous bullet point
Add @DirtiesContext to each test class
Package versions
Kotlin 1.3.61
Spring Boot - 2.2.3.RELEASE
io.projectreactor.kafka:reactor-kafka:1.2.2.RELEASE
org.springframework.kafka:spring-kafka-test:2.3.4.RELEASE (test implementation only)
Problem
I have multiple test classes that use EmbeddedKafkaRule and are set up more or less the same way. For each of them I specify a different Kafka bootstrap server address and different topic names, but I still see the TopicExistsException intermittently.
What can I do to make my test classes consistent?
I specify a different Kafka bootstrap server address and different topic names, but I still see the TopicExistsException intermittently
That makes no sense; if they have a new port each time, and especially new topic names, it's impossible for the topic(s) to already exist.
Some suggestions:
Since you are using JUnit 5, don't use the JUnit 4 EmbeddedKafkaRule; use EmbeddedKafkaBroker instead. Or simply add @EmbeddedKafka and the broker will be added as a bean to the Spring application context, with its life cycle managed by Spring (use @DirtiesContext to destroy it); a minimal sketch follows this list. For non-Spring tests, the broker will be created (and destroyed) by the JUnit 5 EmbeddedKafkaCondition and is available via EmbeddedKafkaCondition.getBroker().
Don't use explicit ports; let the broker use its default random port and use embeddedKafka.getBrokersAsString() for the bootstrap servers property.
If you must manage the brokers yourself (in @BeforeAll), destroy() them in @AfterAll.
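Here is a minimal sketch of the annotation-driven route, assuming JUnit 5 and spring-kafka-test on the test classpath (the class and topic names are illustrative, not from the question):

import org.junit.jupiter.api.Test
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.kafka.test.EmbeddedKafkaBroker
import org.springframework.kafka.test.context.EmbeddedKafka
import org.springframework.test.annotation.DirtiesContext

@SpringBootTest
@DirtiesContext // tears the broker down together with the context after this class
@EmbeddedKafka(partitions = 1, topics = ["user-event-topic"])
class UserEventListenerSketchTest {

    // registered as a bean by @EmbeddedKafka
    @Autowired
    private lateinit var embeddedKafka: EmbeddedKafkaBroker

    @Test
    fun `broker runs on a random port`() {
        // pass this to any bootstrap-servers property instead of hard-coding a port
        println(embeddedKafka.brokersAsString)
    }
}

The broker also exposes its address via the spring.embedded.kafka.brokers property, so the test could wire it into the application's own config with @SpringBootTest(properties = ["application.kafka.bootstrap=\${spring.embedded.kafka.brokers}"]).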

Spring Cloud Streams - Multiple dynamic destinations for sources and sinks

There was a change request on my system, which currently listens to multiple channels and sends messages to multiple channels as well; now the destination names will live in the database and can change at any time.
I'm having trouble believing I'm the first one to come across this, but I see limited information out there.
All I found are these two...
Dynamic sink destination:
https://github.com/spring-cloud-stream-app-starters/router/tree/master/spring-cloud-starter-stream-sink-router, but how would that work for actively listening to those channels the way it's done by @StreamListener?
Dynamic source destinations:
https://github.com/spring-cloud/spring-cloud-stream-samples/blob/master/source-samples/dynamic-destination-source/, which does this
@Bean
@ServiceActivator(inputChannel = "sourceChannel")
public ExpressionEvaluatingRouter router() {
    ExpressionEvaluatingRouter router = new ExpressionEvaluatingRouter(new SpelExpressionParser().parseExpression("payload.id"));
    router.setDefaultOutputChannelName("default-output");
    router.setChannelResolver(resolver);
    return router;
}
But what's that "payload.id"? And where are the destinations specified there??
Feel free to improve my answer, I hope it will help others.
Now the code (It worked in my debugger). This is an example, not production ready!
This is how to send a message to dynamic destination
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.binding.BinderAwareChannelResolver;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
@EnableBinding
public class MessageSenderService {

    @Autowired
    private BinderAwareChannelResolver resolver;

    @Transactional
    public void sendMessage(final String topicName, final String payload) {
        final MessageChannel messageChannel = resolver.resolveDestination(topicName);
        messageChannel.send(new GenericMessage<String>(payload));
    }
}
And configuration for Spring Cloud Stream.
spring:
  cloud:
    stream:
      dynamicDestinations: output.topic.1,output.topic2,output.topic.3
I found it here:
https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/index.html#dynamicdestination
It works in Spring Cloud Stream version 2+; I use 2.1.2:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream</artifactId>
    <version>2.1.2.RELEASE</version>
</dependency>
This is how to consume a message from dynamic destination
https://stackoverflow.com/a/56148190/4587961
Configuration
spring:
  cloud:
    stream:
      default:
        consumer:
          concurrency: 2
          partitioned: true
      bindings:
        # inputs
        input:
          group: application_name_group
          destination: topic-1,topic-2
          content-type: application/json;charset=UTF-8
Java consumer.
@Component
@EnableBinding(Sink.class)
public class CommonConsumer {

    private final static Logger logger = LoggerFactory.getLogger(CommonConsumer.class);

    @StreamListener(target = Sink.INPUT)
    public void consumeMessage(final Message<Object> message) {
        logger.info("Received a message: \nmessage:\n{}", message.getPayload());
        final String topic = (String) message.getHeaders().get("kafka_receivedTopic");
        // Here I define logic which handles messages depending on message headers and topic.
        // In my case I have configuration which forwards these messages to webhooks, so I need a mapping of topic name -> webhook URI.
    }
}

EmbeddedKafka AdminClient shuts down before Spring app starts for tests

I'm trying to write integration tests for a Spring Kafka app (Spring Boot 2.0.6, Spring Kafka 2.1.10) and am seeing lots of instances of INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x166e432ebec0001 type:create cxid:0x5e zxid:0x24 txntype:-1 reqpath:n/a Error Path:/brokers/topics/my-topic/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/my-topic/partitions, and various flavors of that path (/brokers, /brokers/topics, etc.), all before the Spring app starts. The AdminClient then shuts down and this message is logged:
DEBUG org.apache.kafka.common.network.Selector - [SocketServer brokerId=0] Connection with /127.0.0.1 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:124)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:235)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:196)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:547)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:483)
at org.apache.kafka.common.network.Selector.poll(Selector.java:412)
at kafka.network.Processor.poll(SocketServer.scala:575)
at kafka.network.Processor.run(SocketServer.scala:492)
at java.lang.Thread.run(Thread.java:748)
I'm using the #ClassRule startup option in the test like so:
@ClassRule
@Shared
private KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 'my-topic')
, autowiring a KafkaTemplate, and setting the Spring properties for the connection based on the embedded Kafka values:
def setupSpec() {
    System.setProperty('spring.kafka.bootstrap-servers', embeddedKafka.getBrokersAsString());
    System.setProperty('spring.cloud.stream.kafka.binder.zkNodes', embeddedKafka.getZookeeperConnectionString());
}
Once the Spring app starts, I again see instances of the user-level KeeperException messages: o.a.z.server.PrepRequestProcessor : Got user-level KeeperException when processing sessionid:0x166e445836d0001 type:setData cxid:0x6b zxid:0x2b txntype:-1 reqpath:n/a Error Path:/config/topics/__consumer_offsets Error:KeeperErrorCode = NoNode for /config/topics/__consumer_offsets.
Any idea where I'm going wrong here? I can provide other setup information and log messages but just took an educated guess on what may be most helpful initially.
I'm not familiar with Spock, but what I do know is that a @KafkaListener method is invoked on its own thread, so you can't just assert it in the then: block directly.
You need to ensure some kind of blocking wait in your test case.
I tried with a BlockingVariable against the real service (not a mock) and I see your println(message) in the logs. But that BlockingVariable still doesn't work for me somehow:
@DirtiesContext
@SpringBootTest(classes = [KafkaIntTestApplication.class])
@ActiveProfiles('test')
class CustomListenerSpec extends Specification {

    @ClassRule
    @Shared
    public KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, false, 'my-topic')

    @Autowired
    private KafkaTemplate<String, String> template

    @SpyBean
    private SimpleService service

    final def TOPIC_NAME = 'my-topic'

    def setupSpec() {
        System.setProperty('spring.kafka.bootstrapServers', embeddedKafka.getBrokersAsString());
    }

    def 'Sample test'() {
        given:
        def testMessagePayload = "Test message"
        def message = MessageBuilder.withPayload(testMessagePayload).setHeader(KafkaHeaders.TOPIC, TOPIC_NAME).build()
        def result = new BlockingVariable<Boolean>(5)
        service.handleMessage(_) >> {
            result.set(true)
        }

        when: 'We put a message on the topic'
        template.send(message)

        then: 'the service should be called'
        result.get()
    }
}
And logs are like this:
2018-11-05 13:38:51.089 INFO 8888 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [my-topic-0, my-topic-1]
Test message
BlockingVariable.get() timed out after 5,00 seconds
at spock.util.concurrent.BlockingVariable.get(BlockingVariable.java:113)
at com.example.CustomListenerSpec.Sample test(CustomListenerSpec.groovy:54)
2018-11-05 13:38:55.917 INFO 8888 --- [ main] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#11ebb1b6: startup date [Mon Nov 05 13:38:49 EST 2018]; root of context hierarchy
Also I had to add this dependency:
testImplementation "org.hamcrest:hamcrest-core"
UPDATE
OK. The real problem was that MockConfig was not visible to the test context configuration, and @Import(MockConfig.class) does the trick. @Primary also gives us an additional signal as to which bean to pick up for injection in the test class.
@ArtemBilan's response set me on the right path, so thanks to him for chiming in, and I was able to figure it out after looking into other BlockingVariable articles and examples. I used BlockingVariable in a mock's response instead of as a callback. When the mock's response is invoked, it sets the value to true, and the then block just does result.get() and the test passes.
@DirtiesContext
@ActiveProfiles('test')
@SpringBootTest
@Import(MockConfig.class)
class CustomListenerSpec extends TestSpecBase {

    @ClassRule
    @Shared
    private KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, false, TOPIC_NAME)

    @Autowired
    private KafkaTemplate<String, String> template

    @Autowired
    private SimpleService service

    final def TOPIC_NAME = 'my-topic'

    def setupSpec() {
        System.setProperty('spring.kafka.bootstrap-servers', embeddedKafka.getBrokersAsString());
    }

    def 'Sample test'() {
        def testMessagePayload = "Test message"
        def message = MessageBuilder.withPayload(testMessagePayload).setHeader(KafkaHeaders.TOPIC, TOPIC_NAME).build()
        def result = new BlockingVariable<Boolean>(5)
        service.handleMessage(_ as String) >> {
            result.set(true)
        }

        when: 'We put a message on the topic'
        template.send(message)

        then: 'the service should be called'
        result.get()
    }
}

Spring Cloud Stream Kafka Channel Not Working in Spring Boot Application

I have been attempting to get an inbound SubscribableChannel and outbound MessageChannel working in my spring boot application.
I have successfully setup the kafka channel and tested it successfully.
Furthermore, I have created a basic Spring Boot application that tests adding to and receiving from the channel.
The issue I am having is that when I put the equivalent code in the application it belongs in, the messages never seem to get sent or received. Debugging makes it hard to ascertain what's going on, but the only thing that looks different to me is the channel name: in the working implementation the channel name is like application.channel; in the non-working app it's localhost:8080/channel.
I was wondering if there is some Spring Boot configuration blocking or altering the creation of the channels into a different channel source?
Anyone had any similar issues?
application.yml
spring:
  datasource:
    url: jdbc:h2:mem:dpemail;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
    platform: h2
    username: hello
    password:
    driverClassName: org.h2.Driver
  jpa:
    properties:
      hibernate:
        show_sql: true
        use_sql_comments: true
        format_sql: true
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
      bindings:
        email-in:
          destination: email
          contentType: application/json
        email-out:
          destination: email
          contentType: application/json
Email
public class Email {

    private long timestamp;
    private String message;

    public long getTimestamp() {
        return timestamp;
    }

    public void setTimestamp(long timestamp) {
        this.timestamp = timestamp;
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}
Binding Config
@EnableBinding(EmailQueues.class)
public class EmailQueueConfiguration {
}
Interface
public interface EmailQueues {

    String INPUT = "email-in";
    String OUTPUT = "email-out";

    @Input(INPUT)
    SubscribableChannel inboundEmails();

    @Output(OUTPUT)
    MessageChannel outboundEmails();
}
Controller
@RestController
@RequestMapping("/queue")
public class EmailQueueController {

    private EmailQueues emailQueues;

    @Autowired
    public EmailQueueController(EmailQueues emailQueues) {
        this.emailQueues = emailQueues;
    }

    @RequestMapping(value = "sendEmail", method = POST)
    @ResponseStatus(ACCEPTED)
    public void sendToQueue() {
        MessageChannel messageChannel = emailQueues.outboundEmails();
        Email email = new Email();
        email.setMessage("hello world: " + System.currentTimeMillis());
        email.setTimestamp(System.currentTimeMillis());
        messageChannel.send(MessageBuilder.withPayload(email).setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON).build());
    }

    @StreamListener(EmailQueues.INPUT)
    public void handleEmail(@Payload Email email) {
        System.out.println("received: " + email.getMessage());
    }
}
I'm not sure if one of the inherited configuration projects using Spring Cloud or Spring Cloud Sleuth might be preventing it from working, but even when I remove them it still doesn't work. Unlike my basic application that does work with the above code, I never see the ConsumerConfig being configured, e.g.:
o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 100
auto.offset.reset = latest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.id = consumer-2
connections.max.idle.ms = 540000
enable.auto.commit = false
exclude.internal.topics = true
(This configuration is what I see in my basic Spring Boot application when running the above code, and there the code works, writing to and reading from the kafka channel.)
I assume there is some other Spring Boot configuration from one of the libraries I'm using that creates a different type of channel; I just cannot find what that configuration is.
What you posted contains a lot of unrelated configuration, so it's hard to determine if anything gets in the way. Also, when you say "..it appears that the messages never get sent or received.." are there any exceptions in the logs? And please state the version of Kafka you're using, as well as Spring Cloud Stream.
Now, I did try to reproduce it based on your code (after cleaning up a bit to only leave relevant parts) and was able to successfully send/receive.
My Kafka version is 0.11 and Spring Cloud Stream 2.0.0.
Here is the relevant code:
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
      bindings:
        email-in:
          destination: email
        email-out:
          destination: email
@SpringBootApplication
@EnableBinding(KafkaQuestionSoApplication.EmailQueues.class)
public class KafkaQuestionSoApplication {

    public static void main(String[] args) {
        SpringApplication.run(KafkaQuestionSoApplication.class, args);
    }

    @Bean
    public ApplicationRunner runner(EmailQueues emailQueues) {
        return new ApplicationRunner() {
            @Override
            public void run(ApplicationArguments args) throws Exception {
                emailQueues.outboundEmails().send(new GenericMessage<String>("Hello"));
            }
        };
    }

    @StreamListener(EmailQueues.INPUT)
    public void handleEmail(String payload) {
        System.out.println("received: " + payload);
    }

    public interface EmailQueues {

        String INPUT = "email-in";
        String OUTPUT = "email-out";

        @Input(INPUT)
        SubscribableChannel inboundEmails();

        @Output(OUTPUT)
        MessageChannel outboundEmails();
    }
}
Okay, so after a lot of debugging... I discovered that something is creating a Test Support Binder (how, I don't know yet), so obviously this is used to keep messages from being added to a real channel.
After adding
@SpringBootApplication(exclude = TestSupportBinderAutoConfiguration.class)
the kafka channel configuration works and messages are being added. Would be interesting to know what on earth is setting up this test support binder... I'll find that sucker eventually.
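For what it's worth, TestSupportBinderAutoConfiguration ships in the spring-cloud-stream-test-support artifact, so a plausible culprit is that dependency sitting on the main runtime classpath rather than the test one. A hedged sketch of the Gradle fix, assuming that is indeed the cause:

dependencies {
    implementation("org.springframework.cloud:spring-cloud-starter-stream-kafka")
    // keep the test binder off the main classpath; it should be test-scoped only
    testImplementation("org.springframework.cloud:spring-cloud-stream-test-support")
}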

Spring Kafka Producer not sending to Kafka 1.0.0 (Magic v1 does not support record headers)

I am using this docker-compose setup for setting up Kafka locally: https://github.com/wurstmeister/kafka-docker/
docker-compose up works fine, creating topics via shell works fine.
Now I try to connect to Kafka via spring-kafka:2.1.0.RELEASE
When starting up the Spring application it prints the correct version of Kafka:
o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.0.0
o.a.kafka.common.utils.AppInfoParser : Kafka commitId : aaa7af6d4a11b29d
I try to send a message like this
kafkaTemplate.send("test-topic", UUID.randomUUID().toString(), "test");
Sending on client side fails with
UnknownServerException: The server experienced an unexpected error when processing the request
In the server console I get the message Magic v1 does not support record headers
Error when handling request {replica_id=-1,max_wait_time=100,min_bytes=1,max_bytes=2147483647,topics=[{topic=test-topic,partitions=[{partition=0,fetch_offset=39,max_bytes=1048576}]}]} (kafka.server.KafkaApis)
java.lang.IllegalArgumentException: Magic v1 does not support record headers
Googling suggests a version conflict, but the versions seem to fit (org.apache.kafka:kafka-clients:1.0.0 is in the classpath).
Any clues? Thanks!
Edit:
I narrowed down the source of the problem. Sending plain Strings works, but sending Json via JsonSerializer results in the given problem. Here is the content of my producer config:
@Value("\${kafka.bootstrap-servers}")
lateinit var bootstrapServers: String

@Bean
fun producerConfigs(): Map<String, Any> =
    HashMap<String, Any>().apply {
        // list of host:port pairs used for establishing the initial connections to the Kafka cluster
        put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers)
        put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer::class.java)
        put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer::class.java)
    }

@Bean
fun producerFactory(): ProducerFactory<String, MyClass> =
    DefaultKafkaProducerFactory(producerConfigs())

@Bean
fun kafkaTemplate(): KafkaTemplate<String, MyClass> =
    KafkaTemplate(producerFactory())
I had a similar issue. Spring Kafka's JsonSerializer (and JsonSerde, for values) adds type-info headers by default.
In order to prevent this issue, we need to disable adding those headers.
If you are fine with default json serialization, then use the following (the key point here is ADD_TYPE_INFO_HEADERS):
Map<String, Object> props = new HashMap<>(defaultSettings);
props.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
ProducerFactory<String, Object> producerFactory = new DefaultKafkaProducerFactory<>(props);
But if you need a custom JsonSerializer with a specific ObjectMapper (like with PropertyNamingStrategy.SNAKE_CASE), you should disable adding type-info headers explicitly on the JsonSerializer, because Spring Kafka ignores DefaultKafkaProducerFactory's ADD_TYPE_INFO_HEADERS property in that case (as for me, that's a bad design in Spring Kafka):
JsonSerializer<Object> valueSerializer = new JsonSerializer<>(customObjectMapper);
valueSerializer.setAddTypeInfo(false);
ProducerFactory<String, Object> producerFactory = new DefaultKafkaProducerFactory<>(props, Serdes.String().serializer(), valueSerializer);
or if we use JsonSerde, then:
Map<String, Object> jsonSerdeProperties = new HashMap<>();
jsonSerdeProperties.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
JsonSerde<T> jsonSerde = new JsonSerde<>(serdeClass);
jsonSerde.configure(jsonSerdeProperties, false);
Solved. The problem is neither the broker, some docker cache nor the Spring app.
The problem was a console consumer which I used in parallel for debugging. This was an "old" consumer started with kafka-console-consumer.sh --topic=topic --zookeeper=...
It actually prints a warning when started: Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
A "new" consumer with --bootstrap-server option should be used (especially when using Kafka 1.0 with JsonSerializer).
Note: Using an old consumer here can indeed affect the producer.
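For reference, a "new" consumer invocation looks roughly like this (broker address and topic are illustrative):

./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --from-beginning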
I just ran a test against that docker image with no problems...
$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f093b3f2475c kafkadocker_kafka "start-kafka.sh" 33 minutes ago Up 2 minutes 0.0.0.0:32768->9092/tcp kafkadocker_kafka_1
319365849e48 wurstmeister/zookeeper "/bin/sh -c '/usr/sb…" 33 minutes ago Up 2 minutes 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp kafkadocker_zookeeper_1
.
@SpringBootApplication
public class So47953901Application {

    public static void main(String[] args) {
        SpringApplication.run(So47953901Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<Object, Object> template) {
        return args -> template.send("foo", "bar", "baz");
    }

    @KafkaListener(id = "foo", topics = "foo")
    public void listen(String in) {
        System.out.println(in);
    }
}
.
spring.kafka.bootstrap-servers=192.168.177.135:32768
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=false
.
2017-12-23 13:27:27.990 INFO 21305 --- [ foo-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [foo-0]
baz
EDIT
Still works for me...
spring.kafka.bootstrap-servers=192.168.177.135:32768
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
.
2017-12-23 15:27:59.997 INFO 44079 --- [ main] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
...
value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
...
2017-12-23 15:28:00.071 INFO 44079 --- [ foo-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [foo-0]
baz
You are using Kafka version <= 0.10.x.x.
With that version, you must set JsonSerializer.ADD_TYPE_INFO_HEADERS to false, as below:
Map<String, Object> props = new HashMap<>(defaultSettings);
props.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
ProducerFactory<String, Object> producerFactory = new DefaultKafkaProducerFactory<>(props);
for your producer factory properties.
In case you are using a Kafka version > 0.10.x.x, it should just work fine.
