How to set up a Spring Kafka test using EmbeddedKafkaRule/EmbeddedKafka to fix an intermittent TopicExistsException? - spring-boot

I have been having issues testing my Kafka consumer and producer. The integration tests fail intermittently with TopicExistsException.
This is what my current test class, UserEventListenerTest, looks like for one of the consumers:
@SpringBootTest(properties = ["application.kafka.user-event-topic=user-event-topic-UserEventListenerTest",
    "application.kafka.bootstrap=localhost:2345"])
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class UserEventListenerTest {
    private val logger: Logger = LoggerFactory.getLogger(javaClass)

    @Value("\${application.kafka.user-event-topic}")
    private lateinit var userEventTopic: String

    @Autowired
    private lateinit var kafkaConfigProperties: KafkaConfigProperties

    private lateinit var embeddedKafka: EmbeddedKafkaRule
    private lateinit var sender: KafkaSender<String, UserEvent>
    private lateinit var receiver: KafkaReceiver<String, UserEvent>

    @BeforeAll
    fun setup() {
        embeddedKafka = EmbeddedKafkaRule(1, false, userEventTopic)
        embeddedKafka.kafkaPorts(kafkaConfigProperties.bootstrap.substringAfterLast(":").toInt())
        embeddedKafka.before()

        val producerProps: HashMap<String, Any> = hashMapOf(
            ProducerConfig.BOOTSTRAP_SERVERS_CONFIG to kafkaConfigProperties.bootstrap,
            ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG to "org.apache.kafka.common.serialization.StringSerializer",
            ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG to "com.project.userservice.config.MockAvroSerializer"
        )
        val senderOptions = SenderOptions.create<String, UserEvent>(producerProps)
        sender = KafkaSender.create(senderOptions)

        val consumerProps: HashMap<String, Any> = hashMapOf(
            ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG to kafkaConfigProperties.bootstrap,
            ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG to "org.apache.kafka.common.serialization.StringDeserializer",
            ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG to kafkaConfigProperties.deserializer,
            ConsumerConfig.AUTO_OFFSET_RESET_CONFIG to "earliest",
            "schema.registry.url" to kafkaConfigProperties.schemaRegistry,
            ConsumerConfig.GROUP_ID_CONFIG to "test-consumer"
        )
        val receiverOptions = ReceiverOptions.create<String, UserEvent>(consumerProps)
            .subscription(Collections.singleton("some-topic-after-UserEvent"))
        receiver = KafkaReceiver.create(receiverOptions)
    }
}
// Some tests
// Not shown as they are irrelevant
...
...
...
The UserEventListener class consumes a message from user-event-topic-UserEventListenerTest and publishes a message to some-topic-after-UserEvent.
As you can see from the setup, I have a test producer that publishes a message to user-event-topic-UserEventListenerTest, so I can verify that UserEventListener consumes it. I also have a test consumer that consumes from some-topic-after-UserEvent, so I can verify that UserEventListener publishes a message there after processing the record.
The KafkaConfigProperties class is as follows.
@Component
@ConfigurationProperties(prefix = "application.kafka")
data class KafkaConfigProperties(
    var bootstrap: String = "",
    var schemaRegistry: String = "",
    var deserializer: String = "",
    var userEventTopic: String = "",
)
And the application.yml looks like this.
application:
  kafka:
    user-event-topic: "platform.user-events.v1"
    bootstrap: "localhost:9092"
    schema-registry: "http://localhost:8081"
    deserializer: com.project.userservice.config.MockAvroDeserializer
Error logs
com.project.userservice.user.UserEventListenerTest > initializationError FAILED
kafka.common.KafkaException:
at org.springframework.kafka.test.EmbeddedKafkaBroker.createTopics(EmbeddedKafkaBroker.java:354)
at org.springframework.kafka.test.EmbeddedKafkaBroker.lambda$createKafkaTopics$4(EmbeddedKafkaBroker.java:341)
at org.springframework.kafka.test.EmbeddedKafkaBroker.doWithAdmin(EmbeddedKafkaBroker.java:368)
at org.springframework.kafka.test.EmbeddedKafkaBroker.createKafkaTopics(EmbeddedKafkaBroker.java:340)
at org.springframework.kafka.test.EmbeddedKafkaBroker.afterPropertiesSet(EmbeddedKafkaBroker.java:284)
at org.springframework.kafka.test.rule.EmbeddedKafkaRule.before(EmbeddedKafkaRule.java:114)
at com.project.userservice.user.UserEventListenerTest.setup(UserEventListenerTest.kt:62)
Caused by:
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TopicExistsException: Topic 'user-event-topic-UserEventListenerTest' already exists.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
at org.springframework.kafka.test.EmbeddedKafkaBroker.createTopics(EmbeddedKafkaBroker.java:351)
... 6 more
Caused by:
org.apache.kafka.common.errors.TopicExistsException: Topic 'user-event-topic-UserEventListenerTest' already exists.
What I have tried:
Use a different bootstrap server address in each test by specifying the bootstrap configuration, e.g. @SpringBootTest(properties = ["application.kafka.bootstrap=localhost:2345"])
Use different topic names in each test by overriding the topic configuration via @SpringBootTest, just like the bootstrap server override in the previous bullet point
Add @DirtiesContext to each test class
Package versions
Kotlin 1.3.61
Spring Boot - 2.2.3.RELEASE
io.projectreactor.kafka:reactor-kafka:1.2.2.RELEASE
org.springframework.kafka:spring-kafka-test:2.3.4.RELEASE (test implementation only)
Problem
I have multiple test classes that use EmbeddedKafkaRule and are set up more or less the same way. For each of them I specify a different Kafka bootstrap server address and different topic names, but I still see the TopicExistsException intermittently.
What can I do to make my tests pass consistently?

I specify a different Kafka bootstrap server address and topic names, but I still see the TopicExistsException intermittently
That makes no sense; if they have a new port each time, and especially new topic names, it's impossible for the topic(s) to already exist.
Some suggestions:
Since you are using JUnit 5, don't use the JUnit 4 EmbeddedKafkaRule; use EmbeddedKafkaBroker instead. Or simply add @EmbeddedKafka and the broker will be added as a bean to the Spring application context, with its life cycle managed by Spring (use @DirtiesContext to destroy it). For non-Spring tests, the broker will be created (and destroyed) by the JUnit 5 EmbeddedKafkaCondition and is available via EmbeddedKafkaCondition.getBroker().
Don't use explicit ports; let the broker use its default random port and use embeddedKafka.getBrokersAsString() for the bootstrap servers property.
If you must manage the brokers yourself (in @BeforeAll), destroy() them in @AfterAll.
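Putting the first two suggestions together, the test setup might look roughly like this (a sketch only, assuming spring-kafka-test 2.3.x; the topic name and the application.kafka.bootstrap property are taken from the question, and spring-kafka-test registers the spring.embedded.kafka.brokers placeholder for the random port):

```java
// Sketch: let @EmbeddedKafka manage the broker as a Spring bean on a random
// port, and point the application's bootstrap property at it via the
// placeholder that spring-kafka-test registers.
@SpringBootTest(properties = {
        "application.kafka.bootstrap=${spring.embedded.kafka.brokers}"
})
@EmbeddedKafka(partitions = 1, topics = "user-event-topic-UserEventListenerTest")
@DirtiesContext // destroy the context (and the broker) after this test class
class UserEventListenerEmbeddedTest {

    @Autowired
    private EmbeddedKafkaBroker embeddedKafka;

    @Test
    void brokerIsRunning() {
        // getBrokersAsString() is the bootstrap address to hand to any
        // manually created producer/consumer in the test.
        assertThat(embeddedKafka.getBrokersAsString()).isNotBlank();
    }
}
```

With no fixed port and a broker whose lifecycle is tied to the application context, two test classes can no longer race each other to create the same topic on the same broker.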

Related

java.lang.IllegalArgumentException: CamelContext must be specified on: Message[] - Camel core

Below is a builder from Camel core; I am writing JUnit test cases with it. The Camel core version used is 2.22.1.
new ExchangeBuilder(null)
    .withBody(body)
    .withHeader(header, headerValue)
    .build();
The JUnit test case throws an error when calling the builder above: java.lang.IllegalArgumentException: CamelContext must be specified on: Message[]
You'll have to provide the ExchangeBuilder constructor with an instance of CamelContext. If your test class extends CamelTestSupport, you can use the context() method to obtain it during a test.
public class SomeTest extends CamelTestSupport {
    @Test
    public void testSomething() {
        String body = "some body";
        String header = "SomeHeader";
        String headerValue = "Some header value";
        ExchangeBuilder builder = new ExchangeBuilder(context())
                .withBody(body).withHeader(header, headerValue);
        Exchange exchange = builder.build();
        CamelContext contextFromExchange = exchange.getContext();
        // Do something with the exchange?
    }
}
If you're not using one of the camel-test modules, you'll have to create and configure a context manually, e.g. with new DefaultCamelContext(). For testing routes, though, using one of the camel-test modules is highly recommended, as they make things a lot easier.
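The manual route might look like this (a sketch, assuming camel-core 2.22.x on the classpath; the body and header values are the ones from the question):

```java
// Sketch: create and start a CamelContext by hand, then pass it to
// ExchangeBuilder instead of null.
CamelContext context = new DefaultCamelContext();
context.start();

Exchange exchange = new ExchangeBuilder(context)
        .withBody("some body")
        .withHeader("SomeHeader", "Some header value")
        .build();

// ... use the exchange ...

context.stop();
```

The key point is simply that the builder needs a live context; where that context comes from (CamelTestSupport's context() or a manual DefaultCamelContext) is up to the test setup.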

Connection Timeout with testcontainers and redis

I run integration tests using Spring Boot, Testcontainers, Redis, and JUnit 5.
I am facing a weird behavior: when I run all the integration tests, I keep seeing this log message:
Cannot reconnect to [localhost:55133]: Connection refused: localhost/127.0.0.1:55133
and this exception:
org.springframework.dao.QueryTimeoutException: Redis command timed out; nested exception is io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s)
at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:70)
But when I run the tests individually, I don't see this behavior.
I use JUnit 5, with a JUnit 5 extension to start and stop my Redis container:
public class RedisTestContainerExtension implements BeforeAllCallback, AfterAllCallback {
    private GenericContainer<?> redis;

    @Override
    public void beforeAll(ExtensionContext extensionContext) throws Exception {
        redis = new GenericContainer<>(DockerImageName.parse("redis:5.0.3-alpine"))
                .withCommand("redis-server", "--requirepass", "password")
                .waitingFor(Wait.forListeningPort())
                .withStartupTimeout(Duration.ofMinutes(2))
                .withExposedPorts(6379);
        redis.start();
        System.setProperty("APP_REDIS_CONVERSATIONS_HOST", redis.getHost());
        System.setProperty("APP_REDIS_CONVERSATIONS_PORT", redis.getFirstMappedPort().toString());
        System.setProperty("APP_REDIS_CONVERSATIONS_PASSWORD", "password");
        System.setProperty("APP_REDIS_CONVERSATIONS_TTL", "600m");
    }

    @Override
    public void afterAll(ExtensionContext extensionContext) throws Exception {
        if (redis != null) {
            redis.stop();
        }
    }
}
And I add this file as an extension to my integration test :
@ExtendWith({SpringExtension.class, RedisTestContainerExtension.class})
@SpringBootTest(classes = ConversationsApplication.class)
class MyIntegrationTest {
    ...
}
Can anyone help me fix this situation?
We had a similar issue. It occurred only when we executed all the tests (or at least more than one specific test).
We have a different test setup: we use a base class to manage the Testcontainers, and the port mappings of the containers are applied by overriding the properties via @DynamicPropertySource.
Our fix was to mark the base test class with @DirtiesContext so that Spring does not reuse the application context across test classes. See the documentation of @DynamicPropertySource:
NOTE: if you use @DynamicPropertySource in a base class and discover that tests in subclasses fail because the dynamic properties change between subclasses, you may need to annotate your base class with @DirtiesContext to ensure that each subclass gets its own ApplicationContext with the correct dynamic properties.
Example:
@Slf4j
@SpringBootTest
@DirtiesContext
@Testcontainers
public abstract class AbstractContainerTest {
    @Container
    private static final ElasticsearchContainer elasticsearchContainer = new DealElasticsearchContainer();

    @Container
    private static final RedisCacheContainer redisCacheContainer = new RedisCacheContainer();

    @DynamicPropertySource
    static void databaseProperties(DynamicPropertyRegistry registry) {
        log.info("Override properties to connect to Testcontainers:");
        log.info("* Test-Container 'Elastic': spring.elasticsearch.rest.uris = {}",
                elasticsearchContainer.getHttpHostAddress());
        log.info("* Test-Container 'Redis': spring.redis.host = {} ; spring.redis.port = {}",
                redisCacheContainer.getHost(), redisCacheContainer.getMappedPort(6379));
        registry.add("spring.elasticsearch.rest.uris", elasticsearchContainer::getHttpHostAddress);
        registry.add("spring.redis.host", redisCacheContainer::getHost);
        registry.add("spring.redis.port", () -> redisCacheContainer.getMappedPort(6379));
    }
}
So maybe give @DirtiesContext a try, or switch to a setup that uses @DynamicPropertySource to override the properties. It was built especially for this case:
Method-level annotation for integration tests that need to add properties with dynamic values to the Environment's set of PropertySources.
This annotation and its supporting infrastructure were originally designed to allow properties from Testcontainers based tests to be exposed easily to Spring integration tests. However, this feature may also be used with any form of external resource whose lifecycle is maintained outside the test's ApplicationContext.

How can I test logs of Spring Boot application?

I have an application that is a mix of Spring Boot, Jersey, and Camel. It starts as a Spring Boot app. I am writing integration tests, and I need to make assertions on logs.
For instance, I need to assert that the Camel route read a message from source A. How can I make reliable assertions on logs? Is there any industry standard for this?
NOTE: I tried to find a solution, but at the moment I neither understand how to solve it nor can find a ready-made solution.
UPDATE 1: A detail that I underestimated, but that seems important: I use Kotlin, NOT Java. I tried applying the answer, but it isn't one-to-one transferable to Kotlin.
UPDATE 2:
This is a conversion from Java to Kotlin. ListAppender doesn't have enough information to resolve the type in Kotlin.
class LoggerExtension : BeforeEachCallback, AfterEachCallback {
    private val listAppender: ListAppender<ILoggingEvent> = ListAppender<ILoggingEvent>()
    private val logger: Logger = LoggerFactory.getLogger(ROOT_LOGGER_NAME) as Logger

    override fun afterEach(extensionContext: ExtensionContext) {
        listAppender.stop()
        listAppender.list.clear()
        logger.detachAppender(listAppender)
    }

    override fun beforeEach(extensionContext: ExtensionContext) {
        logger.addAppender(listAppender)
        listAppender.start()
    }

    val messages: List<String>
        get() = listAppender.list.stream().map { e -> e.getMessage() }.collect(Collectors.toList())

    val formattedMessages: List<String>
        get() = listAppender.list.stream().map { e -> e.getFormattedMessage() }.collect(Collectors.toList())
}
Kotlin: Not enough information to infer type variable A
Not an error, but I have a feeling that it will fail in runtime:
private val logger: Logger = LoggerFactory.getLogger(ROOT_LOGGER_NAME) as Logger
Spring Boot comes with the OutputCapture rule for JUnit 4 and OutputCaptureExtension for JUnit 5, which let you assert on text sent to standard output.
public class MyTest {
    @Rule
    public OutputCaptureRule output = new OutputCaptureRule();

    @Test
    public void test() {
        // test code
        assertThat(output).contains("ok");
    }
}
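Conceptually, what these capture rules do is not Spring-specific: they swap System.out for a buffer while the test runs, then let you assert on the captured text. A minimal, dependency-free sketch of that idea (class and method names here are illustrative, not Spring APIs):

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class CaptureDemo {

    /**
     * Runs the given action while System.out is redirected to a buffer,
     * restores the original stream, and returns everything the action printed.
     */
    public static String capture(Runnable action) {
        PrintStream original = System.out;
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        System.setOut(new PrintStream(buffer, true));
        try {
            action.run();
        } finally {
            // Always restore, even if the action throws.
            System.setOut(original);
        }
        return buffer.toString();
    }

    public static void main(String[] args) {
        String out = capture(() -> System.out.println("processing ok"));
        if (!out.contains("ok")) {
            throw new AssertionError("expected captured output to contain 'ok'");
        }
        System.out.println("captured: " + out.trim());
    }
}
```

Since the asker is on JUnit 5, the supported route is OutputCaptureExtension, which does the equivalent for you and hands a CapturedOutput argument to the test method; the sketch above is only to show why asserting on log output this way works regardless of language.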

Loading test application.yml properties

I have a test properties file under src/test/resources/application.yml. But I cannot get the values to load in my unit test. I have the following class:
@ConfigurationProperties("snmp")
open class SnmpProperties {
    var port: Int = 1611
    lateinit var protocol: String
    lateinit var host: String

    override fun toString(): String {
        return "SnmpProperties(port=$port, protocol='$protocol', host='$host')"
    }
}
which in the production code, loads in the values from /src/main/resources/application.yml.
snmp:
  port: 1161
  protocol: udp
  host: 0.0.0.0
Unit test class:
@CamelSpringBootTest
@SpringBootApplication
@EnableAutoConfiguration
open class SnmpRouteTest : CamelTestSupport() {
    @Autowired
    lateinit var snmpProperties: SnmpProperties

    @Mock
    lateinit var repository: IPduEventRepository

    @InjectMocks
    lateinit var snmpTrapRoute: SnmpTrapRoute

    @Before
    fun setup() {
        initMocks(this)
    }
I have tried adding a test profile to each application.yml file to see if adding @ActiveProfiles("test") worked, but it didn't.
src/main/resources/application.yml &
src/test/resources/application.yml
# Test profile
spring:
  profiles: test

snmp:
  port: 1161
  protocol: udp
  host: 0.0.0.0
I've also created a TestConfiguration class which creates the SnmpProperties bean, and autowired it into the test class using @EnableConfigurationProperties(TestConfiguration::class):
@Configuration
@EnableConfigurationProperties(SnmpProperties::class)
open class TestConfiguration {
    @Bean
    open fun snmpProperties() = SnmpProperties()
}
Again, no go. The error I get is:
Cannot instantiate #InjectMocks field named 'snmpTrapRoute' of type 'class org.meanwhile.in.hell.camel.snmp.receiver.route.SnmpRoute'.
You haven't provided the instance at field declaration so I tried to construct the instance.
However the constructor or the initialization block threw an exception : Parameter specified as non-null is null: method org.meanwhile.in.hell.camel.snmp.receiver.route.SnmpTrapRoute.<init>, parameter snmpProperties
Make sure to check your project structure. The properties file should be on the classpath for Spring Boot to find and use it.
See, for example, the standard directory layout defined by Maven: https://maven.apache.org/guides/introduction/introduction-to-the-standard-directory-layout.html
With Maven, your configuration files should be put in these directories:
src/main/resources/application.yml
src/test/resources/application.yml
It looks like the bean is not created (hence the null error).
Try to either:
add @Configuration on top of your SnmpProperties configuration class, or
add @EnableConfigurationProperties(SnmpProperties.class) on top of your test class.
Source: https://www.baeldung.com/configuration-properties-in-spring-boot
@CamelSpringBootTest
@SpringBootTest(classes = [SnmpTrapReceiverCamelApplication::class])
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@DisableJmx(false)
@ExtendWith(MockitoExtension::class)
@EnableAutoConfiguration
class SnmpTrapRouteTest {
    object TestSnmpConstants {
        const val SNMP_REAL_ENDPOINT_ID = "snmp-trap-route"
        const val SNMP_DIRECT_REPLACEMENT_ENDPOINT = "direct:snmp-from"
        const val TRAP_REQUEST_ID = 123456789
        const val TRAP_OID = "1.2.3.4.5"
        const val TRAP_PAYLOAD = "snmp-trap-payload"
    }

    @MockBean
    lateinit var repository: IPduEventRepository

    @Produce
    lateinit var producerTemplate: ProducerTemplate

    @Autowired
    lateinit var camelContext: CamelContext

    @Test
    @Throws(Exception::class)
    fun `Should call save method on the repository when PDU TRAP event supplied`() {
        // Replace our SNMP consumer route with a dummy route that can be called from a producer internally.
        // Since our snmp endpoint is an asynchronous consumer (meaning it only receives data from external events)
        // we need to use the "direct:" component to allow a producer to internally call what is ordinarily an external
        // event-driven endpoint. Otherwise we will get a Connection Refused error, as we cannot access the external
        // system/socket.
        AdviceWithRouteBuilder.adviceWith(camelContext, TestSnmpConstants.SNMP_REAL_ENDPOINT_ID) { routeBuilder ->
            routeBuilder.replaceFromWith(TestSnmpConstants.SNMP_DIRECT_REPLACEMENT_ENDPOINT)
        }

        // Create the PDU object to send to the SNMP endpoint
        val trap = PDU()
        trap.type = PDU.TRAP
        trap.requestID = Integer32(TestSnmpConstants.TRAP_REQUEST_ID)
        trap.add(VariableBinding(OID(TestSnmpConstants.TRAP_OID), OctetString(TestSnmpConstants.TRAP_PAYLOAD)))

        // "direct:" endpoints only send DefaultMessage objects. These are not castable to SnmpMessage objects,
        // so we need to overwrite the exchange IN message to be an SnmpMessage object.
        val exchange = DefaultExchange(camelContext)
        exchange.setIn(SnmpMessage(camelContext, trap))

        // ProducerTemplates need a default endpoint specified.
        // The ProducerTemplate provides us with a producer that can directly deliver messages to consumers defined
        // in the camelContext, using the "direct:" component (see above).
        producerTemplate.setDefaultEndpointUri(TestSnmpConstants.SNMP_DIRECT_REPLACEMENT_ENDPOINT)
        producerTemplate.send(exchange)

        // Verify that repository.save() was invoked
        verify(repository, atLeast(1)).save(any())
    }
}

Spring Cloud Messaging Source is not sending messages to Kafka broker

I am following the 'Spring Microservices In Action' book, with some small deviations from the format chosen by the author. Namely, I am using Kotlin and Gradle rather than Java and Maven. Other than that, I am mostly following the code as presented.
In the chapter on Messaging I am running into a problem - I cannot publish a message using the Source class I am autowiring into my SimpleSourceBean.
I know the general setup is OK, as the Kafka topic is created and I see the corresponding log messages on application startup. I've tried autowiring the source explicitly in the class body as well as in the constructor, but with no success in either case.
Application class
@SpringBootApplication
@EnableEurekaClient
@EnableBinding(Source::class)
@EnableCircuitBreaker
class OrganizationServiceApplication {
    @Bean
    @LoadBalanced
    fun getRestTemplate(): RestTemplate {
        val restTemplate = RestTemplate()
        val interceptors = restTemplate.interceptors
        interceptors.add(UserContextInterceptor())
        restTemplate.interceptors = interceptors
        return restTemplate
    }
}

fun main(args: Array<String>) {
    runApplication<OrganizationServiceApplication>(*args)
}
This is the SimpleSourceBean implementation:
@Component
class SimpleSourceBean {
    @Autowired
    lateinit var source: Source

    val logger = LoggerFactory.getLogger(this.javaClass)

    fun publishOrgChange(action: String, orgId: String) {
        logger.debug("Sending Kafka message $action for Organization $orgId on source $source")
        val change = OrganizationChangeModel(
            OrganizationChangeModel::class.java.typeName,
            action,
            orgId,
            UserContext.correlationId!!)
        logger.debug("change message: $change")
        source.output()
            .send(MessageBuilder
                .withPayload(change)
                .build())
        logger.debug("Sent Kafka message $action for Organization $orgId successfully")
    }
}
and this is the Service class that uses the SimpleSourceBean to send the message to Kafka:
@Component
class OrganizationService {
    @Autowired
    lateinit var organizationRepository: OrganizationRepository

    @Autowired
    lateinit var simpleSourceBean: SimpleSourceBean

    val logger = LoggerFactory.getLogger(OrganizationService::class.java)

    // some omissions for brevity

    @HystrixCommand(
        fallbackMethod = "fallbackUpdate",
        commandKey = "updateOrganizationCommandKey",
        threadPoolKey = "updateOrganizationThreadPool")
    fun updateOrganization(organizationId: String, organization: Organization): Organization {
        val updatedOrg = organizationRepository.save(organization)
        simpleSourceBean.publishOrgChange("UPDATE", organizationId)
        return updatedOrg
    }

    private fun fallbackUpdate(organizationId: String, organization: Organization) =
        Organization(id = "000-000-00", name = "update not saved", contactEmail = "", contactName = "", contactPhone = "")

    @HystrixCommand
    fun saveOrganization(organization: Organization): Organization {
        val orgToSave = organization.copy(id = UUID.randomUUID().toString())
        val savedOrg = organizationRepository.save(orgToSave)
        simpleSourceBean.publishOrgChange("SAVE", savedOrg.id)
        return savedOrg
    }
}
The log messages
organizationservice_1 | 2019-08-23 23:15:33.939 DEBUG 18 --- [ionThreadPool-2] S.O.events.source.SimpleSourceBean : Sending Kafka message UPDATE for Organization e254f8c-c442-4ebe-a82a-e2fc1d1ff78a on source null
organizationservice_1 | 2019-08-23 23:15:33.940 DEBUG 18 --- [ionThreadPool-2] S.O.events.source.SimpleSourceBean : change message: OrganizationChangeModel(type=SpringMicroservicesInAction.OrganizationService.events.source.OrganizationChangeModel, action=UPDATE, organizationId=e254f8c-c442-4ebe-a82a-e2fc1d1ff78a, correlationId=c84d288f-bfd6-4217-9026-8a45eab058e1)
organizationservice_1 | 2019-08-23 23:15:33.941 DEBUG 18 --- [ionThreadPool-2] o.s.c.s.m.DirectWithAttributesChannel : preSend on channel 'output', message: GenericMessage [payload=OrganizationChangeModel(type=SpringMicroservicesInAction.OrganizationService.events.source.OrganizationChangeModel, action=UPDATE, organizationId=e254f8c-c442-4ebe-a82a-e2fc1d1ff78a, correlationId=c84d288f-bfd6-4217-9026-8a45eab058e1), headers={id=05799740-f8cf-85f8-54f8-74fce2679909, timestamp=1566602133941}]
organizationservice_1 | 2019-08-23 23:15:33.945 DEBUG 18 --- [ionThreadPool-2] tractMessageChannelBinder$SendingHandler : org.springframework.cloud.stream.binder.AbstractMessageChannelBinder$SendingHandler#38675bb5 received message: GenericMessage [payload=byte[224], headers={contentType=application/json, id=64e1e8f1-45f4-b5e6-91d7-c2df28b3d6cc, timestamp=1566602133943}]
organizationservice_1 | 2019-08-23 23:15:33.946 DEBUG 18 --- [ionThreadPool-2] nder$ProducerConfigurationMessageHandler : org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder$ProducerConfigurationMessageHandler#763a88a received message: GenericMessage [payload=byte[224], headers={contentType=application/json, id=7be5d188-5309-cba9-8297-74431c410152, timestamp=1566602133945}]
There are no further messages logged, including the final DEBUG log statement of SimpleSourceBean.
Checking inside the Kafka container if there are any messages on the 'orgChangeTopic' topic, it comes up empty:
root#99442804288f:/opt/kafka_2.11-0.10.1.0/bin# ./kafka-console-consumer.sh --from-beginning --topic orgChangeTopic --bootstrap-server 0.0.0.0:9092
Processed a total of 0 messages
Any pointer to where my problem might lie is greatly appreciated
Edit: adding the application.yml:
spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orgChangeTopic
          content-type: application/json
      kafka:
        binder:
          zkNodes: "http://kafkaserver:2181"
          brokers: "http://kafkaserver:9092"
# omitting some irrelevant config
logging:
  level:
    org.apache.kafka: DEBUG
    org.springframework.cloud: DEBUG
    org.springframework.web: WARN
    springmicroservicesinaction.organizationservice: DEBUG
excerpt of the build.gradle file with relevant dependencies:
dependencies {
    // kotlin, spring boot, etc
    implementation("org.springframework.cloud:spring-cloud-stream:2.2.0.RELEASE")
    implementation("org.springframework.cloud:spring-cloud-starter-stream-kafka:2.2.0.RELEASE")
}
You need to show your application properties as well. Your Kafka version is very old; 0.10.x.x doesn't support headers. What version of Spring Cloud Stream are you using? Modern versions require a Kafka broker that supports headers (0.11, or preferably later - the current release is 2.3), unless you set the headerMode to none.
That said, I would expect to see an error message if we tried to send headers to a broker version that doesn't support them.
implementation("org.springframework.cloud:spring-cloud-stream:2.2.0.RELEASE")
Also note that with modern versions, you no longer need
zkNodes: "http://kafkaserver:2181"
The kafka-clients version used by 2.2.0 supports topic provisioning via the Kafka broker directly and we no longer need to connect to zookeeper.
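If the old 0.10.x broker must stay for now, header embedding can be switched off per binding; a sketch of the relevant property (Spring Cloud Stream 2.x naming, shown for the output binding from the question):

```yaml
spring:
  cloud:
    stream:
      bindings:
        output:
          producer:
            headerMode: none   # don't send message headers to the broker
```

The longer-term fix, as noted above, is upgrading the broker to 0.11 or later so headers work as expected.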
