Trying out a sample project using Spring Cloud Stream + Kafka Streams, but the messages published to the input topic/queue are not consumed by the processor method (the one that takes a KStream argument).
Binding Definition
public interface WordCountChannelBindings {
    // Channel to PUBLISH and FETCH 'words'
    String _wordsOutput = "words_output_channel";
    String _wordsInput = "words_input_channel";
    // Channel to PUBLISH and FETCH 'words-count' details
    String _countOutput = "counts_output_channel";
    String _countInput = "counts_input_channel";

    // Source
    @Output(_wordsOutput)
    MessageChannel _wordsOutput();

    // Sink
    @Input(_wordsInput)
    KStream<String, PageViewEvent> _wordsInput();

    // Source
    @Output(_countOutput)
    KStream<String, Long> _countOutput();

    // Sink
    @Input(_countInput)
    KTable<String, Long> _countInput();
}
Producer
@Scheduled(fixedDelay = 1000)
public void wordsProducer() {
    List<String> names = Arrays.asList("mfisher", "dyser", "schacko", "abilan", "ozhurakousky", "grussell");
    List<String> pages = Arrays.asList("blog", "sitemap", "initializr", "news", "colophon", "about");
    String rPage = pages.get(new Random().nextInt(pages.size()));
    String rName = names.get(new Random().nextInt(names.size())); // pick from names, not pages
    PageViewEvent pageViewEvent = new PageViewEvent(rName, rPage, Math.random() > .5 ? 10 : 1000);
    // Publish the words into the OUTPUT Topic
    this.wordCountChannelBindings._wordsOutput().send(
            MessageBuilder.withPayload(pageViewEvent)
                    .build());
    log.info("Words published - {}", pageViewEvent);
}
Processor
@Component
public class WordsStreamProcessor {

    @StreamListener
    @SendTo(WordCountChannelBindings._countOutput)
    public KStream<String, Long> process(@Input(WordCountChannelBindings._wordsInput) KStream<String, PageViewEvent> input) {
        log.info("Process data - {}", input);
        return input.filter((key, value) -> value.getDuration() > 10)
                .map((key, value) -> new KeyValue<>(value.getPage(), "0"))
                .groupByKey()
                .count(Materialized.as("wordscount"))
                .toStream();
    }
}
Consumer
@StreamListener
public void wordsCountConsumer(@Input(WordCountChannelBindings._countInput) KTable<String, Long> wordsCountDetails) {
    log.info("Consumed Result - {}", wordsCountDetails);
}
Spring Boot main class
@EnableScheduling
@EnableBinding(WordCountChannelBindings.class)
@SpringBootApplication
public class SpringCloudStreamKafkaApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringCloudStreamKafkaApplication.class, args);
    }
}
application.yml
spring.cloud.stream.kafka.binder:
  brokers:
    - localhost:9092
spring.cloud.stream.kafka.streams.binder:
  applicationId: word-count-sample
  configuration:
    commit.interval.ms: 100
    default.key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
    default.value.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.bindings.words_output_channel:
  destination: words_topic
  producer:
    headerMode: none
spring.cloud.stream.bindings.words_input_channel:
  destination: words_topic
  consumer:
    headerMode: none
spring.cloud.stream.bindings.counts_output_channel:
  destination: counts_topic
  producer:
    useNativeEncoding: true
spring.cloud.stream.bindings.counts_input_channel:
  destination: counts_topic
  consumer:
    useNativeDecoding: true
    headerMode: none
  group: wordscount
  contentType: application/json
spring.cloud.stream.kafka.streams.bindings.counts_output_channel:
  producer:
    keySerde: org.apache.kafka.common.serialization.Serdes$StringSerde
    valueSerde: org.apache.kafka.common.serialization.Serdes$LongSerde
spring.cloud.stream.kafka.streams.bindings.counts_input_channel:
  consumer:
    keySerde: org.apache.kafka.common.serialization.Serdes$StringSerde
    valueSerde: org.apache.kafka.common.serialization.Serdes$LongSerde
Logs
2020-07-17 12:31:45.893 INFO 17236 --- [ask-scheduler-4] l.k.s.s.c.stream.producer.WordsProducer : Words published - PageViewEvent(userId=about, page=colophon, duration=1000)
2020-07-17 12:31:46.895 INFO 17236 --- [ask-scheduler-8] l.k.s.s.c.stream.producer.WordsProducer : Words published - PageViewEvent(userId=initializr, page=blog, duration=10)
2020-07-17 12:31:47.899 INFO 17236 --- [ask-scheduler-3] l.k.s.s.c.stream.producer.WordsProducer : Words published - PageViewEvent(userId=blog, page=news, duration=1000)
2020-07-17 12:31:48.900 INFO 17236 --- [ask-scheduler-9] l.k.s.s.c.stream.producer.WordsProducer : Words published - PageViewEvent(userId=sitemap, page=about, duration=10)
As shown in the logs above, the PageViewEvent is published to the topic every second, but the processor method that is supposed to transform the event is not consuming the messages. No error is seen in the log.
Kindly help to get this working.
Initially tried with
<spring-cloud.version>Hoxton</spring-cloud.version>
<version>2.3.1.RELEASE</version>
and also with
<spring-cloud.version>Finchley.RELEASE</spring-cloud.version>
<version>2.0.1.RELEASE</version>
But I am facing the same issue with both.
Related
I have written the following code to leverage the Cloud Stream functional approach to get events from RabbitMQ and publish them to Kafka. I am able to achieve the primary goal, but with a caveat: if the Kafka broker goes down for any reason while the application is running, I get logs saying the Kafka broker is down, but at the same time I want to stop consuming events from RabbitMQ, or, until the broker comes back up, those messages should be routed to an exchange or a DLQ topic. I have seen suggestions in many places to use producer sync: true, but in my case that is not helping. A lot of people also mention @ServiceActivator(inputChannel = "error-topic") for an error topic when there is a failure at the target channel, but that method never gets executed either. In short, I don't want to lose the messages received from RabbitMQ while Kafka is down for any reason.
application.yml
management:
  health:
    binders:
      enabled: true
    kafka:
      enabled: true
server:
  port: 8081
spring:
  rabbitmq:
    publisher-confirms: true
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      properties:
        max.block.ms: 100
    admin:
      fail-fast: true
  cloud:
    function:
      definition: handle
    stream:
      bindingRetryInterval: 30
      rabbit:
        bindings:
          handle-in-0:
            consumer:
              bindingRoutingKey: MyRoutingKey
              exchangeType: topic
              requeueRejected: true
              acknowledgeMode: AUTO
              # ackMode: MANUAL
              # acknowledge-mode: MANUAL
              # republishToDlq: false
      kafka:
        binder:
          considerDownWhenAnyPartitionHasNoLeader: true
          producer:
            properties:
              max.block.ms: 100
          brokers:
            - localhost
      bindings:
        handle-in-0:
          destination: test_queue
          binder: rabbit
          group: queue
        handle-out-0:
          destination: mytopic
          producer:
            sync: true
            errorChannelEnabled: true
          binder: kafka
      binders:
        error:
          destination: myerror
        rabbit:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                username: guest
                password: guest
                virtual-host: rahul_host
        kafka:
          type: kafka
json:
  cuttoff:
    size:
      limit: 1000
CloudStreamConfig.java
@Configuration
public class CloudStreamConfig {

    private static final Logger log = LoggerFactory.getLogger(CloudStreamConfig.class);

    @Autowired
    ChunkService chunkService;

    @Bean
    public Function<Message<RmaValues>, Collection<Message<RmaValues>>> handle() {
        return rmaValue -> {
            log.info("processor runs : message received with request id : {}", rmaValue.getPayload().getRequestId());
            ArrayList<Message<RmaValues>> msgList = new ArrayList<Message<RmaValues>>();
            try {
                List<RmaValues> dividedJson = chunkService.getDividedJson(rmaValue.getPayload());
                for (RmaValues rmaValues : dividedJson) {
                    msgList.add(MessageBuilder.withPayload(rmaValues).build());
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
            Channel channel = rmaValue.getHeaders().get(AmqpHeaders.CHANNEL, Channel.class);
            Long deliveryTag = rmaValue.getHeaders().get(AmqpHeaders.DELIVERY_TAG, Long.class);
            // try {
            //     channel.basicAck(deliveryTag, false);
            // } catch (IOException e) {
            //     e.printStackTrace();
            // }
            return msgList;
        };
    }

    @ServiceActivator(inputChannel = "error-topic")
    public void errorHandler(ErrorMessage em) {
        log.info("---------------------------------------got error message over errorChannel: {}", em);
        if (null != em.getPayload() && em.getPayload() instanceof KafkaSendFailureException) {
            KafkaSendFailureException kafkaSendFailureException = (KafkaSendFailureException) em.getPayload();
            if (kafkaSendFailureException.getRecord() != null && kafkaSendFailureException.getRecord().value() != null
                    && kafkaSendFailureException.getRecord().value() instanceof byte[]) {
                log.warn("error channel message. Payload {}", new String((byte[]) (kafkaSendFailureException.getRecord().value())));
            }
        }
    }
}
KafkaProducerConfiguration.java
@Configuration
public class KafkaProducerConfiguration {

    @Value(value = "${spring.kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate(producerFactory());
    }
}
RmModelOutputIngestionApplication.java
@SpringBootApplication(scanBasePackages = "com.abb.rm")
public class RmModelOutputIngestionApplication {

    private static final Logger LOGGER = LogManager.getLogger(RmModelOutputIngestionApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(RmModelOutputIngestionApplication.class, args);
    }

    @Bean("objectMapper")
    public ObjectMapper objectMapper() {
        ObjectMapper mapper = new ObjectMapper();
        LOGGER.info("Returning object mapper...");
        return mapper;
    }
}
First, it seems like you are creating too much unnecessary code. Why do you have ObjectMapper? Why do you have KafkaTemplate? Why do you have ProducerFactory? These are all already provided for you.
You really only need one function and possibly an error handler, depending on the error-handling strategy you select, which brings me to the error-handling topic. There are three primary ways of handling errors. Here is the link to the doc explaining them all and providing samples. Please read through that and modify your app accordingly, and if something doesn't work or is unclear feel free to follow up.
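For illustration only, here is a minimal sketch of one such strategy: a handler on the global error channel. It assumes errorChannelEnabled: true on the Kafka producer binding (as in the configuration above), so that asynchronous send failures are published as ErrorMessage instances; the class name is made up, and the exact channel to subscribe to should follow whichever strategy you pick from the docs.
import org.springframework.cloud.stream.binder.kafka.KafkaSendFailureException;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.support.ErrorMessage;
import org.springframework.stereotype.Component;

// Hedged sketch: listens on the global Spring Integration "errorChannel".
// With errorChannelEnabled: true on the producer binding, async Kafka send
// failures arrive here wrapped in a KafkaSendFailureException.
@Component
public class GlobalErrorChannelHandler {

    @ServiceActivator(inputChannel = "errorChannel")
    public void handle(ErrorMessage errorMessage) {
        if (errorMessage.getPayload() instanceof KafkaSendFailureException) {
            KafkaSendFailureException failure = (KafkaSendFailureException) errorMessage.getPayload();
            // failure.getRecord() carries the ProducerRecord that could not be sent,
            // so it can be logged, re-queued, or routed to a DLQ topic from here.
        }
    }
}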
Any exception that occurs during processing should go to the DLQ, but currently this is not happening.
I am getting org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
and it is not going to the DLQ. I am using the Spring Cloud Stream KStream binder. All topics are created at startup of the app.
My application.yml
spring:
  application:
    name: demo-stream
  cloud:
    stream:
      function:
        definition: rawProcessor
      bindings:
        rawProcessor-in-0:
          destination: raw
          consumer:
            enableDlq: true
            dlqName: dlq
        rawProcessor-out-0:
          destination: fx
        rawProcessor-out-1:
          destination: cp
        rawProcessor-out-2:
          destination: cl
      kafka:
        streams:
          bindings:
            rawProcessor-in-0:
              consumer:
                enableDlq: true
                dlqName: dlq
                valueSerde: org.apache.kafka.common.serialization.Serdes$StringSerde
            rawProcessor-out-0:
              producer:
                keySerde: org.apache.kafka.common.serialization.Serdes$StringSerde
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
            rawProcessor-out-1:
              producer:
                keySerde: org.apache.kafka.common.serialization.Serdes$StringSerde
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
            rawProcessor-out-2:
              producer:
                keySerde: org.apache.kafka.common.serialization.Serdes$StringSerde
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
          binder:
            deserializationExceptionHandler: sendToDlq
            configuration:
              schema.registry.url: http://localhost:8081
              specific.avro.reader: true
My processing Class
public class RawKafkaMessageStream {

    private final Classifier classifier;

    private final static String STREAM_SPLIT_BRANCH_PREFIX = "split-";

    @Bean
    public Function<KStream<String, String>, KStream<String, SpecificRecordBase>[]> rawProcessor() {
        return rawKStream -> {
            final Map<String, KStream<String, RecordHolder<SpecificRecordBase>>> recordKStreamMap = rawKStream
                    .map(this::convertIntoKeyValueRecord)
                    .peek((key, value) -> log.info("Key: {}, value: {}", key, value))
                    .filter((key, value) -> value != null)
                    .split(Named.as(STREAM_SPLIT_BRANCH_PREFIX))
                    .branch((key, value) -> value.getType() == RecordType.CP, Branched.as(RecordType.CP.name()))
                    .branch((key, value) -> value.getType() == RecordType.CL, Branched.as(RecordType.CL.name()))
                    .noDefaultBranch();
            KStream<String, SpecificRecordBase> validatedCPStream = getValidatedRecordStream(recordKStreamMap, RecordType.CP);
            KStream<String, SpecificRecordBase> validatedCLStream = getValidatedRecordStream(recordKStreamMap, RecordType.CL);
            return new KStream[]{validatedCPStream, validatedCLStream};
        };
    }

    private KStream<String, SpecificRecordBase> getValidatedRecordStream(
            Map<String, KStream<String, RecordHolder<SpecificRecordBase>>> recordKStreamMap,
            RecordType recordType) {
        return recordKStreamMap.get(STREAM_SPLIT_BRANCH_PREFIX + recordType.name());
    }

    private KeyValue<String, RecordHolder<SpecificRecordBase>> convertIntoKeyValueRecord(final String key,
                                                                                         final String value) {
        log.debug("Raw msg received with key: {} and payload: {}", key, value); // key will be null here
        final KeyValue<String, RecordHolder<SpecificRecordBase>> processing = classifier.classify(value);
        log.info("Processing msg with key: {} and payload: {}", processing.key, processing.value);
        return processing;
    }
}
I'm trying to consume Confluent Avro messages from a Kafka topic as a KStream with Spring Boot 2.0.
I was able to consume the messages as a MessageChannel but not as a KStream.
@Input(ORGANIZATION)
KStream<String, Organization> organizationMessageChannel();

@StreamListener
public void processOrganization(@Input(KstreamBinding.ORGANIZATION) KStream<String, Organization> organization) {
    log.info("Organization Received:" + organization);
}
Exception:
Exception in thread "pcs-7bb7b444-044d-41bb-945d-450c902337ff-StreamThread-3" org.apache.kafka.streams.errors.StreamsException: stream-thread [pcs-7bb7b444-044d-41bb-945d-450c902337ff-StreamThread-3] Failed to rebalance.
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:860)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:808)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
Caused by: org.apache.kafka.streams.errors.StreamsException: Failed to configure value serde class io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
    at org.apache.kafka.streams.StreamsConfig.defaultValueSerde(StreamsConfig.java:859)
    at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.<init>(AbstractProcessorContext.java:59)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.<init>(ProcessorContextImpl.java:42)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:134)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:404)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:365)
    at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:350)
    at org.apache.kafka.streams.processor.internals.TaskManager.addStreamTasks(TaskManager.java:137)
    at org.apache.kafka.streams.processor.internals.TaskManager.createTasks(TaskManager.java:88)
    at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:259)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:264)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:367)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:316)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:295)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1146)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1111)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:851)
    ... 3 more
Caused by: io.confluent.common.config.ConfigException: Missing required configuration "schema.registry.url" which has no default value.
    at io.confluent.common.config.ConfigDef.parse(ConfigDef.java:243)
    at io.confluent.common.config.AbstractConfig.<init>(AbstractConfig.java:78)
    at io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig.<init>(AbstractKafkaAvroSerDeConfig.java:61)
    at io.confluent.kafka.serializers.KafkaAvroSerializerConfig.<init>(KafkaAvroSerializerConfig.java:32)
    at io.confluent.kafka.serializers.KafkaAvroSerializer.configure(KafkaAvroSerializer.java:48)
    at io.confluent.kafka.streams.serdes.avro.SpecificAvroSerializer.configure(SpecificAvroSerializer.java:58)
    at io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde.configure(SpecificAvroSerde.java:107)
    at org.apache.kafka.streams.StreamsConfig.defaultValueSerde(StreamsConfig.java:855)
    ... 19 more
Based on the error, I think I'm missing the schema.registry.url configuration for Confluent.
I had a quick look at the sample here
I'm a bit lost on how to do the same with Spring Cloud Stream using the StreamListener.
Does this need to be a separate configuration, or is there a way to configure the schema.registry.url that Confluent is looking for in application.yml itself?
here is the code repo https://github.com/naveenpop/springboot-kstream-confluent
Organization.avsc
{
"namespace":"com.test.demo.avro",
"type":"record",
"name":"Organization",
"fields":[
{
"name":"orgId",
"type":"string",
"default":"null"
},
{
"name":"orgName",
"type":"string",
"default":"null"
},
{
"name":"orgType",
"type":"string",
"default":"null"
},
{
"name":"parentOrgId",
"type":"string",
"default":"null"
}
]
}
DemokstreamApplication.java
@SpringBootApplication
@EnableSchemaRegistryClient
@Slf4j
public class DemokstreamApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemokstreamApplication.class, args);
    }

    @Component
    public static class organizationProducer implements ApplicationRunner {

        @Autowired
        private KafkaProducer kafkaProducer;

        @Override
        public void run(ApplicationArguments args) throws Exception {
            log.info("Starting: Run method");
            List<String> names = Arrays.asList("blue", "red", "green", "black", "white");
            List<String> pages = Arrays.asList("whiskey", "wine", "rum", "jin", "beer");
            Runnable runnable = () -> {
                String rPage = pages.get(new Random().nextInt(pages.size()));
                String rName = names.get(new Random().nextInt(names.size()));
                try {
                    this.kafkaProducer.produceOrganization(rPage, rName, "PARENT", "111");
                } catch (Exception e) {
                    log.info("Exception :" + e);
                }
            };
            Executors.newScheduledThreadPool(1).scheduleAtFixedRate(runnable, 1, 1, TimeUnit.SECONDS);
        }
    }
}
KafkaConfig.java
@Configuration
public class KafkaConfig {

    @Value("${spring.cloud.stream.schemaRegistryClient.endpoint}")
    private String endpoint;

    @Bean
    public SchemaRegistryClient confluentSchemaRegistryClient() {
        ConfluentSchemaRegistryClient client = new ConfluentSchemaRegistryClient();
        client.setEndpoint(endpoint);
        return client;
    }
}
KafkaConsumer.java
@Slf4j
@EnableBinding(KstreamBinding.class)
public class KafkaConsumer {

    @StreamListener
    public void processOrganization(@Input(KstreamBinding.ORGANIZATION_INPUT) KStream<String, Organization> organization) {
        organization.foreach((s, organization1) -> log.info("KStream Organization Received:" + organization1));
    }
}
KafkaProducer.java
@EnableBinding(KstreamBinding.class)
public class KafkaProducer {

    @Autowired
    private KstreamBinding kstreamBinding;

    public void produceOrganization(String orgId, String orgName, String orgType, String parentOrgId) {
        try {
            Organization organization = Organization.newBuilder()
                    .setOrgId(orgId)
                    .setOrgName(orgName)
                    .setOrgType(orgType)
                    .setParentOrgId(parentOrgId)
                    .build();
            kstreamBinding.organizationOutputMessageChannel()
                    .send(MessageBuilder.withPayload(organization)
                            .setHeader(KafkaHeaders.MESSAGE_KEY, orgName)
                            .build());
        } catch (Exception e) {
            log.error("Failed to produce Organization Message:" + e);
        }
    }
}
KstreamBinding.java
public interface KstreamBinding {

    String ORGANIZATION_INPUT = "organizationInput";
    String ORGANIZATION_OUTPUT = "organizationOutput";

    @Input(ORGANIZATION_INPUT)
    KStream<String, Organization> organizationInputMessageChannel();

    @Output(ORGANIZATION_OUTPUT)
    MessageChannel organizationOutputMessageChannel();
}
Update 1:
I applied the suggestion from dturanski here and the error vanished. However, I am still not able to consume the message as KStream<String, Organization>; there is no error in the console.
Update 2:
Applied the suggestion from sobychacko here, and the message is now consumable, but with empty values in the object.
I've made a commit to the GitHub sample to produce the message from Spring Boot itself, and I am still getting empty values.
Thanks for your time on this issue.
The following implementation will not do what you are intending:
@StreamListener
public void processOrganization(@Input(KstreamBinding.ORGANIZATION) KStream<String, Organization> organization) {
    log.info("Organization Received:" + organization);
}
That log statement is only invoked once, at the bootstrap phase. In order for this to work, you need to invoke some operations on the received KStream and provide your logic there. For example, the following works, where I provide a lambda expression to the foreach method call.
@StreamListener
public void processOrganization(@Input(KstreamBinding.ORGANIZATION) KStream<String, Organization> organization) {
    organization.foreach((s, organization1) -> log.info("Organization Received:" + organization1));
}
You also have an issue in the configuration where you are wrongly assigning the Avro Serde for keys, which are actually Strings. Change it like this:
default:
  key:
    serde: org.apache.kafka.common.serialization.Serdes$StringSerde
  value:
    serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
With these changes, I get the logging statement each time I send something to the topic. However, there is a problem in your sending groovy script: I am not getting any actual data from your Organization domain object, but I will let you figure that out.
Update on the issue with the empty Organization domain object
This happens because you have a mix of serialization strategies going on. You are using Spring Cloud Stream's Avro message converters on the producer side, but on the Kafka Streams processor you are using the Confluent Avro Serdes. I just tried with the Confluent serializers all the way from the producer to the processor, and I was able to see the Organization domain object on the outbound. Here is the modified configuration to make the serialization consistent.
spring:
  application:
    name: kstream
  cloud:
    stream:
      schemaRegistryClient:
        endpoint: http://localhost:8081
      schema:
        avro:
          schema-locations: classpath:avro/Organization.avsc
      bindings:
        organizationInput:
          destination: organization-updates
          group: demokstream.org
          consumer:
            useNativeDecoding: true
        organizationOutput:
          destination: organization-updates
          producer:
            useNativeEncoding: true
      kafka:
        bindings:
          organizationOutput:
            producer:
              configuration:
                key.serializer: org.apache.kafka.common.serialization.StringSerializer
                value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
                schema.registry.url: http://localhost:8081
        streams:
          binder:
            brokers: localhost
            configuration:
              schema.registry.url: http://localhost:8081
              commit:
                interval:
                  ms: 1000
              default:
                key:
                  serde: org.apache.kafka.common.serialization.Serdes$StringSerde
                value:
                  serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
You can also remove the KafkaConfig class as well as the EnableSchemaRegistryClient annotation from the main application class.
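For reference, a minimal sketch (assumptions: imports added by me; the organizationProducer runner from the question stays unchanged and is omitted here for brevity) of what the trimmed-down main class could look like once those pieces are removed:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Hedged sketch: no EnableSchemaRegistryClient and no KafkaConfig class; the
// schema registry is reached only through the schema.registry.url entries in
// the application.yml shown above.
@SpringBootApplication
public class DemokstreamApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemokstreamApplication.class, args);
    }
}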
Try spring.cloud.stream.kafka.streams.binder.configuration.schema.registry.url: ...
I am working on a Spring Boot application that uses the spring-cloud-stream-binder-kafka-streams dependency, and I am trying to test sending an error message to the DLQ when a serde error occurs.
@Slf4j
@Component
@EnableBinding(KafkaBinding.class)
public class AListener {

    @StreamListener
    public void sink(@Input(KafkaBinding.ABINDING) KStream<String, AnOrder> events) {
        log.info("HERE_BEFORE");
        events.foreach((k, v) -> {
            log.info("HERE_AFTER value: {}", v.toString());
            throw new RuntimeException("Failed, should land in dlq topic");
        });
    }
}
public interface KafkaBinding {

    String ABINDING = "some.events";

    @Input(ABINDING)
    public KStream<String, AnOrder> incomingOrder();
}
application.yml
spring:
application:
name: aprocessor
cloud:
stream:
kafka:
streams:
binder:
brokers: localhost:9092
serdeError: sendToDlq
configuration:
commit.interval.ms: 1000
default:
key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
value.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
bindings:
input:
consumer:
enableDlq: true
dlqName: a-dlq
autoCommitOnError: true
autoCommitOffset: true
bindings:
input:
group: a-group
destination: some.events
pos:
destination: some.events
consumer.header-mode: raw
Tests:
@Slf4j
@DirtiesContext
@SpringBootTest
@EmbeddedKafka(
        partitions = 1,
        topics = {"some.events"},
        controlledShutdown = true,
        brokerProperties = {
                "listeners=PLAINTEXT://localhost:9092",
                "port=9092",
                "auto.create.topics.enable=${topics.autoCreate:false}",
                "delete.topic.enable=${topic.delete:true}"
        })
public class AListenerTest {

    private KafkaTemplate<String, String> kafkaTemplate;

    @Autowired private EmbeddedKafkaBroker embeddedKafka;

    @SpyBean private AListener listener;

    private static final String INPUT_TOPIC = "some.events";

    @BeforeEach
    public void setUp() {
        Map<String, Object> senderProperties =
                KafkaTestUtils.senderProps(embeddedKafka.getBrokersAsString());
        ProducerFactory<String, String> producerFactory =
                new DefaultKafkaProducerFactory<>(senderProperties);
        kafkaTemplate = new KafkaTemplate<>(producerFactory);
        kafkaTemplate.setDefaultTopic(INPUT_TOPIC);
    }

    @Test
    public void whenExceptionInConsumer_thenLogToDLQ() {
        String logme = "{\"body\":\"thor\"}";
        kafkaTemplate.sendDefault(logme);
        log.info("<<<<DATA>>>> {}", logme);
    }
}
Test fails with the following stack trace:
Caused by: org.springframework.context.ApplicationContextException: Failed to start bean 'inputBindingLifecycle'; nested exception is java.lang.IllegalArgumentException: DLQ support is not available for anonymous subscriptions
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:185)
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:53)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:360)
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:158)
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:122)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:893)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:552)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:316)
at org.springframework.boot.test.context.SpringBootContextLoader.loadContext(SpringBootContextLoader.java:127)
at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContextInternal(DefaultCacheAwareContextLoaderDelegate.java:99)
at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:117)
... 54 more
Caused by: java.lang.IllegalArgumentException: DLQ support is not available for anonymous subscriptions
at org.springframework.util.Assert.isTrue(Assert.java:118)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.doProvisionConsumerDestination(KafkaTopicProvisioner.java:186)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionConsumerDestination(KafkaTopicProvisioner.java:161)
at org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderUtils.prepareConsumerBinding(KafkaStreamsBinderUtils.java:53)
at org.springframework.cloud.stream.binder.kafka.streams.KStreamBinder.doBindConsumer(KStreamBinder.java:93)
at org.springframework.cloud.stream.binder.kafka.streams.KStreamBinder.doBindConsumer(KStreamBinder.java:51)
at org.springframework.cloud.stream.binder.AbstractBinder.bindConsumer(AbstractBinder.java:142)
at org.springframework.cloud.stream.binding.BindingService.doBindConsumer(BindingService.java:144)
at org.springframework.cloud.stream.binding.BindingService.bindConsumer(BindingService.java:112)
at org.springframework.cloud.stream.binding.BindableProxyFactory.createAndBindInputs(BindableProxyFactory.java:254)
at org.springframework.cloud.stream.binding.InputBindingLifecycle.doStartWithBindable(InputBindingLifecycle.java:58)
at java.base/java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608)
at org.springframework.cloud.stream.binding.AbstractBindingLifecycle.start(AbstractBindingLifecycle.java:48)
at org.springframework.cloud.stream.binding.InputBindingLifecycle.start(InputBindingLifecycle.java:34)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:182)
... 66 more
I expect the test to succeed, the console log to show that a DLQ topic is created, and to be able to query the DLQ and print the message. What is causing the KafkaTopicProvisioner to throw "IllegalArgumentException: DLQ support is not available for anonymous subscriptions"?
I have already tried steps mentioned in the post here - "Correctly manage DLQ in Spring Cloud Stream Kafka".
Anonymous consumers are not allowed to use DLQ; you need a persistent subscription for that.
Anonymous consumers are those that do not have a consumer group specified.
From the answer you referenced.
bindings:
  input:
    group: so51247113
Also, this is open source, you could have looked at the source code of the KafkaTopicProvisioner...
boolean anonymous = !StringUtils.hasText(group);
Assert.isTrue(!anonymous || !properties.getExtension().isEnableDlq(),
        "DLQ support is not available for anonymous subscriptions");
I want to make an interactive query against my Kafka Streams topic.
At the moment I can send Avro-serialized JSON objects to my topic and read them again with the Avro deserializer.
For this scenario I use the normal MessageChannel binder, and this works as intended.
Now I want to use the Kafka Streams binder and I can't get it to work. Maybe someone can help me out here.
My Configuration:
spring:
  cloud:
    bus:
      enabled: true
    stream:
      schemaRegistryClient.endpoint: http://192.168.99.100:8081
      bindings:
        segments-in:
          destination: segments
          contentType: application/vnd.segments-value.v1+avro
        segments-all:
          destination: segments
          group: segments-all
          consumer:
            headerMode: raw
            useNativeDecoding: true
      kafka:
        binder:
          zkNodes: 192.168.99.100:2181
          brokers: 192.168.99.100:32768
        streams:
          bindings:
            segments-all:
              consumer:
                keySerde: org.apache.kafka.common.serialization.Serdes$StringSerde
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
Kafka Config Class:
@Configuration
public class KafkaConfiguration {

    @Bean
    public MessageConverter classificationMessageConverter() {
        AvroSchemaMessageConverter converter = new AvroSchemaMessageConverter();
        converter.setSchema(Segment.SCHEMA$);
        return converter;
    }
}
Schema Config
@Configuration
public class SchemaRegistryConfiguration {

    @Bean
    public SchemaRegistryClient schemaRegistryClient(@Value("${spring.cloud.stream.schemaRegistryClient.endpoint}") final String endpoint) {
        ConfluentSchemaRegistryClient client = new ConfluentSchemaRegistryClient();
        client.setEndpoint(endpoint);
        return client;
    }
}
And now my Interface
public interface Channels {

    String EVENTS = "segments-in";
    String ALLSEGMENTS = "segments-all";

    @Input(Channels.EVENTS)
    SubscribableChannel events();

    @Input(Channels.ALLSEGMENTS)
    KTable<?, ?> segmentsIn();
}
I always get the following error (warn message), but only when the second channel, segmentsIn(), is open.
org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-3] Connection to node -1 could not be established. Broker may not be available.
With the SubscribableChannel (segments-in) everything works fine, so what am I doing wrong here? How can I get the segments-all channel to work with the Kafka Streams API?
I got the connection working with the following configuration:
spring:
  cloud:
    bus:
      enabled: true
    stream:
      schemaRegistryClient.endpoint: http://192.168.99.100:8081
      bindings:
        segments-in:
          destination: segments
          contentType: application/vnd.segments-value.v1+avro
        segments-all:
          destination: segments
          group: segments-all
          consumer:
            useNativeDecoding: false
        events-out:
          destination: incidents
          group: events-out
          producer:
            useNativeDecoding: false
      kafka:
        binder:
          zkNodes: 192.168.99.100:2181
          brokers: 192.168.99.100:32768
        streams:
          binder:
            zkNodes: 192.168.99.100:2181
            brokers: 192.168.99.100:32768
            configuration:
              schema.registry.url: http://192.168.99.100:8081
              default.key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
              default.value.serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
See the added config for Kafka Streams; however, I cannot query anything with my code.
I use the following snippet:
@StreamListener(Channels.ALLSEGMENTS)
@SendTo(Channels.EVENTS_OUT)
public KStream<Utf8, Long> process(KStream<String, Segment> input) {
    log.info("Read new information");
    return input
            .filter((key, segment) -> segment.getStart().time > 10)
            .map((key, value) -> new KeyValue<>(value.id, value))
            .groupByKey()
            .count(Materialized.as(STORE_NAME))
            .toStream();
}
And this scheduler:
@Scheduled(fixedRate = 30000, initialDelay = 5000)
public void printProductCounts() {
    if (keyValueStore == null) {
        keyValueStore = queryService.getQueryableStoreType(STORE_NAME, QueryableStoreTypes.keyValueStore());
    }
    String id = "21523XDEf";
    System.out.println(keyValueStore.approximateNumEntries());
    System.out.println("Product ID: " + id + " Count: " + keyValueStore.get(id));
}
Output is always:
0
Product ID: 21523XDEf Count: null
Can someone point me in the right direction? What am I doing wrong?