So I am trying to use StreamBridge to dynamically send messages to different topics. I am successful in doing so if my output is a Message<String>, but not a Message<GenericRecord>.
Code example:
@StreamListener(Sink.INPUT)
public void process(@Payload GenericRecord messageValue,
                    @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) GenericRecord messageKey,
                    @Header("Type") String type) {
    log.info("Processing Event --> " + messageValue);
    // Code...
    // convert to Message<GenericRecord>
    Message<GenericRecord> message = ...
    streamBridge.send(type, message);
    log.info("Processed Event --> " + messageValue);
}
The error I get is Caused by: org.springframework.messaging.converter.MessageConversionException: Could not write JSON: Not a map:, which I am guessing is because StreamBridge defaults to acceptedOutputTypes = [application/json]:
2020-06-28 04:42:55.670 INFO 54347 --- [container-0-C-1] o.s.c.f.c.c.SimpleFunctionRegistry : Looking up function 'streamBridge' with acceptedOutputTypes: [application/json]
I tried modifying the accepted output type to Avro by setting the following in my properties, but that did not work:
spring.cloud.stream.function.definition=streamBridge
spring.kafka.producer.key-serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
spring.kafka.producer.value-serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
spring.cloud.stream.bindings.streamBridge-out-0.content-type=application/*+avro
spring.cloud.stream.bindings.streamBridge-out-0.producer.use-native-encoding=true
Any ideas on how to configure StreamBridge to use Avro?
Edit: I also tried streamBridge.send(type, message, MimeType.valueOf("application/*+avro")), but that also failed with a conversion error.
I could not get StreamBridge to work dynamically, so I switched to using a Function:
@Bean
public Function<Message<GenericRecord>, Message<GenericRecord>> process() {
    return message -> {
        // Code...
        String topic = message.getHeaders().get("type", String.class);
        // convert to Message<GenericRecord>
        Message<GenericRecord> outgoingMessage = MessageBuilder...
                .setHeader("spring.cloud.stream.sendto.destination", topic)
                .build();
        return outgoingMessage;
    };
}
Properties file is:
spring.cloud.function.definition=process
spring.cloud.stream.bindings.process-in-0.destination=${consumer_topic}
spring.cloud.stream.bindings.process-in-0.group=${spring.application.name}
spring.cloud.stream.bindings.process-out-0.content-type=application/*+avro
spring.cloud.stream.bindings.process-out-0.producer.use-native-encoding=true
Edit: StreamBridge got fixed to support this: https://github.com/spring-cloud/spring-cloud-stream/issues/2007
You need to set the useNativeEncoding producer property to use a custom serializer.
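For example, a hedged sketch in properties form (the streamBridge-out-0 binding name and the Confluent serializers mirror the question; the per-binding producer configuration keys are the Kafka binder's):
spring.cloud.stream.bindings.streamBridge-out-0.producer.useNativeEncoding=true
spring.cloud.stream.kafka.bindings.streamBridge-out-0.producer.configuration.key.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
spring.cloud.stream.kafka.bindings.streamBridge-out-0.producer.configuration.value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
With native encoding enabled, the binder skips its own message conversion and hands the payload straight to the configured Kafka serializers.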
I have written a Spring Cloud Stream application where producers publish messages to designated Kafka topics. My question is: how can I add a producer callback to receive an ack/confirmation that the message has been successfully published to the topic, like we do in Spring Kafka with producer.send(record, new Callback() { ... }) (keeping the producer asynchronous)? Below is my code:
private final Sinks.Many<Message<?>> responseProcessor = Sinks.many().multicast().onBackpressureBuffer();

@Bean
public Supplier<Flux<Message<?>>> event() {
    return responseProcessor::asFlux;
}

public Message<?> publishEvent(String status) {
    String key = ...;
    // build the outgoing message (payload construction elided)
    Message<?> response = MessageBuilder.withPayload(payload)
            .setHeader(KafkaHeaders.MESSAGE_KEY, key)
            .build();
    responseProcessor.tryEmitNext(response);
    return response;
}
How can I make sure that tryEmitNext has successfully written to the topic?
Is implementing ProducerListener a possible solution? I couldn't find a concrete solution or documentation for this in Spring Cloud Stream.
UPDATE
I have implemented the below now; it seems to work as expected:
@Component
public class MyProducerListener<K, V> implements ProducerListener<K, V> {

    @Override
    public void onSuccess(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata) {
        // Do nothing on success
    }

    @Override
    public void onError(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata, Exception exception) {
        log.error("Producer exception occurred while publishing message : {}, exception : {}", producerRecord, exception);
    }
}

@Bean
ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> customizer(MyProducerListener pl) {
    return (handler, destinationName) -> handler.getKafkaTemplate().setProducerListener(pl);
}
See the Kafka Producer Properties.
recordMetadataChannel
The bean name of a MessageChannel to which successful send results should be sent; the bean must exist in the application context. The message sent to the channel is the sent message (after conversion, if any) with an additional header KafkaHeaders.RECORD_METADATA. The header contains a RecordMetadata object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
Failed sends go to the producer error channel (if configured); see Error Channels. Default: null.
You can add a @ServiceActivator to consume from this channel asynchronously.
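A minimal sketch of that, assuming the output binding is named event-out-0 (matching the event supplier above) and a result channel bean named sendResults; both names are assumptions:
spring.cloud.stream.kafka.bindings.event-out-0.producer.recordMetadataChannel=sendResults

@Bean
public MessageChannel sendResults() {
    return new DirectChannel();
}

@ServiceActivator(inputChannel = "sendResults")
public void handleSendResult(Message<?> sent) {
    // the binder populates this header after a successful send
    RecordMetadata meta = sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
    log.info("Published to topic {} partition {} offset {}", meta.topic(), meta.partition(), meta.offset());
}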
I am writing a demo program using Apache Camel. Our Camel route is called from a Spring Boot scheduler and transfers a file from the source directory C:\CamelDemo\inputFolder to the destination directory C:\CamelDemo\outputFolder.
The Spring Boot scheduler is as follows:
@Component
public class Scheduler {

    @Autowired
    private ProducerTemplate producerTemplate;

    @Scheduled(cron = "#{@getCronValue}")
    public void scheduleJob() {
        System.out.println("Scheduler executing");
        String inputEndpoint = "file:C:\\CamelDemo\\inputFolder?noop=true&sendEmptyMessageWhenIdle=true";
        String outputEndpoint = "file:C:\\CamelDemo\\outputFolder?autoCreate=false";
        Map<String, Object> headerMap = new HashMap<String, Object>();
        headerMap.put("inputEndpoint", inputEndpoint);
        headerMap.put("outputEndpoint", outputEndpoint);
        producerTemplate.sendBodyAndHeaders("direct:transferFile", null, headerMap);
        System.out.println("Scheduler complete");
    }
}
The Apache Camel route is as follows:
@Component
public class FileTransferRoute extends RouteBuilder {

    @Override
    public void configure() {
        errorHandler(defaultErrorHandler()
                .maximumRedeliveries(3)
                .redeliverDelay(1000)
                .retryAttemptedLogLevel(LoggingLevel.WARN));

        from("direct:transferFile")
                .log("Route reached")
                .log("Input Endpoint: ${in.headers.inputEndpoint}")
                .log("Output Endpoint: ${in.headers.outputEndpoint}")
                .pollEnrich().simple("${in.headers.inputEndpoint}")
                .recipientList(header("outputEndpoint"));
                //.to("file:C:\\CamelDemo\\outputFolder?autoCreate=false")
    }
}
When I comment out the line with recipientList() and uncomment to(), i.e. give a static endpoint in to(), the flow works. But when I comment out to() and uncomment recipientList(), it does not work. Please help me route the message to the dynamic endpoint (outputEndpoint).
You are using pollEnrich without specifying an AggregationStrategy. In that case, Camel creates a new OUT message from the retrieved resource without combining it with the original IN message, which means you lose the headers previously set on the IN message.
See documentation : https://camel.apache.org/manual/latest/enrich-eip.html#_a_little_enrich_example_using_java
strategyRef Refers to an AggregationStrategy to be used to merge the reply from the external service, into a single outgoing message. By default Camel will use the reply from the external service as outgoing message.
A simple solution is to define a simple AggregationStrategy on your pollEnrich, which just copies the headers from the IN message onto the new OUT message (note that the OUT message keeps the body of the polled resource rather than the original IN body, but since your scheduler sends a null body that is not a problem here):
from("direct:transferFile")
.log("Route reached")
.log("Input Endpoint: ${in.headers.inputEndpoint}")
.log("Output Endpoint: ${in.headers.outputEndpoint}")
.pollEnrich().simple("${in.headers.inputEndpoint}")
.aggregationStrategy((oldExchange, newExchange) -> {
// Copy all headers from IN message to the new OUT Message
newExchange.getIn().getHeaders().putAll(oldExchange.getIn().getHeaders());
return newExchange;
})
.log("Output Endpoint (after pollEnrich): ${in.headers.outputEndpoint}")
.recipientList(header("outputEndpoint"));
//.to("file:C:\\var\\CamelDemo\\outputFolder?autoCreate=false");
I recently started looking into Spring Cloud Stream for Kafka and have struggled to make the TestBinder work with KStreams. Is this a known limitation, or have I just overlooked something?
This works fine:
String processor:
@StreamListener(TopicBinding.INPUT)
@SendTo(TopicBinding.OUTPUT)
public String process(String message) {
    return message + " world";
}
String test:
@Test
@SuppressWarnings("unchecked")
public void testString() {
    Message<String> message = new GenericMessage<>("Hello");
    topicBinding.input().send(message);
    Message<String> received = (Message<String>) messageCollector.forChannel(topicBinding.output()).poll();
    assertThat(received.getPayload(), equalTo("Hello world"));
}
But when I try to use KStream in my processor, I can't get the TestBinder to work.
KStream processor:
@SendTo(TopicBinding.OUTPUT)
public KStream<String, String> process(
        @Input(TopicBinding.INPUT) KStream<String, String> events) {
    return events.mapValues((value) -> value + " world");
}
KStream test:
@Test
@SuppressWarnings("unchecked")
public void testKstream() {
    Message<String> message = MessageBuilder
            .withPayload("Hello")
            .setHeader(KafkaHeaders.TOPIC, "event.sirism.dev".getBytes())
            .setHeader(KafkaHeaders.MESSAGE_KEY, "Test".getBytes())
            .build();
    topicBinding.input().send(message);
    Message<String> received = (Message<String>) messageCollector.forChannel(topicBinding.output()).poll();
    assertThat(received.getPayload(), equalTo("Hello world"));
}
As you might have noticed, I omitted @StreamListener from the KStream processor; without it, the test binder doesn't seem to find the handler (but with it, the application fails on startup).
Is this a known bug / limitation, or am I just doing something stupid here?
The test binder is only for MessageChannel-based binders (subclasses of AbstractMessageChannelBinder). The KStreamBinder does not use MessageChannels.
You can test using the real binder and an embedded Kafka broker, provided by the spring-kafka-test module.
Also see this issue.
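A hedged sketch of such a test (the topic names events.in/events.out, the consumer group, and the wiring of the processor's bindings to those topics are assumptions, not from the original post; imports from spring-kafka-test and kafka-clients are omitted for brevity):
@SpringBootTest(properties =
        "spring.cloud.stream.kafka.streams.binder.brokers=${spring.embedded.kafka.brokers}")
@EmbeddedKafka(partitions = 1, topics = {"events.in", "events.out"})
public class KStreamProcessorTest {

    @Autowired
    private EmbeddedKafkaBroker embeddedKafka;

    @Test
    public void testKstreamAgainstRealBroker() {
        // Produce "Hello" to the input topic.
        Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka);
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        new KafkaTemplate<>(new DefaultKafkaProducerFactory<String, String>(producerProps))
                .send("events.in", "Test", "Hello");

        // Consume the transformed record from the output topic and assert on it.
        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("testGroup", "false", embeddedKafka);
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        try (Consumer<String, String> consumer =
                new DefaultKafkaConsumerFactory<String, String>(consumerProps).createConsumer()) {
            embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "events.out");
            ConsumerRecord<String, String> record = KafkaTestUtils.getSingleRecord(consumer, "events.out");
            assertThat(record.value(), equalTo("Hello world"));
        }
    }
}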
I am trying to create an event-based system for communication between services, using Apache Kafka as the messaging system and Spring Cloud Stream Kafka.
I have written my Receiver class methods as below.
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeCreatedEvent'")
public void handleEmployeeCreatedEvent(@Payload String payload) {
    logger.info("Received EmployeeCreatedEvent: " + payload);
}
This method specifically catches messages or events related to EmployeeCreatedEvent.
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeTransferredEvent'")
public void handleEmployeeTransferredEvent(@Payload String payload) {
    logger.info("Received EmployeeTransferredEvent: " + payload);
}
This method specifically catches messages or events related to EmployeeTransferredEvent.
@StreamListener(target = Sink.INPUT)
public void handleDefaultEvent(@Payload String payload) {
    logger.info("Received payload: " + payload);
}
This is the default method.
When I run the application, the methods annotated with the condition attribute are never called; I only see the handleDefaultEvent method being called.
I am sending a message to this receiver application from the sending/source app using the CustomMessageSource class below:
@Component
@EnableBinding(Source.class)
public class CustomMessageSource {

    @Autowired
    private Source source;

    public void sendMessage(String payload, String eventType) {
        Message<String> myMessage = MessageBuilder.withPayload(payload)
                .setHeader("eventType", eventType)
                .build();
        source.output().send(myMessage);
    }
}
I am calling the method from my controller in Source App as below,
customMessageSource.sendMessage("Hello","EmployeeCreatedEvent");
The customMessageSource instance is autowired as below,
@Autowired
CustomMessageSource customMessageSource;
Basically, I would like to filter messages received by the Sink/Receiver application and handle them accordingly.
For this I have used the #StreamListener annotation with condition attribute to simulate the behaviour of handling different events.
I am using Spring Cloud Stream Chelsea.SR2 version.
Can someone help me resolve this issue?
It seems like the headers are not being propagated. Make sure you include the custom headers in spring.cloud.stream.kafka.binder.headers; see http://docs.spring.io/autorepo/docs/spring-cloud-stream-docs/Chelsea.SR2/reference/htmlsingle/#_kafka_binder_properties.
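For example, on the producer side:
spring.cloud.stream.kafka.binder.headers=eventType
This tells the Kafka binder to transport the eventType header; in this binder version, headers are embedded by the binder itself, and only the ones listed here are carried across.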
I am using Spring Integration with this configuration:
@Bean
public MessageChannel errorChannel() {
    return new PublishSubscribeChannel();
}

@MessagingGateway(name = "gatewayInbound",
        defaultRequestChannel = "farsRequestChannel", errorChannel = "errorChannel")
public interface GatewayInbound {
    // gateway methods elided; the interface name is a guess, the original omitted it
}
With this configuration I am avoiding the error messages being shown, but I want to produce a basic log entry such as LOGGER.error().
Additionally, I am working with SLF4J and Logback. Thus, the perfect scenario would be to integrate this error message with a similar configuration in my Logback XML. Hence my questions:
Can I use Logback to log the Spring Integration errorChannel logs?
Can I show the error sent to an errorChannel?
Can I personalize this error with an expression like the one below in Logback? If I use LoggingHandler, I see the complete stack trace, and I want to customize this message.
[%-5level] - %d{dd/MM/YYYY HH:mm:ss} - [%file:%line] - %msg%n
@Bean
@ServiceActivator(inputChannel = "myErrorChannel")
public MessageHandler myLogger() {
    return new MessageHandler() {

        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            ErrorMessage em = (ErrorMessage) message;
            String errorMessage = em.getPayload().getMessage();
            // log it
            throw (MessagingException) em.getPayload();
        }
    };
}
If you don't want the exception to be propagated, you can just consume it, but you need to set defaultReplyTimeout=0 on the gateway (and null will be returned).
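A minimal sketch of that gateway change (the interface shape mirrors the configuration above; the method signature is an assumption, and the String form of defaultReplyTimeout matches recent Spring Integration versions):
@MessagingGateway(name = "gatewayInbound",
        defaultRequestChannel = "farsRequestChannel",
        errorChannel = "errorChannel",
        defaultReplyTimeout = "0") // don't wait for a reply; the caller receives null
public interface GatewayInbound {

    Object send(Object payload); // hypothetical gateway method
}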
or
@Bean
@ServiceActivator(inputChannel = "myErrorChannel")
public MessageHandler loggingHandler() {
    LoggingHandler loggingHandler = new LoggingHandler("ERROR");
    loggingHandler.setExpression("payload.message");
    return loggingHandler;
}
(the error will be consumed in this case).
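Since LoggingHandler writes through the normal logging facade (SLF4J/Logback here), you can then shape the output with your pattern in logback.xml. A hedged sketch (the logger name is LoggingHandler's default category; the appender name is an assumption):
<appender name="ERROR_LOG" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
        <pattern>[%-5level] - %d{dd/MM/YYYY HH:mm:ss} - [%file:%line] - %msg%n</pattern>
    </encoder>
</appender>
<logger name="org.springframework.integration.handler.LoggingHandler" level="ERROR">
    <appender-ref ref="ERROR_LOG"/>
</logger>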