I have a Spring Boot application that is subscribed to a Google Cloud Pub/Sub topic. The connection is correct and I am able to read messages and do the necessary processing when a message arrives on the topic.
The problem is when a message containing special characters arrives on that topic, such as "á", "é", "í", "ó", "ú", that is, characters encoded in ISO-8859-1. At that moment I get this error in the traces:
Caused by: com.fasterxml.jackson.core.JsonParseException: Invalid UTF-8 middle byte 0x4e
This is my custom object mapper:
@Configuration
public class MyConfiguration {

    @Bean
    public MyCustomObjectMapper configure() {
        final ObjectMapper mapper = new ObjectMapper();
        mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
        return new MyCustomObjectMapper(mapper);
    }
}
This is a test message received on the topic:
{
  "messages": [
    {
      "data": "{ \"myText\": \"áéíóú\"}"
    }
  ]
}
How could I accept messages with these special characters?
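For reference, the parse error happens because Jackson reads the raw payload bytes as UTF-8 while the publisher encoded them as ISO-8859-1. Below is a minimal sketch, not from the original application, of one way to handle this on the subscriber side, assuming you can get at the raw PubsubMessage; the IsoAwarePayloadReader and MyPayload names are illustrative.

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.pubsub.v1.PubsubMessage;

public class IsoAwarePayloadReader {

    // Hypothetical payload type matching the test message above.
    public static class MyPayload {
        public String myText;
    }

    private final ObjectMapper mapper = new ObjectMapper()
            .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

    // Decode the payload bytes explicitly as ISO-8859-1 and hand Jackson a String,
    // instead of letting it parse the raw bytes (which it treats as UTF-8).
    public MyPayload read(PubsubMessage message) throws IOException {
        String json = message.getData().toString(StandardCharsets.ISO_8859_1);
        return mapper.readValue(json, MyPayload.class);
    }
}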
I have written a Spring Cloud Stream application where producers publish messages to the designated Kafka topics. My question is: how can I add a producer callback to receive an ack/confirmation that the message has been successfully published to the topic, like producer.send(record, new Callback() { ... }) in Spring Kafka (keeping the producer asynchronous)? Below is my code:
private final Sinks.Many<Message<?>> responseProcessor = Sinks.many().multicast().onBackpressureBuffer();

@Bean
public Supplier<Flux<Message<?>>> event() {
    return responseProcessor::asFlux;
}

public Message<?> publishEvent(String status) {
    try {
        String key = ...;
        Message<?> response = MessageBuilder.withPayload(payload)
                .setHeader(KafkaHeaders.MESSAGE_KEY, key)
                .build();
        responseProcessor.tryEmitNext(response);
    }
How can I make sure that tryEmitNext has successfully written to the topic?
Is implementing ProducerListener a possible solution? I couldn't find a concrete solution or documentation for this in Spring Cloud Stream.
UPDATE
I have implemented the below now; it seems to work as expected.
@Component
public class MyProducerListener<K, V> implements ProducerListener<K, V> {

    @Override
    public void onSuccess(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata) {
        // Do nothing on onSuccess
    }

    @Override
    public void onError(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata, Exception exception) {
        log.error("Producer exception occurred while publishing message : {}, exception : {}", producerRecord, exception);
    }
}
@Bean
ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> customizer(MyProducerListener pl) {
    return (handler, destinationName) -> handler.getKafkaTemplate().setProducerListener(pl);
}
See the Kafka Producer Properties.
recordMetadataChannel
The bean name of a MessageChannel to which successful send results should be sent; the bean must exist in the application context. The message sent to the channel is the sent message (after conversion, if any) with an additional header KafkaHeaders.RECORD_METADATA. The header contains a RecordMetadata object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
Failed sends go to the producer error channel (if configured); see Error Channels. Default: null
You can add a @ServiceActivator to consume from this channel asynchronously.
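A minimal sketch of such a consumer, assuming the output binding is configured with a recordMetadataChannel pointing to a MessageChannel bean named sendResults (the channel name and the class below are illustrative):

import org.apache.kafka.clients.producer.RecordMetadata;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;

@Component
public class SendResultListener {

    private static final Logger log = LoggerFactory.getLogger(SendResultListener.class);

    // Receives each successfully sent message, enriched with the RECORD_METADATA header.
    @ServiceActivator(inputChannel = "sendResults")
    public void handle(Message<?> sendResultMsg) {
        RecordMetadata meta = sendResultMsg.getHeaders()
                .get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
        log.info("Record written to partition {} at offset {}", meta.partition(), meta.offset());
    }
}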
I am using Spring Boot 2.3.1 and want to publish records that could not be deserialized using the DeadLetterPublishingRecoverer.
Everything looks fine, except that the original payload isn't written to the DLT topic. Instead I see it Base64 encoded.
In a different post I read that this is caused by the JsonSerializer used in the KafkaTemplate, so I tried using a different template. But now I get a SerializationException:
org.apache.kafka.common.errors.SerializationException: Can't convert value of class [B to class org.apache.kafka.common.serialization.BytesSerializer specified in value.serializer
A similar exception occurs when using the StringSerializer.
My code looks like this:
@Autowired
private KafkaProperties kafkaProperties;

private ProducerFactory<String, String> pf() {
    return new DefaultKafkaProducerFactory<>(kafkaProperties.buildProducerProperties());
}

private KafkaTemplate<String, String> stringTemplate() {
    return new KafkaTemplate<>(pf(), Collections.singletonMap(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class));
}

@Bean
public SeekToCurrentErrorHandler errorHandler() {
    SeekToCurrentErrorHandler eh = new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(stringTemplate()));
    eh.setLogLevel(Level.WARN);
    return eh;
}
Found it just 5 minutes later.
I had to use the ByteArraySerializer instead.
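For completeness, a sketch of the corrected configuration; DltConfig is an illustrative class name, and it simply mirrors the snippet above with the value serializer swapped to ByteArraySerializer, since the failed record's value is still a raw byte[] when it reaches the recoverer.

import java.util.Collections;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;

@Configuration
public class DltConfig {

    @Autowired
    private KafkaProperties kafkaProperties;

    // The DLT template publishes the original record value as raw bytes.
    private KafkaTemplate<String, byte[]> bytesTemplate() {
        ProducerFactory<String, byte[]> pf =
                new DefaultKafkaProducerFactory<>(kafkaProperties.buildProducerProperties());
        return new KafkaTemplate<>(pf, Collections.singletonMap(
                ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class));
    }

    @Bean
    public SeekToCurrentErrorHandler errorHandler() {
        return new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(bytesTemplate()));
    }
}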
So I am trying to use StreamBridge to dynamically send messages to different topics. I am successful in doing so if my output is a Message<String>, but not a Message<GenericRecord>.
Code example:
@StreamListener(Sink.INPUT)
public void process(@Payload GenericRecord messageValue,
                    @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) GenericRecord messageKey,
                    @Header("Type") String type) {
    log.info("Processing Event --> " + messageValue);
    // Code...
    // convert to Message<GenericRecord>
    Message<GenericRecord> message = ...
    streamBridge.send(type, message);
    log.info("Processed Event --> " + messageValue);
}
The error I get is Caused by: org.springframework.messaging.converter.MessageConversionException: Could not write JSON: Not a map, which I am guessing is because StreamBridge uses acceptedOutputTypes = application/json:
2020-06-28 04:42:55.670 INFO 54347 --- [container-0-C-1] o.s.c.f.c.c.SimpleFunctionRegistry : Looking up function 'streamBridge' with acceptedOutputTypes: [application/json]
I tried modifying the accepted output type to be Avro by setting the following in my properties, but it did not work.
spring.cloud.stream.function.definition=streamBridge
spring.kafka.producer.key-serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
spring.kafka.producer.value-serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
spring.cloud.stream.bindings.streamBridge-out-0.content-type=application/*+avro
spring.cloud.stream.bindings.streamBridge-out-0.producer.use-native-encoding=true
Any ideas on how to configure StreamBridge to use Avro?
Edit: I also tried streamBridge.send(type, message, MimeType.valueOf("application/*+avro")), but that also resulted in a conversion error.
I could not get StreamBridge to work dynamically so I switched to using Function:
@Bean
public Function<Message<GenericRecord>, Message<GenericRecord>> process() {
    return message -> {
        // Code...
        String topic = message.getHeaders().get("type", String.class);
        // convert to Message<GenericRecord>
        Message<GenericRecord> outgoingMessage = MessageBuilder...
                .setHeader("spring.cloud.stream.sendto.destination", topic)
                .build();
        return outgoingMessage;
    };
}
Properties file is:
spring.cloud.function.definition=process
spring.cloud.stream.bindings.process-in-0.destination=${consumer_topic}
spring.cloud.stream.bindings.process-in-0.group=${spring.application.name}
spring.cloud.stream.bindings.process-out-0.content-type=application/*+avro
spring.cloud.stream.bindings.process-out-0.producer.use-native-encoding=true
Edit: StreamBridge has since been fixed to support this: https://github.com/spring-cloud/spring-cloud-stream/issues/2007
You need to set the useNativeEncoding producer property to use a custom serializer.
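For illustration, a minimal sketch of the sending side once native encoding is enabled on the StreamBridge output binding; the binding property and Avro serializer settings are the ones from the question's configuration, and the surrounding class is illustrative.

import org.apache.avro.generic.GenericRecord;
import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Component;

@Component
public class AvroForwarder {

    private final StreamBridge streamBridge;

    public AvroForwarder(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    // With use-native-encoding=true on the output binding, Spring Cloud Stream skips its own
    // message conversion and hands the GenericRecord straight to the configured KafkaAvroSerializer.
    public void forward(String topic, GenericRecord record) {
        Message<GenericRecord> outgoing = MessageBuilder.withPayload(record).build();
        streamBridge.send(topic, outgoing);
    }
}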
I am using Spring Boot with Spring AMQP and I want to use the RPC pattern with the synchronous sendAndReceive method in the producer. My configuration assumes one exchange with two distinct bindings (one for each operation on the same resource). I want to send two messages with two different routing keys and receive the responses on distinct reply-to queues.
The problem is that, as far as I know, sendAndReceive will wait for the reply on a queue named "<exchange name>.replies", so both replies will be sent to the products.replies queue (at least that is my understanding).
My publisher config:
@Bean
public DirectExchange productsExchange() {
    return new DirectExchange("products");
}

@Bean
public OrderService orderService() {
    return new MqOrderService();
}

@Bean
public RabbitTemplate rabbitTemplate(final ConnectionFactory connectionFactory) {
    final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
    return rabbitTemplate;
}

@Bean
public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
    return new Jackson2JsonMessageConverter();
}
and the 2 senders:
...
final Message response = template.sendAndReceive(productsExchange.getName(), "products.get", message);
...
final Message response = template.sendAndReceive(productsExchange.getName(), "products.stock.update", message);
...
consumer config:
@Bean
public Queue getProductQueue() {
    return new Queue("getProductBySku");
}

@Bean
public Queue updateStockQueue() {
    return new Queue("updateProductStock");
}

@Bean
public DirectExchange exchange() {
    return new DirectExchange("products");
}

@Bean
public Binding getProductBinding(DirectExchange exchange) {
    return BindingBuilder.bind(getProductQueue())
            .to(exchange)
            .with("products.get");
}

@Bean
public Binding modifyStockBinding(DirectExchange exchange) {
    return BindingBuilder.bind(updateStockQueue())
            .to(exchange)
            .with("products.stock.update");
}
and @RabbitListener methods with the following signatures:

@RabbitListener(queues = "getProductBySku")
public Message getProduct(GetProductResource getProductResource) {...}

@RabbitListener(queues = "updateProductStock")
public Message updateStock(UpdateStockResource updateStockResource) {...}
I noticed that the second sender receives 2 responses, one of which is of invalid type (from first receiver). Is there any way in which I can make these connections distinct? Or is using separate exchange for each operation the only reasonable solution?
as far as I know, sendAndReceive will wait for the reply on a queue named "<exchange name>.replies"
Where did you get that idea?
Depending on which version you are using, either a temporary reply queue will be created for each request or RabbitMQ's "direct reply-to" mechanism is used, which again means each request is replied to on a dedicated pseudo queue called amq.rabbitmq.reply-to.
I don't see any way for one producer to get another's reply; even if you use an explicit reply container (which is generally not necessary any more), the template will correlate the replies to the requests.
Try enabling DEBUG logging to see if it provides any hints.
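To make the correlation point concrete, here is a small illustrative sketch of the two RPC calls; the Product and StockUpdateResult types and the resource constructors are hypothetical. Each call blocks until its own, correlated reply arrives, so the two operations cannot pick up each other's responses.

// Each call uses RabbitMQ's direct reply-to pseudo queue (amq.rabbitmq.reply-to) under the
// covers; the template correlates each reply to its request, so no shared replies queue is involved.
Product product = (Product) template.convertSendAndReceive(
        productsExchange.getName(), "products.get", new GetProductResource("some-sku"));

StockUpdateResult result = (StockUpdateResult) template.convertSendAndReceive(
        productsExchange.getName(), "products.stock.update", new UpdateStockResource("some-sku", 5));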
I'm using Spring Cloud Stream and RabbitMQ to exchange messages between different microservices.
That's my setup to publish a message.
public interface OutputChannels {

    static final String OUTPUT_CHANNEL = "outputChannel";

    @Output
    MessageChannel outputChannel();
}
@EnableBinding(OutputChannels.class)
@Log4j
public class OutputProducer {

    @Autowired
    private OutputChannels outputChannels;

    public void createMessage(MyContent myContent) {
        Message<MyContent> message = MessageBuilder
                .withPayload(myContent)
                .build();
        outputChannels.outputChannel().send(message);
        log.info("Sent message: " + message.getHeaders().getId() + myContent);
    }
}
And the setup to receive the message
public interface InputChannels {

    String INPUT_CHANNEL = "inputChannel";

    @Input
    SubscribableChannel inputChannel();
}
@EnableBinding(InputChannels.class)
@Log
public class InputConsumer {

    @StreamListener(InputChannels.INPUT_CHANNEL)
    public void receive(Message<MyContent> message) {
        MyContent myContent = message.getPayload();
        log.info("Received message: " + message.getHeaders().getId() + ", " + myContent);
    }
}
I am able to successfully exchange messages with this setup. I would expect, that the IDs of the sent message and the received message are equal. But they are always different UUIDs.
Is there a way that the message keeps the same ID all the way from the producer, through the RabbitMq, to the consumer?
Spring Messaging messages are immutable; they get a new ID each time they are mutated.
You can use a custom header or IntegrationMessageHeaderAccessor.CORRELATION_ID to convey a constant value; in most use cases, the correlation id header is set by the application to the ID header at the start of a message's journey.
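For illustration, a minimal sketch based on the producer and consumer above; the header handling is an assumption, and IntegrationMessageHeaderAccessor.CORRELATION_ID resolves to the "correlationId" header (imports needed: java.util.UUID, org.springframework.integration.IntegrationMessageHeaderAccessor).

// Producer side: set the correlation id once; unlike the ID header, custom headers
// travel unchanged from the producer through RabbitMQ to the consumer.
public void createMessage(MyContent myContent) {
    Message<MyContent> message = MessageBuilder
            .withPayload(myContent)
            .setHeader(IntegrationMessageHeaderAccessor.CORRELATION_ID, UUID.randomUUID().toString())
            .build();
    outputChannels.outputChannel().send(message);
    log.info("Sent message with correlation id: "
            + message.getHeaders().get(IntegrationMessageHeaderAccessor.CORRELATION_ID));
}

// Consumer side: the ID header will differ, but the correlation id matches the one sent.
@StreamListener(InputChannels.INPUT_CHANNEL)
public void receive(Message<MyContent> message) {
    Object correlationId = message.getHeaders().get(IntegrationMessageHeaderAccessor.CORRELATION_ID);
    log.info("Received message with correlation id: " + correlationId);
}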