Publishing JSON deserialization errors using DeadLetterPublishingRecoverer doesn't publish the original payload - spring

I am using Spring Boot 2.3.1 and want to publish records that could not be deserialized using the DeadLetterPublishingRecoverer.
Everything looks fine, except that the original payload isn't written to the DLT topic; instead, I see it Base64-encoded.
In a different post I read that this is caused by the JsonSerializer used in the KafkaTemplate, so I tried using a different template. But now I get a SerializationException:
org.apache.kafka.common.errors.SerializationException: Can't convert value of class [B to class org.apache.kafka.common.serialization.BytesSerializer specified in value.serializer
A similar exception occurs when using the StringSerializer.
My code looks like this:
@Autowired
private KafkaProperties kafkaProperties;

private ProducerFactory<String, String> pf() {
    return new DefaultKafkaProducerFactory<>(kafkaProperties.buildProducerProperties());
}

private KafkaTemplate<String, String> stringTemplate() {
    // Override only the value serializer for the DLT producer
    return new KafkaTemplate<>(pf(), Collections.singletonMap(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class));
}

@Bean
public SeekToCurrentErrorHandler errorHandler() {
    SeekToCurrentErrorHandler eh = new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(stringTemplate()));
    eh.setLogLevel(Level.WARN);
    return eh;
}

Found it just 5 minutes later: I had to use the ByteArraySerializer instead.
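A minimal sketch of the corrected template, assuming the same kafkaProperties field as in the question. The failed record reaches the recoverer as a raw byte[], so the DLT producer has to serialize values with ByteArraySerializer:

private ProducerFactory<String, byte[]> bytesPf() {
    return new DefaultKafkaProducerFactory<>(kafkaProperties.buildProducerProperties());
}

private KafkaTemplate<String, byte[]> bytesTemplate() {
    // byte[] values pass through unchanged, so the original payload lands
    // on the DLT instead of a Base64-encoded re-serialization
    return new KafkaTemplate<>(bytesPf(), Collections.singletonMap(
            ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class));
}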

Related

ReadOnlyKeyValueStore from a KTable using spring-kafka

I am migrating a Kafka Streams implementation that uses the plain Kafka APIs to spring-kafka, as it's incorporated in a Spring Boot application.
Everything works fine: the stream, GlobalKTable, and branching I have all work perfectly, but I am having a hard time incorporating a ReadOnlyKeyValueStore. Based on the spring-kafka documentation here: https://docs.spring.io/spring-kafka/docs/2.6.10/reference/html/#streams-spring
It says:
If you need to perform some KafkaStreams operations directly, you can access that internal KafkaStreams instance by using StreamsBuilderFactoryBean.getKafkaStreams(). You can autowire the StreamsBuilderFactoryBean bean by type, but you should be sure to use the full type in the bean definition.
Based on that, I tried to incorporate it into my example, as in the following fragments:
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
public KafkaStreamsConfiguration defaultKafkaStreamsConfig() {
    Map<String, Object> props = defaultStreamsConfigs();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "quote-stream");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, SpecificAvroSerde.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "stock-quotes-stream-group");
    return new KafkaStreamsConfiguration(props);
}

@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_BUILDER_BEAN_NAME)
public StreamsBuilderFactoryBean defaultKafkaStreamsBuilder(KafkaStreamsConfiguration defaultKafkaStreamsConfig) {
    return new StreamsBuilderFactoryBean(defaultKafkaStreamsConfig);
}
...
final GlobalKTable<String, LeveragePrice> leverageBySymbolGKTable = streamsBuilder
        .globalTable(KafkaConfiguration.LEVERAGE_PRICE_TOPIC,
                Materialized.<String, LeveragePrice, KeyValueStore<Bytes, byte[]>>as("leverage-by-symbol-table")
                        .withKeySerde(Serdes.String())
                        .withValueSerde(leveragePriceSerde));

leveragePriceView = myKStreamsBuilder.getKafkaStreams().store("leverage-by-symbol-table", QueryableStoreTypes.keyValueStore());
But adding the StreamsBuilderFactoryBean definition (which seems to be needed to get a reference to KafkaStreams) causes an error:
The bean 'defaultKafkaStreamsBuilder', defined in class path resource [com/resona/springkafkastream/repository/KafkaConfiguration.class], could not be registered. A bean with that name has already been defined in class path resource [org/springframework/kafka/annotation/KafkaStreamsDefaultConfiguration.class] and overriding is disabled.
The issue is that I don't want to control the lifecycle of the stream (that's what I get with the plain Kafka APIs); I want Spring to manage it, so I would like a reference to the default managed instance, but whenever I try to expose the bean I get the error above. Any ideas on the correct approach to this using spring-kafka?
P.S. - I am not interested in solutions using spring-cloud-stream; I am looking for spring-kafka implementations.
You don't need to define any new beans; something like this should work...
spring.application.name=quote-stream
spring.kafka.streams.properties.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.kafka.streams.properties.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
@SpringBootApplication
@EnableKafkaStreams
public class So69669791Application {

    public static void main(String[] args) {
        SpringApplication.run(So69669791Application.class, args);
    }

    @Bean
    GlobalKTable<String, String> leverageBySymbolGKTable(StreamsBuilder sb) {
        return sb.globalTable("gkTopic",
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("leverage-by-symbol-table"));
    }

    private ReadOnlyKeyValueStore<String, String> leveragePriceView;

    @Bean
    StreamsBuilderFactoryBean.Listener afterStart(StreamsBuilderFactoryBean sbfb,
            GlobalKTable<String, String> leverageBySymbolGKTable) {

        StreamsBuilderFactoryBean.Listener listener = new StreamsBuilderFactoryBean.Listener() {

            @Override
            public void streamsAdded(String id, KafkaStreams streams) {
                leveragePriceView = streams.store("leverage-by-symbol-table", QueryableStoreTypes.keyValueStore());
            }

        };
        sbfb.addListener(listener);
        return listener;
    }

    @Bean
    KStream<String, String> stream(StreamsBuilder builder) {
        KStream<String, String> stream = builder.stream("someTopic");
        stream.to("otherTopic");
        return stream;
    }

}
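As a usage sketch (both snippets below are hypothetical illustrations, not part of the original answer): once the streams instance is RUNNING, the listener-populated view can be read directly; and on newer Kafka clients (2.5+), where the two-argument store(name, type) call is deprecated, the streamsAdded(...) body would use StoreQueryParameters instead:

// Reading from the store once it has been assigned by the listener
String value = leveragePriceView.get("someSymbol");

// Inside streamsAdded(...) on Kafka clients 2.5+:
leveragePriceView = streams.store(
        StoreQueryParameters.fromNameAndType("leverage-by-symbol-table",
                QueryableStoreTypes.keyValueStore()));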

Performing aggregation of records and launching a Spring Cloud Task in a single processor in Spring Cloud Stream

I am trying to perform the following actions:
Aggregating messages
Launching a Spring Cloud Task
but I am not able to pass the aggregated message to the method that launches the task. Below is the piece of code:
@Autowired
private TaskProcessorProperties processorProperties;

@Autowired
Processor processor;

@Autowired
private AppConfiguration appConfiguration;

@Transformer(inputChannel = MyProcessor.intermidiate, outputChannel = Processor.OUTPUT)
public Object setupRequest(String message) {
    Map<String, String> properties = new HashMap<>();
    if (StringUtils.hasText(this.processorProperties.getDataSourceUrl())) {
        properties.put("spring_datasource_url", this.processorProperties.getDataSourceUrl());
    }
    if (StringUtils.hasText(this.processorProperties.getDataSourceDriverClassName())) {
        properties.put("spring_datasource_driverClassName", this.processorProperties.getDataSourceDriverClassName());
    }
    if (StringUtils.hasText(this.processorProperties.getDataSourceUserName())) {
        properties.put("spring_datasource_username", this.processorProperties.getDataSourceUserName());
    }
    if (StringUtils.hasText(this.processorProperties.getDataSourcePassword())) {
        properties.put("spring_datasource_password", this.processorProperties.getDataSourcePassword());
    }
    properties.put("payload", message);
    TaskLaunchRequest request = new TaskLaunchRequest(
            this.processorProperties.getUri(), null, properties, null,
            this.processorProperties.getApplicationName());
    System.out.println("inside task launcher **************************");
    System.out.println(request.toString() + "**************************");
    return new GenericMessage<>(request);
}

@ServiceActivator(inputChannel = Processor.INPUT, outputChannel = MyProcessor.intermidiate)
@Bean
public MessageHandler aggregator() {
    AggregatingMessageHandler aggregatingMessageHandler =
            new AggregatingMessageHandler(new DefaultAggregatingMessageGroupProcessor(),
                    new SimpleMessageStore(10));
    AggregatorFactoryBean aggregatorFactoryBean = new AggregatorFactoryBean();
    //aggregatorFactoryBean.setMessageStore();
    //aggregatingMessageHandler.setOutputChannel(processor.output());
    //aggregatorFactoryBean.setDiscardChannel(processor.output());
    aggregatingMessageHandler.setSendPartialResultOnExpiry(true);
    aggregatingMessageHandler.setSendTimeout(1000L);
    aggregatingMessageHandler.setCorrelationStrategy(new ExpressionEvaluatingCorrelationStrategy("'FOO'"));
    aggregatingMessageHandler.setReleaseStrategy(new MessageCountReleaseStrategy(3)); // ExpressionEvaluatingReleaseStrategy("size() == 5")
    aggregatingMessageHandler.setExpireGroupsUponCompletion(true);
    aggregatingMessageHandler.setGroupTimeoutExpression(new ValueExpression<>(3000L)); // size() ge 2 ? 5000 : -1
    aggregatingMessageHandler.setExpireGroupsUponTimeout(true);
    return aggregatingMessageHandler;
}
To pass the message between the aggregator and the task launcher method (setupRequest(String message)), I am using a channel, MyProcessor.intermidiate, defined as below:
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.stereotype.Indexed;

public interface MyProcessor {

    String intermidiate = "intermidiate";

    @Output("intermidiate")
    MessageChannel intermidiate();

}
The application.properties used is below:
aggregator.message-store-type=persistentMessageStore
spring.cloud.stream.bindings.input.destination=output
spring.cloud.stream.bindings.output.destination=input
It's not working with the above-mentioned approach.
If I change the channel name in this class from my own MyProcessor.intermidiate to Processor.INPUT or Processor.OUTPUT, then one of the two parts works (depending on which Processor.* channel I use).
I want to aggregate the messages first and then launch the task on the aggregated messages in the processor, which is not happening.
See here:
public Object setupRequest(String message) {
So, you expect some string as a request payload.
Your AggregatorFactoryBean uses a DefaultAggregatingMessageGroupProcessor, which does exactly this:
List<Object> payloads = new ArrayList<Object>(messages.size());
for (Message<?> message : messages) {
    payloads.add(message.getPayload());
}
return payloads;
So the payload it produces is definitely not a String.
It is strange that you don't show what exception happens with your configuration, but I assume you need to change the setupRequest() signature to expect a List of payloads, or provide a custom MessageGroupProcessor to build that String from the group of messages you have aggregated.
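A minimal sketch of the first option, assuming the aggregated payloads are strings and joining them with a comma is acceptable for the task's payload property:

@Transformer(inputChannel = MyProcessor.intermidiate, outputChannel = Processor.OUTPUT)
public Object setupRequest(List<String> messages) {
    // The aggregator releases a List of the individual payloads;
    // combine them into whatever single payload the task expects.
    String message = String.join(",", messages);
    Map<String, String> properties = new HashMap<>();
    properties.put("payload", message);
    TaskLaunchRequest request = new TaskLaunchRequest(
            this.processorProperties.getUri(), null, properties, null,
            this.processorProperties.getApplicationName());
    return new GenericMessage<>(request);
}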

How to maintain case during serialization using ObjectMapper?

I have a listener that listens to a queue. The message from the queue is JSON text. I need to process it and then save it in a MongoDB database. I use a DTO for the incoming JSON. The problem is that the data can only be saved in lower case, because of the DTO field names, while the incoming data is upper case. How can I do this gracefully using Jackson/Spring?
I tried @JsonGetter and @JsonSetter in the DTO, but that didn't work; it still saves the data as lower case.
Mini version of my code:
DTO:
public String getMessage() {
    return message;
}

@JsonSetter("MESSAGE")
public void setMessage(String message) {
    this.message = message;
}
Datasaver:
mongoOperations.save(DTO,collectionname);
Document in database:
_id: ObjectId("5da831183852090ddc7075fb")
message: "hi"
I want the data in mongodb as:
_id: ObjectId("5da831183852090ddc7075fb")
MESSAGE: "hi"
The incoming data has the key MESSAGE, so I would like to store it the same way. I do not want the DTO field names themselves to be upper case.
As per @MichaelZiober in the comment above, none of the Jackson-related annotations helped in my case; Spring Data's @Field annotation worked.
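A minimal sketch of that accepted approach, assuming Spring Data MongoDB's @Field mapping annotation on a hypothetical MessageDto:

import org.springframework.data.mongodb.core.mapping.Field;

public class MessageDto {

    // Persisted under the upper-case key, while the Java field stays lower case
    @Field("MESSAGE")
    private String message;

    // getters and setters ...
}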
It should work with @JsonProperty("MESSAGE").
If not (for some reason), you could use a custom serializer for this field:
class CustomStringSerializer extends JsonSerializer<String> {

    @Override
    public void serialize(String value, JsonGenerator jgen, SerializerProvider provider) throws IOException {
        jgen.writeStartObject();
        jgen.writeObjectField("MESSAGE", value);
        jgen.writeEndObject();
    }
}
and initialize the mapper this way:
ObjectMapper objectMapper = new ObjectMapper();
SimpleModule mod = new SimpleModule("message");
mod.addSerializer(String.class, new CustomStringSerializer());
objectMapper.registerModule(mod);
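As a quick usage check (note the serializer is registered for every String this mapper writes, which may be broader than intended):

String json = objectMapper.writeValueAsString("hi");
// json is now {"MESSAGE":"hi"}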

Validating Spring Kafka payloads

I am trying to set up a service that has both a REST (POST) endpoint and a Kafka endpoint, both of which should take a JSON representation of the request object (let's call it Foo). I would want to make sure that the Foo object is valid (via JSR-303 or whatever). So Foo might look like:
public class Foo {

    @Max(10)
    private int bar;

    // Getter and setter boilerplate
}
Setting up the REST endpoint is easy:
@PostMapping(value = "/", produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<String> restEndpoint(@Valid @RequestBody Foo foo) {
    // Do stuff here
}
and if I POST { "bar": 9 } it processes the request, but if I POST { "bar": 99 } I get a BAD REQUEST. All good so far!
The Kafka endpoint is easy to create too (along with adding a StringJsonMessageConverter() to my KafkaListenerContainerFactory so that I get JSON-to-object conversion):
@KafkaListener(topics = "fooTopic")
public void kafkaEndpoint(@Valid @Payload Foo foo) {
    // I shouldn't get here with an invalid object!!!
    logger.debug("Successfully processed the object" + foo);

    // But just to make sure, let's see if hand-validating it works
    Validator validator = localValidatorFactoryBean.getValidator();
    Set<ConstraintViolation<Foo>> errors = validator.validate(foo);
    if (errors.size() > 0) {
        logger.debug("But there were validation errors!" + errors);
    }
}
But no matter what I try, I can still pass invalid requests in, and they are processed without error.
I've tried both @Valid and @Validated. I've tried adding a MethodValidationPostProcessor bean. I've tried adding a Validator to the KafkaListenerEndpointRegistrar (à la the @EnableKafka javadoc):
@Configuration
public class MiscellaneousConfiguration implements KafkaListenerConfigurer {

    private Logger logger = LoggerFactory.getLogger(this.getClass());

    @Autowired
    LocalValidatorFactoryBean validatorFactory;

    @Override
    public void configureKafkaListeners(KafkaListenerEndpointRegistrar registrar) {
        logger.debug("Configuring " + registrar);
        registrar.setMessageHandlerMethodFactory(kafkaHandlerMethodFactory());
    }

    @Bean
    public MessageHandlerMethodFactory kafkaHandlerMethodFactory() {
        DefaultMessageHandlerMethodFactory factory = new DefaultMessageHandlerMethodFactory();
        factory.setValidator(validatorFactory);
        return factory;
    }
}
I've now spent a few days on this, and I'm running out of ideas. Is this even possible (without writing validation into every one of my Kafka endpoints)?
Sorry for the delay; we are at SpringOne Platform this week.
The infrastructure currently does not pass a Validator into the payload argument resolver. Please open an issue on GitHub.
A Spring Kafka listener by default does not scan for @Valid on classes that are not REST controllers. For more details, please refer to this answer:
https://stackoverflow.com/a/71859991/13898185
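A minimal sketch of the approach from that answer, assuming a reasonably recent spring-kafka (2.2+), where the registrar accepts a validator directly:

@Configuration
public class KafkaValidationConfig implements KafkaListenerConfigurer {

    @Autowired
    private LocalValidatorFactoryBean validator;

    @Override
    public void configureKafkaListeners(KafkaListenerEndpointRegistrar registrar) {
        // Applies JSR-303 validation to @Payload parameters of @KafkaListener methods
        registrar.setValidator(this.validator);
    }
}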

Error accessing Spring session data stored in Redis

In my Spring project with REST controllers, I want to store session information in Redis.
In my application.properties I have defined the following:
spring.session.store-type=redis
spring.session.redis.namespace=rdrestcore
com.xyz.redis.host=192.168.201.46
com.xyz.redis.db=0
com.xyz.redis.port=6379
com.xyz.redis.pool.min-idle=5
I have also enabled the Redis HTTP session with:
@Configuration
@EnableRedisHttpSession
public class SessionConfig extends AbstractHttpSessionApplicationInitializer {
}
Finally, I have a Redis connection factory like this:
@Configuration
@EnableRedisRepositories
public class RdRedisConnectionFactory {

    @Autowired
    private Environment env;

    @Value("${com.xyz.redis.host}")
    private String redisHost;

    @Value("${com.xyz.redis.db}")
    private Integer redisDb;

    @Value("${com.xyz.redis.port}")
    private Integer redisPort;

    @Value("${com.xyz.redis.pool.min-idle}")
    private Integer redisPoolMinIdle;

    @Bean
    JedisPoolConfig jedisPoolConfig() {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        if (redisPoolMinIdle != null) poolConfig.setMinIdle(redisPoolMinIdle);
        return poolConfig;
    }

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        JedisConnectionFactory jedisConFactory = new JedisConnectionFactory();
        if (redisHost != null) jedisConFactory.setHostName(redisHost);
        if (redisPort != null) jedisConFactory.setPort(redisPort);
        if (redisDb != null) jedisConFactory.setDatabase(redisDb);
        jedisConFactory.setPoolConfig(jedisPoolConfig());
        return jedisConFactory;
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate() {
        final RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(jedisConnectionFactory());
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(new GenericJackson2JsonRedisSerializer());
        return template;
    }
}
With this configuration, the session information gets stored in Redis, but it is serialized strangely: the keys are readable, but the stored values are not (I inspect them with a program called "Redis Desktop Manager"). For example, for a new session I get a hash with the key:
*spring:session:sessions:c1110241-0aed-4d40-9861-43553b3526cb*
and the keys this hash contains are: maxInactiveInterval, lastAccessedTime, creationTime, sessionAttr:SPRING_SECURITY_CONTEXT,
but their values are all encoded like something similar to this:
\xAC\xED\x00\x05sr\x00\x0Ejava.lang.Long;\x8B\xE4\x90\xCC\x8F#\xDF\x02\x00\x01J\x00\x05valuexr\x00\x10java.lang.Number\x86\xAC\x95\x1D\x0B\x94\xE0\x8B\x02\x00\x00xp\x00\x00\x01b$G\x88*
(for the creationTime key)
And if I try to access this information from code with the redisTemplate, it raises an exception like this one:
org.springframework.data.redis.serializer.SerializationException: Cannot deserialize; nested exception is
org.springframework.core.serializer.support.SerializationFailedException:
Failed to deserialize payload. Is the byte array a result of
corresponding serialization for DefaultDeserializer?; nested exception
is java.io.StreamCorruptedException: invalid stream header: 73657373
at org.springframework.data.redis.serializer.JdkSerializationRedisSerializer.deserialize(JdkSerializationRedisSerializer.java:82)
I think it is some kind of problem with the serialization/deserialization of the Spring Session information, but I don't know what else to do to be able to control that.
Does anyone know what I'm doing wrong?
Thank you
You're on the right track; your problem is indeed serialization. Try this configuration (configure your template with these serializers only):
template.setHashValueSerializer(new JdkSerializationRedisSerializer());
template.setHashKeySerializer(new StringRedisSerializer());
template.setKeySerializer(new StringRedisSerializer());
template.setDefaultSerializer(new JdkSerializationRedisSerializer());
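Put together, a minimal sketch of the adjusted template bean, assuming the same jedisConnectionFactory() as in the question; Spring Session writes JDK-serialized values, so the value serializers must match it:

@Bean
public RedisTemplate<String, Object> redisTemplate() {
    final RedisTemplate<String, Object> template = new RedisTemplate<>();
    template.setConnectionFactory(jedisConnectionFactory());
    // Keys stay human-readable; values use JDK serialization to match
    // what Spring Session itself writes to Redis
    template.setKeySerializer(new StringRedisSerializer());
    template.setHashKeySerializer(new StringRedisSerializer());
    template.setHashValueSerializer(new JdkSerializationRedisSerializer());
    template.setDefaultSerializer(new JdkSerializationRedisSerializer());
    return template;
}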
