AggregatingReplyingKafkaTemplate releaseStrategy question

There seems to be an issue when I use AggregatingReplyingKafkaTemplate with template.setReturnPartialOnTimeout(true): it throws a timeout exception even when partial results are available from the consumers.
In the example below, I have 3 consumers replying to the request topic and I've set the reply timeout to 10 seconds. I've explicitly delayed the response of Consumer 3 to 11 seconds, so I expect the responses from Consumer 1 and Consumer 2 back as a partial result. Instead, I am getting a KafkaReplyTimeoutException. Appreciate your inputs. Thanks.
My code is based on the unit test below.
[ReplyingKafkaTemplateTests][1]
I've provided the actual code below:
@RestController
public class SumController {
@Value("${kafka.bootstrap-servers}")
private String bootstrapServers;
public static final String D_REPLY = "dReply";
public static final String D_REQUEST = "dRequest";
@ResponseBody
@PostMapping(value="/sum")
public String sum(@RequestParam("message") String message) throws InterruptedException, ExecutionException {
AggregatingReplyingKafkaTemplate<Integer, String, String> template = aggregatingTemplate(
new TopicPartitionOffset(D_REPLY, 0), 3, new AtomicInteger());
String resultValue ="";
String currentValue ="";
try {
template.setDefaultReplyTimeout(Duration.ofSeconds(10));
template.setReturnPartialOnTimeout(true);
ProducerRecord<Integer, String> record = new ProducerRecord<>(D_REQUEST, null, null, null, message);
RequestReplyFuture<Integer, String, Collection<ConsumerRecord<Integer, String>>> future =
template.sendAndReceive(record);
future.getSendFuture().get(5, TimeUnit.SECONDS); // send ok
System.out.println("Send Completed Successfully");
ConsumerRecord<Integer, Collection<ConsumerRecord<Integer, String>>> consumerRecord = future.get(10, TimeUnit.SECONDS);
System.out.println("Consumer record size "+consumerRecord.value().size());
Iterator<ConsumerRecord<Integer, String>> iterator = consumerRecord.value().iterator();
while (iterator.hasNext()) {
currentValue = iterator.next().value();
System.out.println("response " + currentValue);
System.out.println("Record header " + consumerRecord.headers().toString());
resultValue = resultValue + currentValue + "\r\n";
}
} catch (Exception e) {
System.out.println("Error Message is "+e.getMessage());
}
return resultValue;
}
public AggregatingReplyingKafkaTemplate<Integer, String, String> aggregatingTemplate(
TopicPartitionOffset topic, int releaseSize, AtomicInteger releaseCount) {
//Create Container Properties
ContainerProperties containerProperties = new ContainerProperties(topic);
containerProperties.setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
//Set the consumer Config
//Create Consumer Factory with Consumer Config
DefaultKafkaConsumerFactory<Integer, Collection<ConsumerRecord<Integer, String>>> cf =
new DefaultKafkaConsumerFactory<>(consumerConfigs());
//Create Listener Container with Consumer Factory and Container Property
KafkaMessageListenerContainer<Integer, Collection<ConsumerRecord<Integer, String>>> container =
new KafkaMessageListenerContainer<>(cf, containerProperties);
// container.setBeanName(this.testName);
AggregatingReplyingKafkaTemplate<Integer, String, String> template =
new AggregatingReplyingKafkaTemplate<>(new DefaultKafkaProducerFactory<>(producerConfigs()), container,
(list, timeout) -> {
releaseCount.incrementAndGet();
return list.size() == releaseSize;
});
template.setSharedReplyTopic(true);
template.start();
return template;
}
public Map<String, Object> consumerConfigs() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,bootstrapServers);
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test_id");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
return props;
}
public Map<String, Object> producerConfigs() {
Map<String, Object> props = new HashMap<>();
// list of host:port pairs used for establishing the initial connections to the Kafka cluster
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
bootstrapServers);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
org.apache.kafka.common.serialization.StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringSerializer.class);
return props;
}
public ProducerFactory<Integer,String> producerFactory() {
return new DefaultKafkaProducerFactory<>(producerConfigs());
}
@KafkaListener(id = "def1", topics = { D_REQUEST}, groupId = "D_REQUEST1")
@SendTo // default REPLY_TOPIC header
public String dListener1(String in) throws InterruptedException {
return "First Consumer : "+ in.toUpperCase();
}
@KafkaListener(id = "def2", topics = { D_REQUEST}, groupId = "D_REQUEST2")
@SendTo // default REPLY_TOPIC header
public String dListener2(String in) throws InterruptedException {
return "Second Consumer : "+ in.toLowerCase();
}
@KafkaListener(id = "def3", topics = { D_REQUEST}, groupId = "D_REQUEST3")
@SendTo // default REPLY_TOPIC header
public String dListener3(String in) throws InterruptedException {
Thread.sleep(11000);
return "Third Consumer : "+ in;
}
}
[1]: https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/test/java/org/springframework/kafka/requestreply/ReplyingKafkaTemplateTests.java

template.setReturnPartialOnTimeout(true) simply means the template will consult the release strategy on timeout (with the timeout argument = true, to tell the strategy it's a timeout rather than a delivery call).
It must return true to release the partial result.
This is to allow you to look at (and possibly modify) the list to decide whether you want to release or discard.
Your strategy ignores the timeout parameter:
(list, timeout) -> {
releaseCount.incrementAndGet();
return list.size() == releaseSize;
});
You need something like return timeout ? true : list.size() == releaseSize; i.e., when the timeout flag is set, release whatever replies have arrived.
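A minimal sketch of the corrected construction (same releaseSize and releaseCount as in aggregatingTemplate() above; timeout || ... is equivalent to the ternary form):
AggregatingReplyingKafkaTemplate<Integer, String, String> template =
        new AggregatingReplyingKafkaTemplate<>(new DefaultKafkaProducerFactory<>(producerConfigs()), container,
                (list, timeout) -> {
                    releaseCount.incrementAndGet();
                    // On timeout, release whatever replies have arrived so far;
                    // otherwise wait until all expected replies are present.
                    return timeout || list.size() == releaseSize;
                });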


Handling exception from webservices called by OpenFeign

I have several microservices and I use OpenFeign to call them.
The entry point for the global application is named dispatcher-ws. Its role is to dispatch calls depending on the payload.
As input I have the following payload:
{
"operation": "signature",
"clientId": "abcdef",
...
"pdfDocument": "JVBERi0xLjMNCiXi48/TDQoNCjEg..."
}
I have a microservice named signature-ws that handles PDF signing. So far, so good. I implemented my client this way:
@FeignClient(name="signature-ws", decode404 = true, url = "http://localhost:8080/signature-ws/api")
public interface SignatureClient {
@PostMapping("/signature")
Map<String, Object> signDocument(RequestDto request) throws AppServiceException;
}
In my service layer, I make the call depending on the operation value:
@Service
public class RequestServiceImpl implements DispatchService {
private final RequestRepository requestRepository;
private final SignatureClient signatureClient;
private final Resilience4JCircuitBreakerFactory circuitBreakerFactory;
@Autowired
public RequestServiceImpl(RequestRepository requestRepository,
SignatureClient signatureClient,
Resilience4JCircuitBreakerFactory circuitBreakerFactory) {
this.requestRepository = requestRepository;
this.signatureClient = signatureClient;
this.circuitBreakerFactory = circuitBreakerFactory;
}
@Override
public RequestDto handleRequest(RequestDto request) {
RequestDto returnValue = new RequestDto();
// if not initialized, throws a null pointer exception...
returnValue.setPayloads(new ArrayList<>());
if (request.getOperation().equals("signature")) {
Resilience4JCircuitBreaker circuitBreaker = circuitBreakerFactory.create("signature");
Supplier<Map<String, Object>> signatureResponseSupplier =
() -> signatureClient.signDocument(request);
Map<String, Object> signatureResponse = circuitBreaker.run(
signatureResponseSupplier,
throwable -> handleException()
);
...
returnValue.getResponses().add(signatureResponse);
}
return returnValue;
}
...
private Map<String, Object> handleException() {
Map<String, Object> returnValue = new HashMap<>();
returnValue.put("Error", "Error rmessage ... ");
returnValue.put("status", "Failure");
return returnValue;
}
If I don't pass pdfDocument to the signature web service, I get an exception back:
{
"errorId": "Qe99DwntFrMPCAfuZfDQW1ucwNh5BK",
"status": "ERROR",
"operations": "signature",
"profile": "client123456",
"errorMessage": "PDF is missing",
"createdAt": 1647354022127
}
I would like to retrieve the exception response and pass its key/values to the map in the handleException method. At this stage it doesn't return anything and, worst of all, I return a 200 status.
I implemented a controller advice that manages the response to return. This class is identical in all my web services (I should think about creating a microservice for handling all exceptions...):
@ControllerAdvice(basePackages = { "com.company.app" })
public class AppExceptionsHandler {
private final RequestContext requestContext;
@Autowired
public AppExceptionsHandler(RequestContext requestContext) {
this.requestContext = requestContext;
}
@ExceptionHandler(value = {AppServiceException.class})
public ResponseEntity<Object> handleAppException(AppServiceException ex,
WebRequest request) {
// get the response body
DispatchDto response = requestContext.getResponse();
ErrorMessage errorMessage = ErrorMessage.builder()
.errorId(response.getId())
.status(RequestOperationStatus.ERROR.name())
.operations(response.getOperations())
.profile(response.getProfile())
.errorMessage(ex.getMessage())
.createdAt(new Date())
.build();
return new ResponseEntity<>(errorMessage, new HttpHeaders(), HttpStatus.INTERNAL_SERVER_ERROR);
}
}
What I expect is to return the same exception from my dispatcher microservice.
I found a trick to solve this issue.
First I surrounded my Feign request with a try/catch:
try {
...
Map<String, Object> facturxResponse =
facturXClient.createFacturX(dispatchDto);
...
} catch(FeignException e) {
System.out.println(e.getMessage());
throw new AppServiceException(e.getMessage());
}
I noted that e.getMessage() returns a string with this pattern:
[500 Internal Server Error] during [POST] to [http://localhost:8080/my-ws/api/ws]
[FacturXClient#createFacturX(DispatchDto)]: [{"errorId":"z3o1bE8SJrm8WGrxlpIIWe6TNf0NzR","status":"ERROR","operations":"facturx","profile":"client123456","errorMessage":"PDF is missing","createdAt":1647422337344}]
I throw this exception and intercept the response:
@ExceptionHandler(value = {AppServiceException.class})
public ResponseEntity<Object> handleUserServiceException(AppServiceException ex,
WebRequest request) throws JsonProcessingException {
String input = ex.getMessage();
String[] splitResponse = input.split(":", 4);
ObjectMapper mapper = new ObjectMapper().enable(SerializationFeature.INDENT_OUTPUT);
String response = splitResponse[3].trim().substring(1, splitResponse[3].trim().length() -1);
ErrorMessage errorMessage = mapper.readValue(response, ErrorMessage.class);
System.out.println(errorMessage.toString());
return new ResponseEntity<>(errorMessage, new HttpHeaders(), HttpStatus.INTERNAL_SERVER_ERROR);
}
I finally get the expected response:
{
"errorId": "z3o1bE8SJrm8WGrxlpIIWe6TNf0NzR",
"status": "ERROR",
"operations": "facturx",
"profile": "client123456",
"errorMessage": "PDF is missing",
"createdAt": "2022-03-16T09:18:57.344+00:00"
}
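As an alternative to parsing e.getMessage(), Feign lets you plug in an ErrorDecoder that reads the downstream error body directly. A rough sketch, assuming the error body maps onto the same ErrorMessage class with standard getters, and that AppServiceException can carry the message (neither is shown in the question):
import com.fasterxml.jackson.databind.ObjectMapper;
import feign.Response;
import feign.codec.ErrorDecoder;
import java.io.IOException;
import java.io.InputStream;

public class DownstreamErrorDecoder implements ErrorDecoder {

    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public Exception decode(String methodKey, Response response) {
        if (response.body() == null) {
            return new AppServiceException("Empty error body from " + methodKey);
        }
        try (InputStream body = response.body().asInputStream()) {
            // Deserialize the downstream JSON error directly instead of splitting the message string.
            ErrorMessage error = mapper.readValue(body, ErrorMessage.class);
            return new AppServiceException(error.getErrorMessage());
        } catch (IOException e) {
            return new AppServiceException("Unreadable error body from " + methodKey);
        }
    }
}
Registered as a @Bean (or in the Feign client configuration), it is picked up by Spring Cloud OpenFeign, so the service layer catches AppServiceException instead of FeignException.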

Spring integration TCP Server multiple connections of more than 5

I'm using the following versions of Spring Boot and Spring Integration:
spring.boot.version 2.3.4.RELEASE
spring-integration 5.3.2.RELEASE
My requirement is to create TCP client/server communication, and I'm using Spring Integration for it. The spike works fine for a single exchange between client and server, and also for exactly 5 concurrent client connections.
The moment I increase the concurrent client connections beyond 5 to any arbitrary number, it doesn't work: the TCP server accepts only 5 connections.
I have used the ThreadAffinityClientConnectionFactory mentioned by @Gary Russell in one of his earlier answers (for similar requirements), but it still doesn't work.
Below is the code I have at the moment.
@Slf4j
@Configuration
@EnableIntegration
@IntegrationComponentScan
public class SocketConfig {
@Value("${socket.host}")
private String clientSocketHost;
@Value("${socket.port}")
private Integer clientSocketPort;
@Bean
public TcpOutboundGateway tcpOutGate(AbstractClientConnectionFactory connectionFactory) {
TcpOutboundGateway gate = new TcpOutboundGateway();
//connectionFactory.setTaskExecutor(taskExecutor());
gate.setConnectionFactory(clientCF());
return gate;
}
@Bean
public TcpInboundGateway tcpInGate(AbstractServerConnectionFactory connectionFactory) {
TcpInboundGateway inGate = new TcpInboundGateway();
inGate.setConnectionFactory(connectionFactory);
inGate.setRequestChannel(fromTcp());
return inGate;
}
@Bean
public MessageChannel fromTcp() {
return new DirectChannel();
}
// Outgoing requests
@Bean
public ThreadAffinityClientConnectionFactory clientCF() {
TcpNetClientConnectionFactory tcpNetClientConnectionFactory = new TcpNetClientConnectionFactory(clientSocketHost, serverCF().getPort());
tcpNetClientConnectionFactory.setSingleUse(true);
ThreadAffinityClientConnectionFactory threadAffinityClientConnectionFactory = new ThreadAffinityClientConnectionFactory(
tcpNetClientConnectionFactory);
// Tested with the below too.
// threadAffinityClientConnectionFactory.setTaskExecutor(taskExecutor());
return threadAffinityClientConnectionFactory;
}
// Incoming requests
@Bean
public AbstractServerConnectionFactory serverCF() {
log.info("Server Connection Factory");
TcpNetServerConnectionFactory tcpNetServerConnectionFactory = new TcpNetServerConnectionFactory(clientSocketPort);
tcpNetServerConnectionFactory.setSerializer(new CustomSerializer());
tcpNetServerConnectionFactory.setDeserializer(new CustomDeserializer());
tcpNetServerConnectionFactory.setSingleUse(true);
return tcpNetServerConnectionFactory;
}
@Bean
public TaskExecutor taskExecutor () {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(50);
executor.setMaxPoolSize(100);
executor.setQueueCapacity(50);
executor.setAllowCoreThreadTimeOut(true);
executor.setKeepAliveSeconds(120);
return executor;
}
}
Did anyone have the same issue with more than 5 concurrent TCP client connections?
Thanks
Client Code:
@Component
@Slf4j
@RequiredArgsConstructor
public class ScheduledTaskService {
// Timeout in milliseconds
private static final int SOCKET_TIME_OUT = 18000;
private static final int BUFFER_SIZE = 32000;
private static final int ETX = 0x03;
private static final String HEADER = "ABCDEF ";
private static final String data = "FIXED DARATA";
private final AtomicInteger atomicInteger = new AtomicInteger();
@Async
@Scheduled(fixedDelay = 100000)
public void sendDataMessage() throws IOException, InterruptedException {
int numberOfRequests = 10;
Callable<String> executeMultipleSuccessfulRequestTask = () -> socketSendNReceive();
final Collection<Callable<String>> callables = new ArrayList<>();
IntStream.rangeClosed(1, numberOfRequests).forEach(i-> {
callables.add(executeMultipleSuccessfulRequestTask);
});
ExecutorService executorService = Executors.newFixedThreadPool(numberOfRequests);
List<Future<String>> taskFutureList = executorService.invokeAll(callables);
List<String> strings = taskFutureList.stream().map(future -> {
try {
return future.get(20000, TimeUnit.MILLISECONDS);
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
} catch (TimeoutException e) {
e.printStackTrace();
}
return "";
}).collect(Collectors.toList());
strings.forEach(string -> log.info("Message received from the server: {} ", string));
}
public String socketSendNReceive() throws IOException{
int requestCounter = atomicInteger.incrementAndGet();
String host = "localhost";
int port = 8000;
Socket socket = new Socket();
InetSocketAddress address = new InetSocketAddress(host, port);
socket.connect(address, SOCKET_TIME_OUT);
socket.setSoTimeout(SOCKET_TIME_OUT);
//Send the message to the server
OutputStream os = socket.getOutputStream();
BufferedOutputStream bos = new BufferedOutputStream(os);
bos.write(HEADER.getBytes());
bos.write(data.getBytes());
bos.write(ETX);
bos.flush();
// log.info("Message sent to the server : {} ", envio);
//Get the return message from the server
InputStream is = socket.getInputStream();
String response = receber(is);
log.info("Received response");
return response;
}
private String receber(InputStream in) throws IOException {
final StringBuffer stringBuffer = new StringBuffer();
int readLength;
byte[] buffer;
buffer = new byte[BUFFER_SIZE];
do {
if(Objects.nonNull(in)) {
log.info("Input Stream not null");
}
readLength = in.read(buffer);
log.info("readLength : {} ", readLength);
if(readLength > 0){
stringBuffer.append(new String(buffer),0,readLength);
log.info("String ******");
}
} while (buffer[readLength-1] != ETX);
buffer = null;
stringBuffer.deleteCharAt(stringBuffer.length()-1);
return stringBuffer.toString();
}
}
Since you are opening the connections all at the same time, you need to increase the backlog property on the server connection factory.
It defaults to 5.
/**
* The number of sockets in the connection backlog. Default 5;
* increase if you expect high connection rates.
* @param backlog The backlog to set.
*/
public void setBacklog(int backlog) {
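Applied to the server connection factory from the question, a minimal sketch (the value 50 is just an example):
@Bean
public AbstractServerConnectionFactory serverCF() {
    TcpNetServerConnectionFactory tcpNetServerConnectionFactory = new TcpNetServerConnectionFactory(clientSocketPort);
    tcpNetServerConnectionFactory.setSerializer(new CustomSerializer());
    tcpNetServerConnectionFactory.setDeserializer(new CustomDeserializer());
    tcpNetServerConnectionFactory.setSingleUse(true);
    // Allow more than the default 5 pending connections while the server is busy accepting.
    tcpNetServerConnectionFactory.setBacklog(50);
    return tcpNetServerConnectionFactory;
}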

Spring Kafka - how to fetch timestamp (event time) when message was produced

I have a requirement to fetch the timestamp (event time) at which a message was produced, in the Kafka consumer application. I am aware of the TimestampExtractor, which can be used with Kafka Streams, but my requirement is different as I am not using Streams to consume the messages.
My Kafka producer is as follows:
@Override
public void run(ApplicationArguments args) throws Exception {
List<String> names = Arrays.asList("priya", "dyser", "Ray", "Mark", "Oman", "Larry");
List<String> pages = Arrays.asList("blog", "facebook", "instagram", "news", "youtube", "about");
Runnable runnable = () -> {
String rPage = pages.get(new Random().nextInt(pages.size()));
String rName = names.get(new Random().nextInt(names.size()));
PageViewEvent pageViewEvent = new PageViewEvent(rName, rPage, Math.random() > .5 ? 10 : 1000);
Message<PageViewEvent> message = MessageBuilder
.withPayload(pageViewEvent)
.setHeader(KafkaHeaders.MESSAGE_KEY, pageViewEvent.getUserId().getBytes())
.build();
try {
this.pageViewsOut.send(message);
log.info("sent " + message);
} catch (Exception e) {
log.error(e);
}
};
The Kafka consumer is implemented using Spring Kafka's @KafkaListener.
@KafkaListener(topics = "test1", groupId = "json", containerFactory = "kafkaListenerContainerFactory")
public void receive(@Payload PageViewEvent data, @Headers MessageHeaders headers) {
LOG.info("Message received");
LOG.info("received data='{}'", data);
}
Container factory configuration
@Bean
public ConsumerFactory<String,PageViewEvent > priceEventConsumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
props.put(ConsumerConfig.GROUP_ID_CONFIG, "json");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(PageViewEvent.class));
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, PageViewEvent> priceEventsKafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, PageViewEvent> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(priceEventConsumerFactory());
return factory;
}
When I print the message the producer is sending, I get the data below:
[payload=PageViewEvent(userId=blog, page=about, duration=10),
headers={id=8ebdad85-e2f7-958f-500e-4560ac0970e5,
kafka_messageKey=[B#71975e1a, contentType=application/json,
timestamp=1553041963803}]
This does have a produced timestamp. How can I fetch the message's produced timestamp with Spring Kafka?
RECEIVED_TIMESTAMP means it is the timestamp of the record that was received, not the time at which it was received. We avoid putting it in TIMESTAMP to avoid inadvertent propagation to an outbound message.
You can use something like below:
final Producer<String, String> producer = new KafkaProducer<String, String>(properties);
long time = System.currentTimeMillis();
final CountDownLatch countDownLatch = new CountDownLatch(5);
int count=0;
try {
for (long index = time; index < time + 10; index++) {
String key = null;
count++;
if(count<=5)
key = "id_"+ Integer.toString(1);
else
key = "id_"+ Integer.toString(2);
final ProducerRecord<String, String> record =
new ProducerRecord<>(TOPIC, key, "B2B Sample Message: " + count);
producer.send(record, (metadata, exception) -> {
long elapsedTime = System.currentTimeMillis() - time;
if (metadata != null) {
System.out.printf("sent record(key=%s value=%s) " +
"meta(partition=%d, offset=%d) time=%d timestamp=%d\n",
record.key(), record.value(), metadata.partition(),
metadata.offset(), elapsedTime, metadata.timestamp());
System.out.println("Timestamp:: "+metadata.timestamp() );
} else {
exception.printStackTrace();
}
countDownLatch.countDown();
});
}
try {
countDownLatch.await(25, TimeUnit.SECONDS);
} catch (InterruptedException e) {
e.printStackTrace();
}
}finally {
producer.flush();
producer.close();
}
}
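On the consumer side, without any producer callback, the record's timestamp is exposed to a @KafkaListener as the KafkaHeaders.RECEIVED_TIMESTAMP header. A minimal sketch based on the listener above:
@KafkaListener(topics = "test1", groupId = "json", containerFactory = "kafkaListenerContainerFactory")
public void receive(@Payload PageViewEvent data,
                    @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long timestamp) {
    // timestamp is the epoch-millis event time set when the record was produced
    LOG.info("received data='{}' produced at {}", data, Instant.ofEpochMilli(timestamp));
}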

Failed to flush state store

I'm trying to create a leftJoin in Kafka Streams. It works fine for about 10 records and then crashes with an exception caused by a NullPointerException, with this code:
private static KafkaStreams getKafkaStreams() {
StreamsConfig streamsConfig = new StreamsConfig(getProperties());
KStreamBuilder builder = new KStreamBuilder();
KTable<String, Verkaeufer> umsatzTable = builder.table(Serdes.String(), EventstreamSerde.Verkaeufer(), CommonUtilsConstants.TOPIC_VERKAEUFER_STAMMDATEN);
KStream<String, String> verkaeuferStream = builder.stream(CommonUtilsConstants.TOPIC_ANZAHL_UMSATZ_PER_VERKAEUFER);
KStream<String, String> tuttiStream = verkaeuferStream.leftJoin(umsatzTable,
(tutti, verkaeufer) -> ("Vorname=" + verkaeufer.getVorname().toString() +",Nachname=" +verkaeufer.getNachname().toString() +"," +tutti.toString()), Serdes.String(), Serdes.String());
tuttiStream.to(Serdes.String(), Serdes.String(), CommonUtilsConstants.TOPIC_TUTTI);
return new KafkaStreams(builder, streamsConfig);
}
StreamsConfig looks like this:
private static Properties getProperties() {
Properties props = new Properties();
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, CommonUtilsConstants.BOOTSTRAP_SERVER_CONFIGURATION);
props.put(StreamsConfig.APPLICATION_ID_CONFIG, CommonUtilsConstants.GID_TUTTI);
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,Serdes.String().getClass());
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, "1000");
return props;
}
Full Stack Trace:
22:19:36.550 [gid-tutti-8fe6be58-d5c5-41ce-982d-88081b98004e-StreamThread-1] ERROR o.a.k.s.p.internals.StreamThread - stream-thread [gid-tutti-8fe6be58-d5c5-41ce-982d-88081b98004e-StreamThread-1] Failed to commit StreamTask 0_0 state: org.apache.kafka.streams.errors.ProcessorStateException: task [0_0] Failed to flush state store KTABLE-SOURCE-STATE-STORE-0000000000
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:262)
at org.apache.kafka.streams.processor.internals.AbstractTask.flushState(AbstractTask.java:190)
at org.apache.kafka.streams.processor.internals.StreamTask.flushState(StreamTask.java:282)
at org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:264)
at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
at org.apache.kafka.streams.processor.internals.StreamTask.commitImpl(StreamTask.java:259)
at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:253)
at org.apache.kafka.streams.processor.internals.StreamThread.commitOne(StreamThread.java:815)
at org.apache.kafka.streams.processor.internals.StreamThread.access$2800(StreamThread.java:73)
at org.apache.kafka.streams.processor.internals.StreamThread$2.apply(StreamThread.java:797)
at org.apache.kafka.streams.processor.internals.StreamThread.performOnStreamTasks(StreamThread.java:1448)
at org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:789)
at org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:778)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:567)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:527) Caused by: java.lang.NullPointerException: null
at java.lang.String.<init>(String.java:143)
at ch.wesr.eventstream.commonutils.serde.GsonDeserializer.deserialize(GsonDeserializer.java:38)
at org.apache.kafka.streams.state.StateSerdes.valueFrom(StateSerdes.java:163)
at org.apache.kafka.streams.state.internals.CachingKeyValueStore.putAndMaybeForward(CachingKeyValueStore.java:90)
at org.apache.kafka.streams.state.internals.CachingKeyValueStore.access$000(CachingKeyValueStore.java:34)
at org.apache.kafka.streams.state.internals.CachingKeyValueStore$1.apply(CachingKeyValueStore.java:78)
at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:145)
at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:103)
at org.apache.kafka.streams.state.internals.ThreadCache.flush(ThreadCache.java:97)
at org.apache.kafka.streams.state.internals.CachingKeyValueStore.flush(CachingKeyValueStore.java:107)
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:260)
... 14 common frames omitted
Update:
This is what the GsonDeserializer looks like:
public class GsonDeserializer<T> implements Deserializer<T>{
public static final String CONFIG_VALUE_CLASS = "default.value.deserializer.class";
public static final String CONFIG_KEY_CLASS = "default.key.deserializer.class";
private Class<T> deserializedClass;
private Gson gson = new GsonBuilder().create();
public GsonDeserializer() {}
@Override
public void configure(Map<String, ?> config, boolean isKey) {
String configKey = isKey ? CONFIG_KEY_CLASS : CONFIG_VALUE_CLASS;
String clsName = String.valueOf(config.get(configKey));
try {
if (deserializedClass == null) {
deserializedClass = (Class<T>) Class.forName(clsName);
}
} catch (ClassNotFoundException e) {
System.err.printf("Failed to configure GsonDeserializer. " +
"Did you forget to specify the '%s' property ?%n",
configKey);
System.out.println(e.getMessage());
}
}
@Override
public T deserialize(String s, byte[] bytes) {
return gson.fromJson(new String(bytes), deserializedClass);
}
@Override
public void close() {}
}
As long as the cache is not flushed, your deserializer is never called. That's why it doesn't fail in the beginning, and you can increase the time until it fails via the cache size parameter and the commit interval (we flush on commit).
Looking at your code for GsonDeserializer, it seems that new String(bytes) fails with the NPE (the String constructor cannot take null as a parameter); your deserializer must guard against bytes == null and return null directly in that case.
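A minimal sketch of the guarded deserialize method (same logic as above, with the charset made explicit):
@Override
public T deserialize(String topic, byte[] bytes) {
    // Null values (e.g. tombstones or missing records) must not reach the String constructor.
    if (bytes == null) {
        return null;
    }
    return gson.fromJson(new String(bytes, StandardCharsets.UTF_8), deserializedClass);
}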

send multiple jms messages in one transaction

I have to send a message to 2 different queues (queue1 and queue2). However, I want to roll back if the send fails for either queue (queue1 or queue2).
My source code looks as follows. Can anyone offer some input on this?
public void sendMessage(final Map<String, String> mapMessage) {
jmsTemplate.send(queue1, session -> {
MapMessage message = session.createMapMessage();
Iterator<Entry<String, String>> it = mapMessage.entrySet().iterator();
while (it.hasNext()) {
Map.Entry<String, String> pair = it.next();
message.setStringProperty(pair.getKey(), pair.getValue());
}
message.setJMSRedelivered(true);
message.setJMSCorrelationID(UUID.randomUUID().toString().replaceAll("-", ""));
return message;
});
jmsTemplate.send(queue2, session -> {
MapMessage message = session.createMapMessage();
Iterator<Entry<String, String>> it = mapMessage.entrySet().iterator();
while (it.hasNext()) {
Map.Entry<String, String> pair = it.next();
message.setStringProperty(pair.getKey(), pair.getValue());
}
message.setJMSRedelivered(true);
message.setJMSCorrelationID(UUID.randomUUID().toString().replaceAll("-", ""));
return message;
});
}
Start a transaction before entering the sendMessage method, e.g. with @Transactional; see the Spring Framework Reference Manual.
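A minimal sketch of that approach, assuming a single JMS ConnectionFactory bean (the bean wiring here is illustrative, not from the question): the JmsTemplate is marked session-transacted and a JmsTransactionManager drives the transaction, so both sends commit or roll back together.
@Configuration
@EnableTransactionManagement
public class JmsTxConfig {

    @Bean
    public JmsTransactionManager transactionManager(ConnectionFactory connectionFactory) {
        return new JmsTransactionManager(connectionFactory);
    }

    @Bean
    public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
        JmsTemplate template = new JmsTemplate(connectionFactory);
        // Let sends participate in the surrounding Spring-managed transaction.
        template.setSessionTransacted(true);
        return template;
    }
}
Then annotate the existing method, leaving its body unchanged:
@Transactional // both jmsTemplate.send(...) calls use one transacted session; an exception rolls both back
public void sendMessage(final Map<String, String> mapMessage) {
    // the two jmsTemplate.send(queue1, ...) and jmsTemplate.send(queue2, ...) calls as shown above
}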
