How to disable logging of all messages in a Kafka batch in case of an exception? (Spring)

When using @KafkaListener with batches, the error handler logs the content of the full batch (all messages) when an exception occurs.
How can I make this less verbose? I'd like to avoid spamming the log files with all the messages and only see the actual exception.
Here is a minimal example of what my consumer currently looks like:
@Component
class TestConsumer {

    @Bean
    fun kafkaBatchListenerContainerFactory(kafkaProperties: KafkaProperties): ConcurrentKafkaListenerContainerFactory<String, String> {
        val configs = kafkaProperties.buildConsumerProperties()
        configs[ConsumerConfig.MAX_POLL_RECORDS_CONFIG] = 10000
        val factory = ConcurrentKafkaListenerContainerFactory<String, String>()
        factory.consumerFactory = DefaultKafkaConsumerFactory(configs)
        factory.isBatchListener = true
        return factory
    }

    @KafkaListener(
        topics = ["myTopic"],
        containerFactory = "kafkaBatchListenerContainerFactory"
    )
    fun batchListen(values: List<ConsumerRecord<String, String>>) {
        // Something that might throw an exception in rare cases.
    }
}

What version are you using?
The onlyLogRecordMetadata container property was added in 2.2.14:
/**
 * Set to false to log {@code record.toString()} in log messages instead
 * of {@code topic-partition@offset}.
 * @param onlyLogRecordMetadata false to log the entire record.
 * @since 2.2.14
 */
public void setOnlyLogRecordMetadata(boolean onlyLogRecordMetadata) {
    this.onlyLogRecordMetadata = onlyLogRecordMetadata;
}
It has been true by default since version 2.7 (which is why the javadocs now read that way).
This was the previous javadoc:
/**
 * Set to true to only log {@code topic-partition@offset} in log messages instead
 * of {@code record.toString()}.
 * @param onlyLogRecordMetadata true to only log the topic/partition/offset.
 * @since 2.2.14
 */
Also, starting with version 2.5, you can set the log level on the error handler:
/**
 * Set the level at which the exception thrown by this handler is logged.
 * @param logLevel the level (default ERROR).
 */
public void setLogLevel(KafkaException.Level logLevel) {
    Assert.notNull(logLevel, "'logLevel' cannot be null");
    this.logLevel = logLevel;
}
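Putting the two together, here is a minimal sketch in Java (not from the original answer) of a batch container factory that only logs record metadata and lowers the error handler's log level. It assumes Spring Kafka 2.5+; the RecoveringBatchErrorHandler, the FixedBackOff values, and the WARN level are illustrative choices, not requirements:

import org.springframework.context.annotation.Bean;
import org.springframework.kafka.KafkaException;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.RecoveringBatchErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaBatchListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {

    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setBatchListener(true);

    // Log only topic-partition@offset instead of record.toString()
    // (this is already the default from version 2.7 onwards).
    factory.getContainerProperties().setOnlyLogRecordMetadata(true);

    // Since 2.5 the log level of the error handler itself can be lowered.
    RecoveringBatchErrorHandler errorHandler =
            new RecoveringBatchErrorHandler(new FixedBackOff(0L, 2L));
    errorHandler.setLogLevel(KafkaException.Level.WARN);
    factory.setBatchErrorHandler(errorHandler);

    return factory;
}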

Related

Caused by: java.net.SocketException: Connection reset by peer: socket write error

I'm trying to connect to RabbitMQ over SSL using Spring Boot 2.7.4 and Java 11.0.14. I was following this example here:
I have added the following configurations:
properties file:
# RabbitMQ Server configuration file.
rabbit.username=admin
rabbit.password=admin
rabbit.host=localhost
rabbit.port=5671
rabbit.ssl=TLSv1.2
rabbit.keystore.name=client_key.p12
rabbit.keystore.password=rabbitstore
rabbit.truststore=server_store.jks
rabbit.truststore.password=rabbitstore
client_key.p12 and server_store.jks are in my classpath.
Configuration Class:
@Configuration
@PropertySource("classpath:rabbit.properties")
public class RabbitConfiguration {

    /**
     * Default sample queue name to respond to requests from clients.
     */
    public static final String DEFAULT_QUEUE = "sample_queue";

    /**
     * Environment properties from the RabbitMQ configuration file.
     */
    @Autowired
    private Environment env;

    /**
     * Establish a connection to a RabbitMQ server.
     * @return Rabbit connection factory for RabbitMQ access.
     * @throws IOException If wrong parameters are used for the connection.
     */
    @Bean
    public RabbitConnectionFactoryBean connectionFactoryBean() throws IOException {
        RabbitConnectionFactoryBean connectionFactoryBean = new RabbitConnectionFactoryBean();
        connectionFactoryBean.setHost(Objects.requireNonNull(env.getProperty("rabbit.host")));
        connectionFactoryBean.setPort(Integer.parseInt(Objects.requireNonNull(env.getProperty("rabbit.port"))));
        connectionFactoryBean.setUsername(Objects.requireNonNull(env.getProperty("rabbit.username")));
        connectionFactoryBean.setPassword(Objects.requireNonNull(env.getProperty("rabbit.password")));

        // SSL configuration, if set
        if (env.getProperty("rabbit.ssl") != null) {
            connectionFactoryBean.setUseSSL(true);
            connectionFactoryBean.setSslAlgorithm(Objects.requireNonNull(env.getProperty("rabbit.ssl")));
            // This information should be stored safely!
            connectionFactoryBean.setKeyStore(Objects.requireNonNull(env.getProperty("rabbit.keystore.name")));
            connectionFactoryBean.setKeyStorePassphrase(Objects.requireNonNull(env.getProperty("rabbit.keystore.password")));
            connectionFactoryBean.setTrustStore(Objects.requireNonNull(env.getProperty("rabbit.truststore")));
            connectionFactoryBean.setTrustStorePassphrase(Objects.requireNonNull(env.getProperty("rabbit.truststore.password")));
        }
        return connectionFactoryBean;
    }

    /**
     * Connection factory which establishes a RabbitMQ connection from the factory bean.
     * @param connectionFactoryBean Connection factory bean to create the connection.
     * @return A connection factory to create connections.
     * @throws Exception If wrong parameters are used for the connection.
     */
    @Bean(name = "GEO_RABBIT_CONNECTION")
    public ConnectionFactory connectionFactory(RabbitConnectionFactoryBean connectionFactoryBean) throws Exception {
        return new CachingConnectionFactory(Objects.requireNonNull(connectionFactoryBean.getObject()));
    }

    /**
     * Queue initialization for the queue to listen on.
     * @return A queue for the listening receiver.
     */
    @Bean
    public Queue queue() {
        // Create a new queue to handle incoming responses
        return new Queue(DEFAULT_QUEUE, false, false, false, null);
    }

    /**
     * Generates a simple message listener container.
     * @param connectionFactory Established connection to the RabbitMQ server.
     * @param listenerAdapter Listener adapter to listen for messages.
     * @return A simple message listener container for handling requests.
     */
    @Bean
    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
            MessageListenerAdapter listenerAdapter) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueueNames(DEFAULT_QUEUE);
        container.setMessageListener(listenerAdapter);
        container.setAcknowledgeMode(AcknowledgeMode.AUTO);
        return container;
    }

    /**
     * Message listener adapter that wraps the receiver.
     * @param deviceMonitoringReceiver Receiver to listen with.
     * @return A message listener adapter to receive messages.
     */
    @Bean
    public MessageListenerAdapter listenerAdapter(DeviceMonitoringReceiver deviceMonitoringReceiver) {
        return new MessageListenerAdapter(deviceMonitoringReceiver, "receiveMessage");
    }
}
I have also updated the RabbitMQ configuration:
[
  {rabbit, [
    {ssl_listeners, [5671]},
    {ssl_options, [{cacertfile, "D:\\tls-gen\\basic\\result\\ca_certificate.pem"},
                   {certfile, "D:\\tls-gen\\basic\\result\\server_seliiwvdec53152_certificate.pem"},
                   {keyfile, "D:\\tls-gen\\basic\\result\\server_seliiwvdec53152_key.pem"},
                   {verify, verify_peer},
                   {fail_if_no_peer_cert, true}]}
  ]}
].
But the application does not start and throws:
Caused by: java.net.SocketException: Connection reset by peer: socket write error
I resolved the issue by adding this to the configuration:
ssl_options.password = xxx
The official documentation says this setting is optional, so I'm not sure why it was required here, but the issue is now resolved.
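For reference, if you stay with the classic rabbitmq.config format shown above, the equivalent (as far as I can tell) is to supply the private key passphrase as a password tuple inside ssl_options. This is only a sketch using the paths from the question; "xxx" stands in for the real passphrase:

[
  {rabbit, [
    {ssl_listeners, [5671]},
    {ssl_options, [{cacertfile, "D:\\tls-gen\\basic\\result\\ca_certificate.pem"},
                   {certfile, "D:\\tls-gen\\basic\\result\\server_seliiwvdec53152_certificate.pem"},
                   {keyfile, "D:\\tls-gen\\basic\\result\\server_seliiwvdec53152_key.pem"},
                   {password, "xxx"},
                   {verify, verify_peer},
                   {fail_if_no_peer_cert, true}]}
  ]}
].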

How to periodically publish a message to ActiveMQ with Spring Integration DSL

I installed ActiveMQ and want to periodically (say, every 10 seconds) send a message to "my.queue".
I'm struggling to comprehend the Spring Integration DSL language.
I need something like:
IntegrationFlows.from(every 5 seconds)
.send(message to "my.queue")
Yes, you can do that with the Spring Integration Java DSL and its IntegrationFlow abstraction. To create a periodic task, you need to start the flow with this IntegrationFlows factory method:
/**
 * Provides {@link Supplier} as source of messages to the integration flow
 * which will be triggered by a <b>provided</b>
 * {@link org.springframework.integration.endpoint.SourcePollingChannelAdapter}.
 * @param messageSource the {@link Supplier} to populate.
 * @param endpointConfigurer the {@link Consumer} to provide more options for the
 * {@link org.springframework.integration.config.SourcePollingChannelAdapterFactoryBean}.
 * @param <T> the supplier type.
 * @return new {@link IntegrationFlowBuilder}.
 * @see Supplier
 */
public static <T> IntegrationFlowBuilder fromSupplier(Supplier<T> messageSource,
        Consumer<SourcePollingChannelAdapterSpec> endpointConfigurer) {
The Supplier may return an object you'd like to send as a payload downstream. The second, consumer argument can be configured with a poller:
.poller(p -> p.fixedDelay(1000))
This way, every second a message is going to be created from the supplied payload and sent downstream.
To send a message to ActiveMQ, you need to use the org.springframework.integration.jms.dsl.Jms factory and its method for the respective channel adapter:
/**
 * The factory to produce a {@link JmsOutboundChannelAdapterSpec}.
 * @param connectionFactory the JMS ConnectionFactory to build on
 * @return the {@link JmsOutboundChannelAdapterSpec} instance
 */
public static JmsOutboundChannelAdapterSpec.JmsOutboundChannelSpecTemplateAware outboundAdapter(
        ConnectionFactory connectionFactory) {
The result of this factory has to be used in the DSL callback like:
/**
 * Populate a {@link ServiceActivatingHandler} for the provided
 * {@link MessageHandler} implementation.
 * Can be used as Java 8 Lambda expression:
 * <pre class="code">
 * {@code
 *  .handle(m -> logger.info(m.getPayload())
 * }
 * </pre>
 * @param messageHandler the {@link MessageHandler} to use.
 * @return the current {@link BaseIntegrationFlowDefinition}.
 */
public B handle(MessageHandler messageHandler) {
All the info is present in the docs: https://docs.spring.io/spring-integration/reference/html/dsl.html#java-dsl
Something like this:
@Bean
public IntegrationFlow jmsPeriodicFlow() {
    return IntegrationFlows.fromSupplier(() -> "hello",
                e -> e.poller(p -> p.fixedDelay(5000)))
            .handle(Jms.outboundAdapter(jmsConnectionFactory())
                    .destination("my.queue"))
            .get();
}

How to set the maximum number of connection retries with Spring AMQP

I have a scenario where my RabbitMQ instance is not always available, and I would like to set the maximum number of times a connection retry happens. Is this possible with Spring AMQP?
For example:
@Bean
public ConnectionFactory connectionFactory() {
    CachingConnectionFactory factory = new CachingConnectionFactory();
    factory.setUri("amqp://...");
    // ... try the URI connection at most 4 times, then fail if there is still no connection
    return factory;
}
Message producers will only try to create a connection when you send a message.
Message consumers (listener container factories) will retry indefinitely.
You can add a ConnectionListener to the connection factory and stop() the listener containers after some number of failures (a sketch follows after the interface below).
@FunctionalInterface
public interface ConnectionListener {

    /**
     * Called when a new connection is established.
     * @param connection the connection.
     */
    void onCreate(Connection connection);

    /**
     * Called when a connection is closed.
     * @param connection the connection.
     * @see #onShutDown(ShutdownSignalException)
     */
    default void onClose(Connection connection) {
    }

    /**
     * Called when a connection is force closed.
     * @param signal the shut down signal.
     * @since 2.0
     */
    default void onShutDown(ShutdownSignalException signal) {
    }

    /**
     * Called when a connection couldn't be established.
     * @param exception the exception thrown.
     * @since 2.2.17
     */
    default void onFailed(Exception exception) {
    }

}
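As a rough sketch (not a drop-in solution), such a listener could count failures via onFailed() (available since Spring AMQP 2.2.17) and stop all @RabbitListener containers once a limit is reached. The class name, the bean wiring, and the maximum of 4 attempts are illustrative assumptions:

import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.Connection;
import org.springframework.amqp.rabbit.connection.ConnectionListener;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.context.Lifecycle;
import org.springframework.stereotype.Component;

@Component
public class MaxRetriesConnectionListener implements ConnectionListener {

    private static final int MAX_ATTEMPTS = 4; // illustrative limit

    private final AtomicInteger failures = new AtomicInteger();

    private final RabbitListenerEndpointRegistry registry;

    public MaxRetriesConnectionListener(CachingConnectionFactory connectionFactory,
            RabbitListenerEndpointRegistry registry) {

        this.registry = registry;
        connectionFactory.addConnectionListener(this);
    }

    @Override
    public void onCreate(Connection connection) {
        this.failures.set(0); // reset the counter once a connection succeeds
    }

    @Override
    public void onFailed(Exception exception) {
        if (this.failures.incrementAndGet() >= MAX_ATTEMPTS) {
            // Stop the listener containers so they no longer retry the connection.
            this.registry.getListenerContainers().forEach(Lifecycle::stop);
        }
    }

}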

How to test a MessageChannel in Spring Integration?

In a test, I'm trying to verify that a message passed through a specific channel, or alternatively to get the message from that channel.
My flow is: controller -> gateway -> ServiceActivator
private final Gateway gateway;

public ResponseEntity<Map<String, String>> submit(String applicationId, ApplicationDto applicationDto) {
    applicationDto.setApplicationId(applicationId);
    gateway.submitApplication(applicationDto);
    return ResponseEntity.ok(Map.of(MESSAGE, "Accepted submit"));
}
The gateway:
@Gateway(requestChannel = "submitApplicationChannel", replyChannel = "replySubmitApplicationChannel")
WorkflowPayload submitApplication(ApplicationDto applicationDto);
The pipeline:
@Bean
MessageChannel submitApplicationChannel() {
    return new DirectChannel();
}
My test sends a request to start the flow:
@Test
@DisplayName("Application Submission")
void submissionTest() throws Exception {
    mockMvc.perform(MockMvcRequestBuilders
            .post("/api/v1/applications/contract-validation/" + APPLICATION_ID)
            .contentType(MediaType.APPLICATION_JSON)
            .content(objectMapper.writeValueAsString(payload)))
        .andExpect(status().isAccepted())
        .andReturn();
    // Check HERE if the message passed through the channel
}
Can you give me a hand?
In your test, add a ChannelInterceptor to the submitApplicationChannel before calling the gateway (a sketch follows after the interface below).
public interface ChannelInterceptor {

    /**
     * Invoked before the Message is actually sent to the channel.
     * This allows for modification of the Message if necessary.
     * If this method returns {@code null} then the actual
     * send invocation will not occur.
     */
    @Nullable
    default Message<?> preSend(Message<?> message, MessageChannel channel) {
        return message;
    }

    /**
     * Invoked immediately after the send invocation. The boolean
     * value argument represents the return value of that invocation.
     */
    default void postSend(Message<?> message, MessageChannel channel, boolean sent) {
    }

    /**
     * Invoked after the completion of a send regardless of any exception that
     * have been raised thus allowing for proper resource cleanup.
     * <p>Note that this will be invoked only if {@link #preSend} successfully
     * completed and returned a Message, i.e. it did not return {@code null}.
     * @since 4.1
     */
    default void afterSendCompletion(
            Message<?> message, MessageChannel channel, boolean sent, @Nullable Exception ex) {
    }

    /**
     * Invoked as soon as receive is called and before a Message is
     * actually retrieved. If the return value is 'false', then no
     * Message will be retrieved. This only applies to PollableChannels.
     */
    default boolean preReceive(MessageChannel channel) {
        return true;
    }

    /**
     * Invoked immediately after a Message has been retrieved but before
     * it is returned to the caller. The Message may be modified if
     * necessary; {@code null} aborts further interceptor invocations.
     * This only applies to PollableChannels.
     */
    @Nullable
    default Message<?> postReceive(Message<?> message, MessageChannel channel) {
        return message;
    }

    /**
     * Invoked after the completion of a receive regardless of any exception that
     * have been raised thus allowing for proper resource cleanup.
     * <p>Note that this will be invoked only if {@link #preReceive} successfully
     * completed and returned {@code true}.
     * @since 4.1
     */
    default void afterReceiveCompletion(@Nullable Message<?> message, MessageChannel channel,
            @Nullable Exception ex) {
    }

}
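Here is a sketch of how the test could use it, assuming it is a @SpringBootTest with MockMvc and that the submitApplicationChannel bean can be autowired (the names mockMvc, APPLICATION_ID, payload, and objectMapper come from the question; the AssertJ assertion at the end is just an example). ChannelInterceptor, Message, and MessageChannel are from org.springframework.messaging:

@Autowired
private DirectChannel submitApplicationChannel;

@Test
@DisplayName("Application Submission")
void submissionTest() throws Exception {
    List<Message<?>> intercepted = new ArrayList<>();
    // Capture every message sent to the channel.
    submitApplicationChannel.addInterceptor(new ChannelInterceptor() {

        @Override
        public Message<?> preSend(Message<?> message, MessageChannel channel) {
            intercepted.add(message);
            return message;
        }

    });

    mockMvc.perform(MockMvcRequestBuilders
            .post("/api/v1/applications/contract-validation/" + APPLICATION_ID)
            .contentType(MediaType.APPLICATION_JSON)
            .content(objectMapper.writeValueAsString(payload)))
        .andExpect(status().isAccepted());

    // The message passed through submitApplicationChannel exactly once.
    assertThat(intercepted).hasSize(1);
}

Spring Integration also ships a spring-integration-test module (e.g. MockIntegration and MockMessageHandler), which may be worth a look for this kind of test as well.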

How to aggregate messages with identical headers in Spring Integration

public static void main(String[] args) throws InterruptedException {
    AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
    ctx.register(Main.class);
    ctx.refresh();
    DirectChannel channel1 = ctx.getBean("channel1", DirectChannel.class);
    ctx.getBean("channel2", PublishSubscribeChannel.class).subscribe(message ->
            System.out.println("Output: " + message));
    channel1.send(MessageBuilder.withPayload("p1")
            .setHeader(CORRELATION_ID, 1)
            .setHeader(SEQUENCE_SIZE, 2)
            .setHeader(SEQUENCE_NUMBER, 1)
            .setHeader("a", 1)
            .build());
    channel1.send(MessageBuilder.withPayload("p2")
            .setHeader(CORRELATION_ID, 1)
            .setHeader(SEQUENCE_SIZE, 2)
            .setHeader(SEQUENCE_NUMBER, 2)
            .setHeader("a", 2)
            .build());
}

@Bean
public MessageChannel channel1() {
    return MessageChannels.direct().get();
}

@Bean
public MessageChannel channel2() {
    return MessageChannels.publishSubscribe().get();
}

@Bean
public IntegrationFlow flow1() {
    return IntegrationFlows
            .from("channel1")
            .aggregate(a -> a
                    .releaseStrategy(new SequenceSizeReleaseStrategy())
                    .expireGroupsUponCompletion(true)
                    .sendPartialResultOnExpiry(true))
            .channel("channel2")
            .get();
}
Output: GenericMessage [payload=[p1, p2], headers={sequenceNumber=2, a=2, correlationId=1, id=b5e51041-c967-1bb4-1601-7e468ae28527, sequenceSize=2, timestamp=1580475773518}]
Headers "a" and "sequenceNumber" were overwritten.
How to aggregate messages with the identical headers?
It must be so
Output: GenericMessage [payload=[p1, p2], headers={sequenceNumber=[1,2], a=[1, 2], correlationId=1, id=b5e51041-c967-1bb4-1601-7e468ae28527, sequenceSize=2, timestamp=1580475773518}]
See AbstractAggregatingMessageGroupProcessor:
/**
 * Specify a {@link Function} to map {@link MessageGroup} into composed headers for output message.
 * @param headersFunction the {@link Function} to use.
 * @since 5.2
 */
public void setHeadersFunction(Function<MessageGroup, Map<String, Object>> headersFunction) {
and also:
/**
 * The {@link Function} implementation for a default headers merging in the aggregator
 * component. It takes all the unique headers from all the messages in group and removes
 * those which are conflicted: have different values from different messages.
 *
 * @author Artem Bilan
 *
 * @since 5.2
 *
 * @see AbstractAggregatingMessageGroupProcessor
 */
public class DefaultAggregateHeadersFunction implements Function<MessageGroup, Map<String, Object>> {
Or the long-existing:
/**
 * This default implementation simply returns all headers that have no conflicts among the group. An absent header
 * on one or more Messages within the group is not considered a conflict. Subclasses may override this method with
 * more advanced conflict-resolution strategies if necessary.
 * @param group The message group.
 * @return The aggregated headers.
 */
protected Map<String, Object> aggregateHeaders(MessageGroup group) {
So, what you need in your aggregate() configuration is the outputProcessor(MessageGroupProcessor outputProcessor) option; see the sketch after the docs link below.
See docs for more info: https://docs.spring.io/spring-integration/docs/5.2.3.RELEASE/reference/html/message-routing.html#aggregatingmessagehandler
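For example, a sketch of how flow1() could be rewritten (assuming Spring Integration 5.2+): a DefaultAggregatingMessageGroupProcessor with a custom headers function that collects the "a" and sequenceNumber header values into lists, as in the desired output. The choice of which headers to merge is illustrative:

import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;

import org.springframework.integration.IntegrationMessageHeaderAccessor;
import org.springframework.integration.aggregator.DefaultAggregatingMessageGroupProcessor;
import org.springframework.integration.aggregator.SequenceSizeReleaseStrategy;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Bean
public IntegrationFlow flow1() {
    DefaultAggregatingMessageGroupProcessor outputProcessor =
            new DefaultAggregatingMessageGroupProcessor();
    // Merge selected headers into lists instead of dropping conflicting values.
    outputProcessor.setHeadersFunction(group -> {
        Map<String, Object> headers = new HashMap<>();
        headers.put("a", group.getMessages()
                .stream()
                .map(m -> m.getHeaders().get("a"))
                .collect(Collectors.toList()));
        headers.put(IntegrationMessageHeaderAccessor.SEQUENCE_NUMBER, group.getMessages()
                .stream()
                .map(m -> m.getHeaders().get(IntegrationMessageHeaderAccessor.SEQUENCE_NUMBER))
                .collect(Collectors.toList()));
        return headers;
    });

    return IntegrationFlows
            .from("channel1")
            .aggregate(a -> a
                    .outputProcessor(outputProcessor)
                    .releaseStrategy(new SequenceSizeReleaseStrategy())
                    .expireGroupsUponCompletion(true)
                    .sendPartialResultOnExpiry(true))
            .channel("channel2")
            .get();
}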
