Multiple Kafka Producer Instances for Each HTTP Request - spring-boot

I have a REST endpoint that can be invoked by multiple users at the same time. This endpoint invokes a transactional Kafka producer.
As I understand it, the same Kafka producer instance cannot be used concurrently when transactions are enabled.
How can I efficiently create a new Kafka producer instance for each HTTP request?
// Kafka transactions enabled
producerProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
producerProps.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "prod-1-");

@Service
public class ProducerService {

    @Autowired
    private KafkaTemplate<Object, Object> kafkaTemplate;

    public void postMessage(final MyUser message) {
        // wrap the send call in a transaction
        this.kafkaTemplate.executeInTransaction(operations -> operations.send("custom", null, message));
    }
}

See the javadocs for the DefaultKafkaProducerFactory. It maintains a cache of producers for producer-initiated transactions.
/**
 * The {@link ProducerFactory} implementation for a {@code singleton} shared {@link Producer} instance.
 * <p>
 * This implementation will return the same {@link Producer} instance (if transactions are
 * not enabled) for the provided {@link Map} {@code configs} and optional {@link Serializer}
 * implementations on each {@link #createProducer()} invocation.
 ...
 * Setting {@link #setTransactionIdPrefix(String)} enables transactions; in which case, a
 * cache of producers is maintained; closing a producer returns it to the cache. The
 * producers are closed and the cache is cleared when the factory is destroyed, the
 * application context stopped, or the {@link #reset()} method is called.
 ...
 */
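Putting that together, a minimal sketch of such a factory (the bootstrap address and the JSON serializers here are assumptions, not taken from the question): because the factory caches transactional producers, the ProducerService above can keep injecting a single KafkaTemplate and does not need one producer per HTTP request.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.serializer.JsonSerializer;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<Object, Object> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, JsonSerializer.class); // assumption
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class); // assumption
        DefaultKafkaProducerFactory<Object, Object> factory = new DefaultKafkaProducerFactory<>(props);
        // setting a prefix enables transactions and the per-transaction producer cache;
        // idempotence is enabled automatically for transactional producers
        factory.setTransactionIdPrefix("prod-1-");
        return factory;
    }

    @Bean
    public KafkaTemplate<Object, Object> kafkaTemplate(ProducerFactory<Object, Object> producerFactory) {
        return new KafkaTemplate<>(producerFactory);
    }
}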

Related

Kafka ConsumerInterceptor onCommit not being called when using transactions

I'm using Spring Kafka in a Spring Boot application. I'm attempting to use a Kafka ConsumerInterceptor to intercept when offsets are committed.
This works when producer transactions are not enabled, but once transactions are turned on, Interceptor::onCommit is no longer called.
In the following minimal example everything works as expected:
@SpringBootApplication
@EnableKafka
class Application {
    @KafkaListener(topics = ["test"])
    fun onMessage(message: String) {
        log.warn("onMessage: $message")
    }
}
Interceptor:
class Interceptor : ConsumerInterceptor<String, String> {
    override fun onCommit(offsets: MutableMap<TopicPartition, OffsetAndMetadata>) {
        log.warn("onCommit: $offsets")
    }

    override fun onConsume(records: ConsumerRecords<String, String>): ConsumerRecords<String, String> {
        log.warn("onConsume: $records")
        return records
    }

    // no-op implementations of the remaining ConsumerInterceptor methods
    override fun close() {}
    override fun configure(configs: MutableMap<String, *>?) {}
}
Application config:
spring:
kafka:
consumer:
enable-auto-commit: false
auto-offset-reset: earliest
properties:
"interceptor.classes": com.example.Interceptor
group-id: test-group
listener:
ack-mode: record
Inside a test using @EmbeddedKafka:
@Test
fun sendMessage() {
    kafkaTemplate.send("test", "id", "sent message").get() // block so we don't end before the consumer gets the message
}
This outputs what I would expect:
onConsume: org.apache.kafka.clients.consumer.ConsumerRecords@6a646f3c
onMessage: sent message
onCommit: {test-0=OffsetAndMetadata{offset=1, leaderEpoch=null, metadata=''}}
However, when I enable transactions by providing a transaction-id-prefix, the Interceptor's onCommit is no longer called.
My updated config only adds:
spring:
  kafka:
    producer:
      transaction-id-prefix: tx-id-
And the test is updated to wrap send in a transaction:
@Test
fun sendMessage() {
    kafkaTemplate.executeInTransaction {
        kafkaTemplate.send("test", "a", "sent message").get()
    }
}
With this change my log output is now only:
onConsume: org.apache.kafka.clients.consumer.ConsumerRecords@738b5968
onMessage: sent message
The Interceptor's onConsume method is called and the @KafkaListener receives the message, but onCommit is never called.
Does anyone happen to know what's happening here? Are my expectations about what I should see here incorrect?
Offsets are not committed via the consumer when using transactions (exactly once semantics). Instead, the offset is committed via the producer.
KafkaProducer...
/**
 * Sends a list of specified offsets to the consumer group coordinator, and also marks
 * those offsets as part of the current transaction. These offsets will be considered
 * committed only if the transaction is committed successfully. The committed offset should
 * be the next message your application will consume, i.e. lastProcessedMessageOffset + 1.
 * <p>
 * This method should be used when you need to batch consumed and produced messages
 * together, typically in a consume-transform-produce pattern. Thus, the specified
 * {@code groupMetadata} should be extracted from the used {@link KafkaConsumer consumer} via
 * {@link KafkaConsumer#groupMetadata()} to leverage consumer group metadata. This will provide
 * stronger fencing than just supplying the {@code consumerGroupId} and passing in {@code new ConsumerGroupMetadata(consumerGroupId)},
 * however note that the full set of consumer group metadata returned by {@link KafkaConsumer#groupMetadata()}
 * requires the brokers to be on version 2.5 or newer to understand.
 *
 * <p>
 * Note, that the consumer should have {@code enable.auto.commit=false} and should
 * also not commit offsets manually (via {@link KafkaConsumer#commitSync(Map) sync} or
 * {@link KafkaConsumer#commitAsync(Map, OffsetCommitCallback) async} commits).
 * This method will raise {@link TimeoutException} if the producer cannot send offsets before expiration of {@code max.block.ms}.
 * Additionally, it will raise {@link InterruptException} if interrupted.
 *
 * @throws IllegalStateException if no transactional.id has been configured or no transaction has been started.
 * @throws ProducerFencedException fatal error indicating another producer with the same transactional.id is active
 * @throws org.apache.kafka.common.errors.UnsupportedVersionException fatal error indicating the broker
 *         does not support transactions (i.e. if its version is lower than 0.11.0.0) or
 *         the broker doesn't support latest version of transactional API with all consumer group metadata
 *         (i.e. if its version is lower than 2.5.0).
 * @throws org.apache.kafka.common.errors.UnsupportedForMessageFormatException fatal error indicating the message
 *         format used for the offsets topic on the broker does not support transactions
 * @throws org.apache.kafka.common.errors.AuthorizationException fatal error indicating that the configured
 *         transactional.id is not authorized, or the consumer group id is not authorized.
 * @throws org.apache.kafka.clients.consumer.CommitFailedException if the commit failed and cannot be retried
 *         (e.g. if the consumer has been kicked out of the group). Users should handle this by aborting the transaction.
 * @throws org.apache.kafka.common.errors.FencedInstanceIdException if this producer instance gets fenced by broker due to a
 *         mis-configured consumer instance id within group metadata.
 * @throws org.apache.kafka.common.errors.InvalidProducerEpochException if the producer has attempted to produce with an old epoch
 *         to the partition leader. See the exception for more details
 * @throws KafkaException if the producer has encountered a previous fatal or abortable error, or for any
 *         other unexpected error
 * @throws TimeoutException if the time taken for sending offsets has surpassed max.block.ms.
 * @throws InterruptException if the thread is interrupted while blocked
 */
public void sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata> offsets,
        ConsumerGroupMetadata groupMetadata) throws ProducerFencedException {
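For context, a rough consume-transform-produce sketch against the plain clients (the output topic and the uppercase transform are made up; producer is a transactional KafkaProducer and consumer a KafkaConsumer with enable.auto.commit=false, both assumed to be constructed elsewhere). The offset commit goes through the producer as part of the transaction, which is why a ConsumerInterceptor's onCommit never fires in this mode.

producer.initTransactions();
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
    if (records.isEmpty()) {
        continue;
    }
    producer.beginTransaction();
    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
    for (ConsumerRecord<String, String> record : records) {
        // made-up transform and output topic
        producer.send(new ProducerRecord<>("output-topic", record.key(), record.value().toUpperCase()));
        // the committed offset is the offset of the *next* record to consume
        offsets.put(new TopicPartition(record.topic(), record.partition()),
                new OffsetAndMetadata(record.offset() + 1));
    }
    // offsets become part of the transaction; nothing is committed through the consumer
    producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
    producer.commitTransaction();
}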

Spring Apache Kafka onFailure Callback of KafkaTemplate not fired on connection error

I'm experimenting a lot with Apache Kafka in a Spring Boot App at the moment.
My current goal is to write a REST endpoint that takes in some message payload, which will use a KafkaTemplate to send the data to my local Kafka running on port 9092.
This is my producer config:
@Bean
public Map<String, Object> producerConfig() {
    // config settings for creating producers
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, this.bootstrapServers);
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);
    configProps.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 4000);
    configProps.put(ProducerConfig.RETRIES_CONFIG, 0);
    return configProps;
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    // creates a kafka producer
    return new DefaultKafkaProducerFactory<>(producerConfig());
}

@Bean("kafkaTemplate")
public KafkaTemplate<String, String> kafkaTemplate() {
    // template which abstracts sending data to kafka
    return new KafkaTemplate<>(producerFactory());
}
My REST endpoint forwards to a service; the service looks like this:
@Service
public class KafkaSenderService {

    @Qualifier("kafkaTemplate")
    private final KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public KafkaSenderService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendMessageWithCallback(String message, String topicName) {
        // possibility to add callbacks to define what shall happen in the success/error case
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topicName, message);
        future.addCallback(new KafkaSendCallback<String, String>() {
            @Override
            public void onFailure(KafkaProducerException ex) {
                logger.warn("Message could not be delivered. " + ex.getMessage());
            }

            @Override
            public void onSuccess(SendResult<String, String> result) {
                logger.info("Your message was delivered with following offset: " + result.getRecordMetadata().offset());
            }
        });
    }
}
The thing now is: I expect the onFailure() method to get called when the message could not be sent, but this does not seem to work. When I change the bootstrapServers variable in the producer config to localhost:9091 (the wrong port, so no connection should be possible), the producer tries to connect to the broker. It makes several connection attempts, and after 5 seconds a TimeoutException occurs, but the onFailure() method doesn't get called. Is there a way to get the onFailure() method called even if the connection cannot be established?
And by the way, I set the retries count to zero, but the producer still makes a second connection attempt after the first one. This is the log output:
EDIT: it seems like the Kafka producer / KafkaTemplate goes into an infinite loop when the broker is not available. Is that really the intended behaviour?
The KafkaTemplate really does nothing fancy about connections and publishing. Everything is delegated to the KafkaProducer. What you describe here would happen exactly the same way if you used just the plain Kafka client.
See the KafkaProducer.send() JavaDocs:
 * @throws TimeoutException If the record could not be appended to the send buffer due to memory unavailable
 *         or missing metadata within {@code max.block.ms}.
This happens because of the blocking logic in that producer:
/**
 * Wait for cluster metadata including partitions for the given topic to be available.
 * @param topic The topic we want metadata for
 * @param partition A specific partition expected to exist in metadata, or null if there's no preference
 * @param nowMs The current time in ms
 * @param maxWaitMs The maximum time in ms for waiting on the metadata
 * @return The cluster containing topic metadata and the amount of time we waited in ms
 * @throws TimeoutException if metadata could not be refreshed within {@code max.block.ms}
 * @throws KafkaException for all Kafka-related exceptions, including the case where this method is called after producer close
 */
private ClusterAndWaitTime waitOnMetadata(String topic, Integer partition, long nowMs, long maxWaitMs) throws InterruptedException {
Unfortunately this is not explained in the send() JavaDocs, which claim the call is fully asynchronous, but apparently it is not, at least for this metadata part, which has to be available before the record is enqueued for publishing.
That is something we cannot control, and it is not reflected on the returned Future:
try {
    clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), nowMs, maxBlockTimeMs);
} catch (KafkaException e) {
    if (metadata.isClosed())
        throw new KafkaException("Producer closed while send in progress", e);
    throw e;
}
See more info in the Apache Kafka docs on how to adjust the KafkaProducer for this: https://kafka.apache.org/documentation/#theproducer
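Since the TimeoutException from waitOnMetadata is thrown from send() itself rather than delivered to the Future, one workaround (a sketch, not a complete fix) is to also catch the synchronous exception in the service method shown above:

public void sendMessageWithCallback(String message, String topicName) {
    try {
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topicName, message);
        future.addCallback(new KafkaSendCallback<String, String>() {
            @Override
            public void onFailure(KafkaProducerException ex) {
                logger.warn("Message could not be delivered. " + ex.getMessage());
            }

            @Override
            public void onSuccess(SendResult<String, String> result) {
                logger.info("Message delivered with offset: " + result.getRecordMetadata().offset());
            }
        });
    }
    catch (org.apache.kafka.common.KafkaException ex) {
        // covers the TimeoutException raised after max.block.ms when metadata/broker is unavailable;
        // the callback never sees this case because send() fails before a Future is returned
        logger.warn("Message could not be handed over to the producer. " + ex.getMessage());
    }
}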
Question answered inside the discussion on https://github.com/spring-projects/spring-kafka/discussions/2250# for anyone else stumbling across this thread. In short, kafkaTemplate.getProducerFactory().reset(); does the trick.

How to periodically publish a message to activemq spring integration DSL

I installed ActiveMQ and want to periodically, say every 10 seconds, send a message to "my.queue".
I'm struggling to comprehend the Spring Integration DSL language.
I need something like
IntegrationFlows.from(every 5 seconds)
.send(message to "my.queue")
Yes, you can do that with the Spring Integration Java DSL and its IntegrationFlow abstraction. To make a periodic task, you need to use this factory in IntegrationFlows to start the flow:
/**
 * Provides {@link Supplier} as source of messages to the integration flow.
 * which will be triggered by a <b>provided</b>
 * {@link org.springframework.integration.endpoint.SourcePollingChannelAdapter}.
 * @param messageSource the {@link Supplier} to populate.
 * @param endpointConfigurer the {@link Consumer} to provide more options for the
 * {@link org.springframework.integration.config.SourcePollingChannelAdapterFactoryBean}.
 * @param <T> the supplier type.
 * @return new {@link IntegrationFlowBuilder}.
 * @see Supplier
 */
public static <T> IntegrationFlowBuilder fromSupplier(Supplier<T> messageSource,
        Consumer<SourcePollingChannelAdapterSpec> endpointConfigurer) {
The Supplier may return an object you'd like to send as a payload downstream. The second consumer arg can be configured with:
.poller(p -> p.fixedDelay(1000))
This way, every second a message is going to be created from the supplied payload and sent downstream.
To send a message to Active MQ, you need to use the org.springframework.integration.jms.dsl.Jms factory and its method for the respective channel adapter:
/**
 * The factory to produce a {@link JmsOutboundChannelAdapterSpec}.
 * @param connectionFactory the JMS ConnectionFactory to build on
 * @return the {@link JmsOutboundChannelAdapterSpec} instance
 */
public static JmsOutboundChannelAdapterSpec.JmsOutboundChannelSpecTemplateAware outboundAdapter(
        ConnectionFactory connectionFactory) {
The result of this factory has to be used in the DSL callback like:
/**
 * Populate a {@link ServiceActivatingHandler} for the provided
 * {@link MessageHandler} implementation.
 * Can be used as Java 8 Lambda expression:
 * <pre class="code">
 * {@code
 *  .handle(m -> logger.info(m.getPayload())
 * }
 * </pre>
 * @param messageHandler the {@link MessageHandler} to use.
 * @return the current {@link BaseIntegrationFlowDefinition}.
 */
public B handle(MessageHandler messageHandler) {
All the info is present in the docs: https://docs.spring.io/spring-integration/reference/html/dsl.html#java-dsl
Something like this:
@Bean
public IntegrationFlow jmsPeriodicFlow() {
    return IntegrationFlows.fromSupplier(() -> "hello",
                    e -> e.poller(p -> p.fixedDelay(5000)))
            .handle(Jms.outboundAdapter(jmsConnectionFactory())
                    .destination("my.queue"))
            .get();
}

how to set maximum number of connection retries with Spring AMQP

I have a scenario where my RabbitMQ instance is not always available, and I would like to set the maximum number of times a connection retry happens. Is this possible with Spring AMQP?
For example:
@Bean
public ConnectionFactory connectionFactory() {
    CachingConnectionFactory factory = new CachingConnectionFactory();
    factory.setUri("amqp://");
    // try the uri connection 4 times max, then fail if still no connection
    return factory;
}
Message producers will only try to create a connection when you send a message.
Message consumers (container factories) will retry indefinitely.
You can add a ConnectionListener to the connection factory and stop() the listener containers after some number of failures.
@FunctionalInterface
public interface ConnectionListener {

    /**
     * Called when a new connection is established.
     * @param connection the connection.
     */
    void onCreate(Connection connection);

    /**
     * Called when a connection is closed.
     * @param connection the connection.
     * @see #onShutDown(ShutdownSignalException)
     */
    default void onClose(Connection connection) {
    }

    /**
     * Called when a connection is force closed.
     * @param signal the shut down signal.
     * @since 2.0
     */
    default void onShutDown(ShutdownSignalException signal) {
    }

    /**
     * Called when a connection couldn't be established.
     * @param exception the exception thrown.
     * @since 2.2.17
     */
    default void onFailed(Exception exception) {
    }

}
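A rough sketch of that approach (the class name, bean wiring, and the 4-attempt limit are assumptions): count onFailed() callbacks and stop all registered listener containers once the limit is reached.

import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.amqp.rabbit.connection.Connection;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.connection.ConnectionListener;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.context.Lifecycle;
import org.springframework.stereotype.Component;

@Component
public class StopAfterRetriesListener implements ConnectionListener {

    private static final int MAX_FAILURES = 4; // made-up limit

    private final AtomicInteger failures = new AtomicInteger();

    private final RabbitListenerEndpointRegistry registry;

    public StopAfterRetriesListener(ConnectionFactory connectionFactory, RabbitListenerEndpointRegistry registry) {
        this.registry = registry;
        connectionFactory.addConnectionListener(this);
    }

    @Override
    public void onCreate(Connection connection) {
        this.failures.set(0); // reset the counter once a connection succeeds
    }

    @Override
    public void onFailed(Exception exception) {
        if (this.failures.incrementAndGet() >= MAX_FAILURES) {
            // stop the listener containers so they no longer retry the connection
            this.registry.getListenerContainers().forEach(Lifecycle::stop);
        }
    }
}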

How to set Durable Subscriber in DefaultMessageListenerContainer in spring?

The producer of the message is not sending the message as persistent, and when I try to consume the message through a MessageListener and any (runtime) exception occurs, it retries for a specific number of times (the default is 6 on the AMQ side) and the message gets lost.
The reason is that since the producer is not setting the delivery mode to persistent, after a certain number of retry attempts no DLQ is created and the message does not move to a DLQ. Because of this, I lose the message.
My code looks like this:
@Configuration
@PropertySource("classpath:application.properties")
public class ActiveMqJmsConfig {

    @Autowired
    private AbcMessageListener abcMessageListener;

    @Bean
    public DefaultMessageListenerContainer purchaseMsgListenerforAMQ(
            @Qualifier("AMQConnectionFactory") ConnectionFactory amqConFactory) {
        LOG.info("Message listener for purchases from AMQ : Starting");
        DefaultMessageListenerContainer defaultMessageListenerContainer =
                new DefaultMessageListenerContainer();
        defaultMessageListenerContainer.setConnectionFactory(amqConFactory);
        defaultMessageListenerContainer.setMaxConcurrentConsumers(4);
        defaultMessageListenerContainer
                .setDestinationName(purchaseReceivingQueueName);
        defaultMessageListenerContainer
                .setMessageListener(abcMessageListener);
        defaultMessageListenerContainer.setSessionTransacted(true);
        return defaultMessageListenerContainer;
    }

    @Bean
    @Qualifier(value = "AMQConnectionFactory")
    public ConnectionFactory activeMQConnectionFactory() {
        ActiveMQConnectionFactory amqConnectionFactory =
                new ActiveMQConnectionFactory();
        amqConnectionFactory.setBrokerURL("tcp://localhost:61616");
        amqConnectionFactory.setUserName("admin");
        amqConnectionFactory.setPassword("admin");
        return amqConnectionFactory;
    }
}
@Component
public class AbcMessageListener implements MessageListener {

    @Override
    public void onMessage(Message msg) {
        // CODE implementation
    }
}
Problem: by setting the client id at the connection level (Connection.setClientID("String")), we can subscribe as a durable subscriber even though the message is not persistent. By doing this, if the application throws a runtime exception, after a certain number of retry attempts a DLQ will be created for the queue and the message will be moved to the DLQ.
But in DefaultMessageListenerContainer the connection is not exposed to the client; it is maintained by the class itself, as a pool I guess.
How can I achieve a durable subscription with DefaultMessageListenerContainer?
You can set the client id on the container instead:
/**
 * Specify the JMS client ID for a shared Connection created and used
 * by this container.
 * <p>Note that client IDs need to be unique among all active Connections
 * of the underlying JMS provider. Furthermore, a client ID can only be
 * assigned if the original ConnectionFactory hasn't already assigned one.
 * @see javax.jms.Connection#setClientID
 * @see #setConnectionFactory
 */
public void setClientId(@Nullable String clientId) {
    this.clientId = clientId;
}
and
/**
 * Set the name of a durable subscription to create. This method switches
 * to pub-sub domain mode and activates subscription durability as well.
 * <p>The durable subscription name needs to be unique within this client's
 * JMS client id. Default is the class name of the specified message listener.
 * <p>Note: Only 1 concurrent consumer (which is the default of this
 * message listener container) is allowed for each durable subscription,
 * except for a shared durable subscription (which requires JMS 2.0).
 * @see #setPubSubDomain
 * @see #setSubscriptionDurable
 * @see #setSubscriptionShared
 * @see #setClientId
 * @see #setMessageListener
 */
public void setDurableSubscriptionName(@Nullable String durableSubscriptionName) {
    this.subscriptionName = durableSubscriptionName;
    this.subscriptionDurable = (durableSubscriptionName != null);
}
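Applied to the container definition from the question, a sketch could look like this (the client id, subscription name, and topic name are made up). Note that a durable subscription switches the container to pub-sub mode, so the destination becomes a topic, and per the JavaDoc above only one concurrent consumer is allowed unless the subscription is shared:

defaultMessageListenerContainer.setClientId("purchase-listener-client");            // made-up; must be unique on the broker
defaultMessageListenerContainer.setDurableSubscriptionName("purchase-durable-sub"); // made-up subscription name
defaultMessageListenerContainer.setDestinationName("purchase.topic");               // now a topic, not a queue
defaultMessageListenerContainer.setConcurrency("1");                                // durable subscriptions allow a single consumer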
