Transactions in Spring Cloud Stream

Problem:
I am trying to read a big file line by line and publish each line as a message to RabbitMQ.
I want to commit to RabbitMQ only at the end of the file. If any record in the file is bad, I want to roll back the messages already published to the queue.
Technologies:
Spring Boot,
Spring Cloud Stream,
RabbitMQ
Could you please help me implement this transactional behavior?
I already know how to read a file and publish to a queue using Spring Cloud Stream.
Edit:
@Transactional
public void sendToQueue(List<Data> dataList) {
    for (Data data : dataList) {
        this.output.send(MessageBuilder.withPayload(data).build());
        counter++; // I can see the message being published to the queue through the management plugin
    }
    LOGGER.debug("message sent to Q2");
}
Here is my config:
spring:
  cloud:
    stream:
      bindings:
        # Q1 input channel
        tpi_q1_input:
          destination: TPI_Q1
          binder: local_rabbit
          content-type: application/json
          group: TPIService
        # Q2 output channel
        tpi_q2_output:
          destination: TPI_Q2
          binder: local_rabbit
          content-type: application/json
          group: TPIService
        # Q2 input channel
        tpi_q2_input:
          destination: TPI_Q2
          binder: local_rabbit
          content-type: application/json
          group: TPIService
      binders:
        local_rabbit:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                username: guest
                password: guest
                virtual-host: /
      rabbit:
        bindings:
          tpi_q2_output:
            producer:
              #autoBindDlq: true
              transacted: true
              #batchingEnabled: true
          tpi_q2_input:
            consumer:
              acknowledgeMode: AUTO
              #autoBindDlq: true
              #recoveryInterval: 5000
              transacted: true
spring.cloud.stream.default-binder: local_rabbit
Java config:
@EnableTransactionManagement
public class QueueConfig {

    @Bean
    public RabbitTransactionManager transactionManager(ConnectionFactory cf) {
        return new RabbitTransactionManager(cf);
    }
}
Receiver:
@StreamListener(JmsQueueConstants.QUEUE_2_INPUT)
@Transactional
public void receiveMessage(Data data) {
    logger.info("Message Received in Q2:");
}

Configure the producer to use transactions: ...producer.transacted=true.
Publish the messages within the scope of a transaction (using the RabbitTransactionManager).
Use normal Spring transaction mechanisms for #2 (the @Transactional annotation or a TransactionTemplate).
The transaction will commit if you exit normally, or roll back if you throw an exception.
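The same can be done programmatically with a TransactionTemplate; a minimal sketch only, assuming the Data type, output channel, and RabbitTransactionManager bean from the question:
@Component
class ProgrammaticSender {

    private final MessageChannel output;
    private final TransactionTemplate transactionTemplate;

    ProgrammaticSender(MessageChannel output, RabbitTransactionManager transactionManager) {
        this.output = output;
        this.transactionTemplate = new TransactionTemplate(transactionManager);
    }

    public void sendAll(List<Data> dataList) {
        // Every send happens inside one transaction; throwing from the callback rolls all of them back.
        this.transactionTemplate.execute(status -> {
            for (Data data : dataList) {
                this.output.send(MessageBuilder.withPayload(data).build());
            }
            return null;
        });
    }
}
The annotation-driven equivalent is shown in the full example below.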
EDIT
Example:
@SpringBootApplication
@EnableBinding(Source.class)
@EnableTransactionManagement
public class So50372319Application {

    public static void main(String[] args) {
        SpringApplication.run(So50372319Application.class, args).close();
    }

    @Bean
    public ApplicationRunner runner(MessageChannel output, RabbitTemplate template, AmqpAdmin admin,
            TransactionalSender sender) {
        admin.deleteQueue("so50372319.group");
        admin.declareQueue(new Queue("so50372319.group"));
        admin.declareBinding(new Binding("so50372319.group", DestinationType.QUEUE, "output", "#", null));
        return args -> {
            sender.send("foo", "bar");
            System.out.println("Received: " + new String(template.receive("so50372319.group", 10_000).getBody()));
            System.out.println("Received: " + new String(template.receive("so50372319.group", 10_000).getBody()));
            try {
                sender.send("baz", "qux");
            }
            catch (RuntimeException e) {
                System.out.println(e.getMessage());
            }
            System.out.println("Received: " + template.receive("so50372319.group", 3_000));
        };
    }

    @Bean
    public RabbitTransactionManager transactionManager(ConnectionFactory cf) {
        return new RabbitTransactionManager(cf);
    }
}

@Component
class TransactionalSender {

    private final MessageChannel output;

    public TransactionalSender(MessageChannel output) {
        this.output = output;
    }

    @Transactional
    public void send(String... data) {
        for (String datum : data) {
            this.output.send(new GenericMessage<>(datum));
            if ("qux".equals(datum)) {
                throw new RuntimeException("fail");
            }
        }
    }
}
and
spring.cloud.stream.bindings.output.destination=output
spring.cloud.stream.rabbit.bindings.output.producer.transacted=true
and the output shows the first pair of messages committed, while the failed transaction rolled back:
Received: foo
Received: bar
fail
Received: null

Related

Cloud Stream not able to track the status of downstream failures

I have written the following code to leverage the Spring Cloud Stream functional approach to consume events from RabbitMQ and publish them to Kafka. I can achieve that primary goal, with one caveat: if the Kafka broker goes down for any reason while the application is running, I get log messages saying the broker is down, but at the same time I want to stop consuming events from RabbitMQ, or route those messages to an exchange or DLQ topic until the broker comes back up. Many sources suggest setting producer sync: true, but in my case that does not help. A lot of people also mention @ServiceActivator(inputChannel = "error-topic") for handling failures on the target channel, but that method is never invoked either. In short, I do not want to lose the messages received from RabbitMQ while Kafka is down for any reason.
application.yml
management:
  health:
    binders:
      enabled: true
    kafka:
      enabled: true
server:
  port: 8081
spring:
  rabbitmq:
    publisher-confirms: true
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      properties:
        max.block.ms: 100
    admin:
      fail-fast: true
  cloud:
    function:
      definition: handle
    stream:
      bindingRetryInterval: 30
      rabbit:
        bindings:
          handle-in-0:
            consumer:
              bindingRoutingKey: MyRoutingKey
              exchangeType: topic
              requeueRejected: true
              acknowledgeMode: AUTO
              # ackMode: MANUAL
              # acknowledge-mode: MANUAL
              # republishToDlq: false
      kafka:
        binder:
          considerDownWhenAnyPartitionHasNoLeader: true
          producer:
            properties:
              max.block.ms: 100
          brokers:
            - localhost
      bindings:
        handle-in-0:
          destination: test_queue
          binder: rabbit
          group: queue
        handle-out-0:
          destination: mytopic
          producer:
            sync: true
            errorChannelEnabled: true
          binder: kafka
      binders:
        error:
          destination: myerror
        rabbit:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                username: guest
                password: guest
                virtual-host: rahul_host
        kafka:
          type: kafka
json:
  cuttoff:
    size:
      limit: 1000
CloudStreamConfig.java
@Configuration
public class CloudStreamConfig {

    private static final Logger log = LoggerFactory.getLogger(CloudStreamConfig.class);

    @Autowired
    ChunkService chunkService;

    @Bean
    public Function<Message<RmaValues>, Collection<Message<RmaValues>>> handle() {
        return rmaValue -> {
            log.info("processor runs : message received with request id : {}", rmaValue.getPayload().getRequestId());
            ArrayList<Message<RmaValues>> msgList = new ArrayList<Message<RmaValues>>();
            try {
                List<RmaValues> dividedJson = chunkService.getDividedJson(rmaValue.getPayload());
                for (RmaValues rmaValues : dividedJson) {
                    msgList.add(MessageBuilder.withPayload(rmaValues).build());
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
            Channel channel = rmaValue.getHeaders().get(AmqpHeaders.CHANNEL, Channel.class);
            Long deliveryTag = rmaValue.getHeaders().get(AmqpHeaders.DELIVERY_TAG, Long.class);
            // try {
            //     channel.basicAck(deliveryTag, false);
            // } catch (IOException e) {
            //     e.printStackTrace();
            // }
            return msgList;
        };
    }

    @ServiceActivator(inputChannel = "error-topic")
    public void errorHandler(ErrorMessage em) {
        log.info("---------------------------------------got error message over errorChannel: {}", em);
        if (null != em.getPayload() && em.getPayload() instanceof KafkaSendFailureException) {
            KafkaSendFailureException kafkaSendFailureException = (KafkaSendFailureException) em.getPayload();
            if (kafkaSendFailureException.getRecord() != null && kafkaSendFailureException.getRecord().value() != null
                    && kafkaSendFailureException.getRecord().value() instanceof byte[]) {
                log.warn("error channel message. Payload {}", new String((byte[]) (kafkaSendFailureException.getRecord().value())));
            }
        }
    }
}
KafkaProducerConfiguration.java
@Configuration
public class KafkaProducerConfiguration {

    @Value(value = "${spring.kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate(producerFactory());
    }
}
RmModelOutputIngestionApplication.java
@SpringBootApplication(scanBasePackages = "com.abb.rm")
public class RmModelOutputIngestionApplication {

    private static final Logger LOGGER = LogManager.getLogger(RmModelOutputIngestionApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(RmModelOutputIngestionApplication.class, args);
    }

    @Bean("objectMapper")
    public ObjectMapper objectMapper() {
        ObjectMapper mapper = new ObjectMapper();
        LOGGER.info("Returning object mapper...");
        return mapper;
    }
}
First, it seems like you are writing a lot of unnecessary code. Why do you define your own ObjectMapper, KafkaTemplate, and ProducerFactory? These are all already provided for you.
You really only need the one function, and possibly an error handler, depending on the error-handling strategy you select, which brings me to the error-handling topic. There are three primary ways of handling errors. The documentation explains them all and provides samples; please read through it and modify your app accordingly, and if something doesn't work or is unclear, feel free to follow up.
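For instance, one of the documented options is to let the RabbitMQ binder dead-letter failed deliveries instead of dropping or endlessly requeueing them; a sketch only, reusing the handle-in-0 binding name from the question:
spring:
  cloud:
    stream:
      rabbit:
        bindings:
          handle-in-0:
            consumer:
              autoBindDlq: true      # provision and bind a DLQ for the consumer queue
              republishToDlq: true   # publish failed messages (with error headers) to that DLQ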

How to intercept message republished to DLQ in Spring Cloud RabbitMQ?

I want to intercept messages that are republished to the DLQ after the retry limit is exhausted; my ultimate goal is to eliminate the x-exception-stacktrace header from those messages.
Config:
spring:
  application:
    name: sandbox
  cloud:
    function:
      definition: rabbitTest1Input
    stream:
      binders:
        rabbitTestBinder1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: localhost:55015
                username: guest
                password: guest
                virtual-host: test
      bindings:
        rabbitTest1Input-in-0:
          binder: rabbitTestBinder1
          consumer:
            max-attempts: 3
          destination: ex1
          group: q1
      rabbit:
        bindings:
          rabbitTest1Input-in-0:
            consumer:
              autoBindDlq: true
              bind-queue: true
              binding-routing-key: q1key
              deadLetterExchange: ex1-DLX
              dlqDeadLetterExchange: ex1
              dlqDeadLetterRoutingKey: q1key_dlq
              dlqTtl: 180000
              prefetch: 5
              queue-name-group-only: true
              republishToDlq: true
              requeueRejected: false
              ttl: 86400000
@Configuration
class ConsumerConfig {

    companion object : KLogging()

    @Bean
    fun rabbitTest1Input(): Consumer<Message<String>> {
        return Consumer {
            logger.info("Received from test1 queue: ${it.payload}")
            throw AmqpRejectAndDontRequeueException("FAILED") // force republishing to DLQ after N retries
        }
    }
}
First I tried to register a @GlobalChannelInterceptor (like here), but since RabbitMessageChannelBinder uses its own private RabbitTemplate instance (not autowired) for republishing (see #getErrorMessageHandler), it doesn't get intercepted.
Then I tried to extend the RabbitMessageChannelBinder class, throwing away the code related to x-exception-stacktrace, and declared this extension as a bean:
/**
 * Forked from {@link org.springframework.cloud.stream.binder.rabbit.RabbitMessageChannelBinder} with the goal
 * to eliminate the {@link RepublishMessageRecoverer.X_EXCEPTION_STACKTRACE} header from messages republished to DLQ
 */
class RabbitMessageChannelBinderWithNoStacktraceRepublished
    : RabbitMessageChannelBinder(...)
// and then
@Configuration
@Import(
    RabbitAutoConfiguration::class,
    RabbitServiceAutoConfiguration::class,
    RabbitMessageChannelBinderConfiguration::class,
    PropertyPlaceholderAutoConfiguration::class,
)
@EnableConfigurationProperties(
    RabbitProperties::class,
    RabbitBinderConfigurationProperties::class,
    RabbitExtendedBindingProperties::class
)
class RabbitConfig {

    @Bean
    @Primary
    @Role(BeanDefinition.ROLE_INFRASTRUCTURE)
    @Order(Ordered.HIGHEST_PRECEDENCE)
    fun customRabbitMessageChannelBinder(
        appCtx: ConfigurableApplicationContext,
        ... // required injections
    ): RabbitMessageChannelBinder {
        // remove the original (auto-configured) bean. Explanation is after the code snippet
        val registry = appCtx.autowireCapableBeanFactory as BeanDefinitionRegistry
        registry.removeBeanDefinition("rabbitMessageChannelBinder")
        // ... and replace it with the custom binder. It is initialized exactly the same way as the original bean, but is of the forked class
        return RabbitMessageChannelBinderWithNoStacktraceRepublished(...)
    }
}
But in this case my channel binder doesn't respect the YAML properties (e.g. addresses: localhost:55015) and uses default values instead (e.g. localhost:5672):
INFO o.s.a.r.c.CachingConnectionFactory - Attempting to connect to: [localhost:5672]
INFO o.s.a.r.l.SimpleMessageListenerContainer - Broker not available; cannot force queue declarations during start: java.net.ConnectException: Connection refused
On the other hand, if I don't remove the original binder from the Spring context, I get the following error:
Caused by: java.lang.IllegalStateException: Multiple binders are available, however neither default nor per-destination binder name is provided. Available binders are [rabbitMessageChannelBinder, customRabbitMessageChannelBinder]
at org.springframework.cloud.stream.binder.DefaultBinderFactory.getBinder(DefaultBinderFactory.java:145)
Could anyone give me a hint on how to solve this problem?
P.S. I use Spring Cloud Stream 3.1.6 and Spring Boot 2.6.6
Disable the binder retry/DLQ configuration (maxAttempts=1, republishToDlq=false, and the other DLQ-related properties; see the properties sketch below).
Add a ListenerContainerCustomizer to add a custom retry advice to the advice chain, with a customized dead letter publishing recoverer.
Manually provision the DLQ using a Queue @Bean.
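A rough sketch of the binding properties from the first step (the binding name input-in-0 is assumed here to match the Consumer<String> bean in the example that follows):
spring:
  cloud:
    stream:
      bindings:
        input-in-0:
          consumer:
            max-attempts: 1          # turn off binder-managed retries
      rabbit:
        bindings:
          input-in-0:
            consumer:
              republishToDlq: false  # the custom recoverer below handles dead-lettering instead
              autoBindDlq: false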
@SpringBootApplication
public class So72871662Application {

    public static void main(String[] args) {
        SpringApplication.run(So72871662Application.class, args);
    }

    @Bean
    public Consumer<String> input() {
        return str -> {
            System.out.println(str);
            throw new RuntimeException("test");
        };
    }

    @Bean
    ListenerContainerCustomizer<MessageListenerContainer> customizer(RetryOperationsInterceptor retry) {
        return (cont, dest, grp) -> {
            ((AbstractMessageListenerContainer) cont).setAdviceChain(retry);
        };
    }

    @Bean
    RetryOperationsInterceptor interceptor(MessageRecoverer recoverer) {
        return RetryInterceptorBuilder.stateless()
                .maxAttempts(3)
                .backOffOptions(3_000L, 2.0, 10_000L)
                .recoverer(recoverer)
                .build();
    }

    @Bean
    MessageRecoverer recoverer(RabbitTemplate template) {
        return new RepublishMessageRecoverer(template, "DLX", "errors") {

            @Override
            protected void doSend(@Nullable String exchange, String routingKey, Message message) {
                message.getMessageProperties().getHeaders().remove(RepublishMessageRecoverer.X_EXCEPTION_STACKTRACE);
                super.doSend(exchange, routingKey, message);
            }
        };
    }

    @Bean
    FanoutExchange dlx() {
        return new FanoutExchange("DLX");
    }

    @Bean
    Queue dlq() {
        return new Queue("errors");
    }

    @Bean
    Binding dlqb() {
        return BindingBuilder.bind(dlq()).to(dlx());
    }
}

Spring Cloud Stream - Testing Functional Producer

I wrote a Spring Cloud Stream producer according to the new functional model introduced with version 3.1.
@EnableAutoConfiguration
@Component
public class Producer {

    private final BlockingQueue<Message<Object>> messageQueue = new LinkedBlockingQueue<>();

    public void produce(int messageId, Object message) {
        Message<Object> toProduce = MessageBuilder
                .withPayload(message)
                .setHeader(PARTITION_KEY, messageId)
                .build();
        messageQueue.offer(toProduce);
    }

    @Bean
    public Supplier<Message<Object>> produceMessage() {
        return () -> messageQueue.poll();
    }
}
I'm able to call the produce(int, Object) method from a REST controller to put data into the BlockingQueue.
The Supplier, annotated with @Bean, is polled by default every second.
This is a snippet of the application.yml:
spring:
  cloud:
    function:
      definition: produceMessage
    stream:
      bindings:
        produceMessage-out-0:
          destination: test-topic
          contentType: application/json
          producer:
            partitionKeyExpression: headers['partitionKey']
            partitionCount: 1
            errorChannelEnabled: true
      ...
      kafka:
        bindings:
          produceMessage-out-0:
            producer:
              configuration:
                retries: 10
                max.in.flight.requests.per.connection: 1
                request.timeout.ms: 20000
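(As a side note, that one-second default can be changed through the standard poller properties; a small sketch, assuming the spring.cloud.stream.poller keys:)
spring:
  cloud:
    stream:
      poller:
        fixed-delay: 100   # poll the Supplier every 100 ms instead of every second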
Finally I wrote this class in order to test my code:
@SpringBootTest
class ProducerTest {

    @Test
    void producerTest() {
        try (ConfigurableApplicationContext context = new SpringApplicationBuilder(
                TestChannelBinderConfiguration.getCompleteConfiguration(Producer.class))
                        .web(WebApplicationType.NONE)
                        .run("--spring.jmx.enabled=false")) {
            OutputDestination output = context.getBean(OutputDestination.class);
            Producer producer = context.getBean(Producer.class);
            producer.produce(1, new MyMessage(1, "Hello Message"));
            Message<byte[]> received = output.receive();
            Assertions.assertNotNull(received);
        }
    }
}
When I run the test, it fails because received is null.
I have read a lot of examples showing that this is the way to test this type of producer.
What am I doing wrong? Can you help me, please?
Thanks
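One detail that may be relevant here (an observation, not part of the original post): the Supplier above is only polled about once per second, while output.receive() with no arguments does not wait, so the test may ask for the message before the poller has produced it. A sketch of an alternative assertion, assuming the timeout overload of OutputDestination.receive:
// Sketch: give the default poller time to invoke the Supplier before asserting.
Message<byte[]> received = output.receive(5_000);
Assertions.assertNotNull(received);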

Problem with Spring Cloud Stream configuration

I'm trying to upgrade the version of a legacy application, and I'm developing the AMQP part with spring-cloud-stream.
I need to listen to a RabbitMQ queue directly, without an exchange (I can't change this).
How can I implement a listener for just one queue?
This is my app-properties.yml:
cloud:
  function:
    definition: inputCollector
  stream:
    default:
      contentType: application/json
      declareExchange: false
    binders:
      rabbitmq:
        type: rabbit
    bindings:
      inputCollector-in-0:
        queueNameGroupOnly: true
        group: collector_result.Collections
        binder: rabbitmq
and my code:
@Configuration
@AllArgsConstructor
public class AnyHandler {

    private static final Logger LOG = LoggerFactory.getLogger(AnyHandler.class);

    private final CollectorService collectorService;

    @Bean
    public Consumer<Event> inputCollector() {
        return user -> {
            LOG.info("event received: {}", user);
            try {
                collectorService.handleCollectorResponse(user);
            } catch (Exception e) {
                LOG.error("Error processing message: " + user);
            }
        };
    }
}
declareExchange: false must be under ...rabbit.defaults... or ...rabbit.bindings.....consumer (see the sketch below):
https://docs.spring.io/spring-cloud-stream-binder-rabbit/docs/3.1.1/reference/html/spring-cloud-stream-binder-rabbit.html#_rabbitmq_consumer_properties
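For example, a sketch of the per-binding placement, using the binding name from the question:
spring:
  cloud:
    stream:
      rabbit:
        bindings:
          inputCollector-in-0:
            consumer:
              declareExchange: false
              # other rabbit-specific consumer properties, e.g. queueNameGroupOnly, belong at this level too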

Unable to send custom header using spring cloud stream kafka

I have two microservices written in Java, using Spring Boot.
I use Kafka, through Spring Cloud Stream Kafka, to send messages between them.
I need to send a custom header, but with no success so far.
I have read and tried most of the things I have found on the internet and in the Spring Cloud Stream documentation, and still I have been unable to make it work.
The result is that I never receive the message in the receiver, because the header cannot be found and cannot be null.
I suspect the header is never written into the message. Right now I am trying to verify this with kafkacat.
Any help will be welcome.
Thanks in advance.
------ information --------------------
Here it is the sender code:
@SendTo("notifications")
public void send(NotificationPayload payload, String eventId) {
    Map<String, Object> headerMap = Collections.singletonMap("EVENT_ID",
            eventId.getBytes(StandardCharsets.UTF_8));
    MessageHeaders headers = new MessageHeaders(headerMap);
    var message = MessageBuilder.createMessage(payload, headers);
    notifications.send(message);
}
Where notifications is a MessageChannel.
Here is the related configuration for the message sender:
spring:
  cloud:
    stream:
      defaultBinder: kafka
      bindings:
        notifications:
          binder: kafka
          destination: notifications
          contentType: application/x-java-object;type=com.types.NotificationPayload
          producer:
            partitionCount: 1
            headerMode: headers
      kafka:
        binder:
          headers: EVENT_ID
I have also tried with headers: "EVENT_ID"
Here is the code for the receiver part:
@StreamListener("notifications")
public void receiveNotif(@Header("EVENT_ID") byte[] eventId,
        @Payload NotificationPayload payload) {
    var eventIdS = new String(eventId, StandardCharsets.UTF_8);
    ...
    // do something with the payload
}
And the configuration for the receiving part:
spring:
  cloud:
    stream:
      kafka:
        bindings:
          notifications:
            consumer:
              headerMode: headers
Versions
<spring-cloud-stream-dependencies.version>Horsham.SR4</spring-cloud-stream-dependencies.version>
<spring-cloud-stream-binder-kafka.version>3.0.4.RELEASE</spring-cloud-stream-binder-kafka.version>
<spring-cloud-schema-registry.version>1.0.4.RELEASE</spring-cloud-schema-registry.version>
<spring-cloud-stream.version>3.0.4.RELEASE</spring-cloud-stream.version>
What version are you using? Describe "can't get it to work" in more detail.
This works fine...
@SpringBootApplication
@EnableBinding(Source.class)
public class So64586916Application {

    public static void main(String[] args) {
        SpringApplication.run(So64586916Application.class, args);
    }

    @InboundChannelAdapter(channel = Source.OUTPUT)
    Message<String> source() {
        return MessageBuilder.withPayload("foo")
                .setHeader("myHeader", "someValue")
                .build();
    }

    @KafkaListener(id = "in", topics = "output")
    void listen(Message<?> in) {
        System.out.println(in);
    }
}
spring.kafka.consumer.auto-offset-reset=earliest
GenericMessage [payload=byte[3], headers={myHeader=someValue, kafka_offset=0, ...
GenericMessage [payload=byte[3], headers={myHeader=someValue, kafka_offset=1, ...
EDIT
I also tested it by sending to the channel directly; again with no problems:
@Autowired
MessageChannel output;

@Bean
public ApplicationRunner runner() {
    return args -> {
        this.output.send(MessageBuilder.withPayload("foo")
                .setHeader("myHeader", "someValue")
                .build());
    };
}
