Declarables and MultiRabbit - Spring

I'm using the spring-multirabbit library:
rabbitmq:
  host: localhost
  port: 5672
  username: guest
  password: guest
multirabbitmq:
  enabled: true
  connections:
    my-rabbitmq:
      host: localhost
      port: 5677
      username: guest
      password: guest
How can I make sure that the "Declarables" act only on a specific RabbitMQ connection and not on all of the declared connections?
@Bean
public Declarables queues(MessagingProperties props) {
    Declarables declarables = /* build declarables...? */
    return declarables;
}

OK, I figured out how to solve it.
Specify the admin on each declarable with the setAdminsThatShouldDeclare method:
@Bean
public Declarables queues(MessagingProperties messagingProperties) {
    return new Declarables(messagingProperties.getBindings().stream().map(b -> {
        Declarable queue = QueueBuilder.nonDurable(b.getQueue()).build();
        queue.setAdminsThatShouldDeclare("my-rabbitmq-admin");
        return queue;
    }).collect(Collectors.toList()));
}
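For context: spring-multirabbit registers a separate RabbitAdmin per configured connection, and the "my-rabbitmq-admin" bean name above follows the library's <connection-name>-admin naming convention (worth verifying against the library version you use). A minimal single-queue sketch of the same idea, with a hypothetical queue name:

import org.springframework.amqp.core.Declarables;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;

@Bean
public Declarables myRabbitOnlyQueue() {
    Queue queue = QueueBuilder.nonDurable("orders-queue").build(); // "orders-queue" is hypothetical
    // Only the admin of the "my-rabbitmq" connection declares this queue;
    // the admins of the other connections skip it.
    queue.setAdminsThatShouldDeclare("my-rabbitmq-admin");
    return new Declarables(queue);
}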

Related

Cloud Stream not able to track the status of downstream failures

I have written the following code to leverage the Spring Cloud Stream functional approach: it consumes events from RabbitMQ and publishes them to Kafka. I am able to achieve the primary goal, but with a caveat: if the Kafka broker goes down for any reason while the application is running, I see logs saying the broker is down, but at the same time I want to stop consuming events from RabbitMQ until the broker comes back up, or route those messages to an exchange or a DLQ topic. I have seen producer sync: true suggested in many places, but in my case it does not help. A lot of people also recommend @ServiceActivator(inputChannel = "error-topic") for handling failures on the target channel, but that method never gets executed. In short, I don't want to lose messages received from RabbitMQ while Kafka is down for any reason.
application.yml
management:
  health:
    binders:
      enabled: true
    kafka:
      enabled: true
server:
  port: 8081
spring:
  rabbitmq:
    publisher-confirms: true
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      properties:
        max.block.ms: 100
    admin:
      fail-fast: true
  cloud:
    function:
      definition: handle
    stream:
      bindingRetryInterval: 30
      rabbit:
        bindings:
          handle-in-0:
            consumer:
              bindingRoutingKey: MyRoutingKey
              exchangeType: topic
              requeueRejected: true
              acknowledgeMode: AUTO
              # ackMode: MANUAL
              # acknowledge-mode: MANUAL
              # republishToDlq: false
      kafka:
        binder:
          considerDownWhenAnyPartitionHasNoLeader: true
          producer:
            properties:
              max.block.ms: 100
          brokers:
            - localhost
      bindings:
        handle-in-0:
          destination: test_queue
          binder: rabbit
          group: queue
        handle-out-0:
          destination: mytopic
          producer:
            sync: true
            errorChannelEnabled: true
          binder: kafka
      binders:
        error:
          destination: myerror
        rabbit:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                username: guest
                password: guest
                virtual-host: rahul_host
        kafka:
          type: kafka
json:
  cuttoff:
    size:
      limit: 1000
CloudStreamConfig.java
@Configuration
public class CloudStreamConfig {

    private static final Logger log = LoggerFactory.getLogger(CloudStreamConfig.class);

    @Autowired
    ChunkService chunkService;

    @Bean
    public Function<Message<RmaValues>, Collection<Message<RmaValues>>> handle() {
        return rmaValue -> {
            log.info("processor runs : message received with request id : {}", rmaValue.getPayload().getRequestId());
            ArrayList<Message<RmaValues>> msgList = new ArrayList<Message<RmaValues>>();
            try {
                List<RmaValues> dividedJson = chunkService.getDividedJson(rmaValue.getPayload());
                for (RmaValues rmaValues : dividedJson) {
                    msgList.add(MessageBuilder.withPayload(rmaValues).build());
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
            Channel channel = rmaValue.getHeaders().get(AmqpHeaders.CHANNEL, Channel.class);
            Long deliveryTag = rmaValue.getHeaders().get(AmqpHeaders.DELIVERY_TAG, Long.class);
            // try {
            //     channel.basicAck(deliveryTag, false);
            // } catch (IOException e) {
            //     e.printStackTrace();
            // }
            return msgList;
        };
    }

    @ServiceActivator(inputChannel = "error-topic")
    public void errorHandler(ErrorMessage em) {
        log.info("---------------------------------------got error message over errorChannel: {}", em);
        if (null != em.getPayload() && em.getPayload() instanceof KafkaSendFailureException) {
            KafkaSendFailureException kafkaSendFailureException = (KafkaSendFailureException) em.getPayload();
            if (kafkaSendFailureException.getRecord() != null && kafkaSendFailureException.getRecord().value() != null
                    && kafkaSendFailureException.getRecord().value() instanceof byte[]) {
                log.warn("error channel message. Payload {}", new String((byte[]) (kafkaSendFailureException.getRecord().value())));
            }
        }
    }
}
KafkaProducerConfiguration.java
@Configuration
public class KafkaProducerConfiguration {

    @Value(value = "${spring.kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate(producerFactory());
    }
}
RmModelOutputIngestionApplication.java
@SpringBootApplication(scanBasePackages = "com.abb.rm")
public class RmModelOutputIngestionApplication {

    private static final Logger LOGGER = LogManager.getLogger(RmModelOutputIngestionApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(RmModelOutputIngestionApplication.class, args);
    }

    @Bean("objectMapper")
    public ObjectMapper objectMapper() {
        ObjectMapper mapper = new ObjectMapper();
        LOGGER.info("Returning object mapper...");
        return mapper;
    }
}
First, it seems like you are creating a lot of unnecessary code. Why do you have an ObjectMapper? Why do you have a KafkaTemplate? Why do you have a ProducerFactory? These are all already provided for you.
You really only need the one function and possibly an error handler, depending on the error-handling strategy you select, which brings me to the error-handling topic. There are three primary ways of handling errors; the Spring Cloud Stream documentation explains them all and provides samples. Please read through it and modify your app accordingly, and if something doesn't work or is unclear, feel free to follow up.
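As a rough illustration of one of those options: async producer send failures (you already set errorChannelEnabled: true on handle-out-0) go to a channel conventionally named <destination>.errors, not an arbitrary name like error-topic, which would explain why your handler never fires. A sketch, assuming that convention holds for your binder version, added to CloudStreamConfig (which already has a log field):

@ServiceActivator(inputChannel = "mytopic.errors")
public void handleSendFailure(ErrorMessage errorMessage) {
    // The payload is typically a KafkaSendFailureException carrying the failed
    // ProducerRecord; decide here whether to log, park, or re-publish it.
    log.error("Kafka send failed: {}", errorMessage);
}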

Problem with Spring Cloud Stream configuration

I'm trying to upgrade the version of a legacy application, and I'm developing the AMQP part with spring-cloud-stream.
I need to listen to a RabbitMQ queue that has no exchange bound to it (I can't change this).
How can I implement a listener for just one queue?
This is my app-properties.yml
cloud:
  function:
    definition: inputCollector
  stream:
    default:
      contentType: application/json
      declareExchange: false
    binders:
      rabbitmq:
        type: rabbit
    bindings:
      inputCollector-in-0:
        queueNameGroupOnly: true
        group: collector_result.Collections
        binder: rabbitmq
and my code
@Configuration
@AllArgsConstructor
public class AnyHandler {

    private static final Logger LOG = LoggerFactory.getLogger(AnyHandler.class);

    private final CollectorService collectorService;

    @Bean
    public Consumer<Event> inputCollector() {
        return user -> {
            LOG.info("event received: {}", user);
            try {
                collectorService.handleCollectorResponse(user);
            } catch (Exception e) {
                LOG.error("Error processing message: " + user);
            }
        };
    }
}
declareExchange: false is a RabbitMQ binder property, so it must be under ...rabbit.default... or ...rabbit.bindings.<binding>.consumer, not under the generic stream defaults.
https://docs.spring.io/spring-cloud-stream-binder-rabbit/docs/3.1.1/reference/html/spring-cloud-stream-binder-rabbit.html#_rabbitmq_consumer_properties
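For example, a sketch of the per-binding placement, reusing the binding name from the question:

spring:
  cloud:
    stream:
      rabbit:
        bindings:
          inputCollector-in-0:
            consumer:
              declareExchange: false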

Spring Data Cassandra - error while opening new channel

I have a problem with Cassandra's connection with spring-data. When Cassandra is running locally I have no problem connecting, but when I run my spring-boot app in k8s with an external Cassandra I am stuck on this WARN:
2020-07-24 10:26:32.398 WARN 6 --- [ s0-admin-0] c.d.o.d.internal.core.pool.ChannelPool : [s0|/127.0.0.1:9042] Error while opening new channel (ConnectionInitException: [s0|connecting...] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.7.2, CLIENT_ID=9679ee85-ff39-45b6-8573-62a8d827ec9e}): failed to send request (java.nio.channels.ClosedChannelException))
I don't understand why in the log I have [s0|/127.0.0.1:9042] instead of the IP of my contact points.
Spring configuration:
spring:
  data:
    cassandra:
      keyspace-name: event_store
      local-datacenter: datacenter1
      contact-points: host1:9042,host2:9042
This WARN does not prevent spring-boot from starting, but if I run a query in a service I get this error:
{ error: "Query; CQL [com.datastax.oss.driver.internal.core.cql.DefaultSimpleStatement#9463dccc]; No node was available to execute the query; nested exception is com.datastax.oss.driver.api.core.NoNodeAvailableException: No node was available to execute the query" }
Option 1: test your yml file like this, with the port as a separate property (have you tried with IP addresses?). The [s0|/127.0.0.1:9042] in the log suggests your configured contact points were not picked up, so the driver fell back to its default of 127.0.0.1:9042.
data:
  cassandra:
    keyspace-name: event_store
    local-datacenter: datacenter1
    port: 9042
    contact-points: host1,host2
    username: cassandra
    password: cassandra
Option 2: create new properties in your yml and then a configuration class:
cassandra:
  database:
    keyspace-name: event_store
    contact-points: host1, host2
    port: 9042
    username: cassandra
    password: cassandra
@Configuration
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Value("${cassandra.database.keyspace-name}")
    private String keySpace;

    @Value("${cassandra.database.contact-points}")
    private String contactPoints;

    @Value("${cassandra.database.port}")
    private int port;

    @Value("${cassandra.database.username}")
    private String userName;

    @Value("${cassandra.database.password}")
    private String password;

    @Override
    protected String getKeyspaceName() {
        return keySpace;
    }

    @Bean
    public CassandraMappingContext cassandraMapping() throws ClassNotFoundException {
        CassandraMappingContext context = new CassandraMappingContext();
        context.setUserTypeResolver(new SimpleUserTypeResolver(cluster().getObject(), keySpace));
        return context;
    }

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = super.cluster();
        cluster.setUsername(userName);
        cluster.setPassword(password);
        cluster.setContactPoints(contactPoints);
        cluster.setPort(port);
        return cluster;
    }

    @Override
    protected boolean getMetricsEnabled() {
        return false;
    }
}

RabbitMQ queue not created at runtime

I have a simple example of spring boot 1.5.22 + amqp, and the problem is that the queue is not getting created dynamically, though it should be.
@Component
class ReceiverComponent {

    @RabbitListener(queues = "spring-boot-queue-2")
    public void receive_2(String content) {
        System.out.println("[ReceiveMsg-2] receive msg: " + content);
    }
}

@Component
class SenderComponent {

    @Autowired
    private AmqpAdmin amqpAdmin;

    // The default implementation of this interface is RabbitTemplate, which
    // currently has only one implementation.
    @Autowired
    private AmqpTemplate amqpTemplate;

    /**
     * send message
     *
     * @param msgContent
     */
    public void send_2(String msgContent) {
        amqpTemplate.convertAndSend(RabbitConfig.SPRING_BOOT_EXCHANGE,
                RabbitConfig.SPRING_BOOT_BIND_KEY, msgContent);
    }
}

@Configuration
class RabbitConfig {

    // Queue name
    public final static String SPRING_BOOT_QUEUE = "spring-boot-queue-2";

    // Exchange name
    public final static String SPRING_BOOT_EXCHANGE = "spring-boot-exchange-2";

    // Binding key
    public static final String SPRING_BOOT_BIND_KEY = "spring-boot-bind-key-2";
}
The error I'm getting is:
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method(reply-code=404, reply-text=NOT_FOUND - no queue 'spring-boot-queue-2' in vhost '/', class-id=50, method-id=10)
Does it have something to do with permissions on the RabbitMQ side? The installed version is 3.7.13, and my connection config is:
spring:
  # Configure RabbitMQ
  rabbitmq:
    host: 127.0.0.1
    port: 5672
    username: guest
    password: guest
Can you put:
@Bean
public Queue queue() {
    return new Queue("spring-boot-queue-2");
}
in your class annotated with @Configuration?
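Also, since send_2 publishes to an exchange with a routing key, the exchange and the binding likely need to be declared as beans too. A sketch reusing the constants from RabbitConfig (the exchange type is an assumption; the original code doesn't say):

@Bean
public TopicExchange exchange() {
    // Exchange type assumed to be topic
    return new TopicExchange(RabbitConfig.SPRING_BOOT_EXCHANGE);
}

@Bean
public Binding binding(Queue queue, TopicExchange exchange) {
    // Bind the queue to the exchange with the routing key the sender uses,
    // so RabbitAdmin declares all three on the first connection.
    return BindingBuilder.bind(queue).to(exchange).with(RabbitConfig.SPRING_BOOT_BIND_KEY);
}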

Transaction in Spring cloud Stream

Problem:
I am trying to read a big file line by line and put each line as a message in RabbitMQ.
I want to commit to RabbitMQ at the end of the file. If any record in the file is bad, then I want to roll back the messages published to the queue.
Technologies:
Spring Boot,
Spring Cloud Stream,
RabbitMQ
Could you please help me implement this transaction behavior?
I know how to read a file and publish to a queue using Spring Cloud Stream.
Edit:
@Transactional
public void sendToQueue(List<Data> dataList) {
    for (Data data : dataList) {
        this.output.send(MessageBuilder.withPayload(data).build());
        counter++; // I can see messages getting published in the queue through the management plugin
    }
    LOGGER.debug("message sent to Q2");
}
Here is my config:
spring:
  cloud:
    stream:
      bindings:
        # Q1 input channel
        tpi_q1_input:
          destination: TPI_Q1
          binder: local_rabbit
          content-type: application/json
          group: TPIService
        # Q2 output channel
        tpi_q2_output:
          destination: TPI_Q2
          binder: local_rabbit
          content-type: application/json
          group: TPIService
        # Q2 input channel
        tpi_q2_input:
          destination: TPI_Q2
          binder: local_rabbit
          content-type: application/json
          group: TPIService
      binders:
        local_rabbit:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                username: guest
                password: guest
                virtual-host: /
      rabbit:
        bindings:
          tpi_q2_output:
            producer:
              #autoBindDlq: true
              transacted: true
              #batchingEnabled: true
          tpi_q2_input:
            consumer:
              acknowledgeMode: AUTO
              #autoBindDlq: true
              #recoveryInterval: 5000
              transacted: true
spring.cloud.stream.default-binder: local_rabbit
Java config
@EnableTransactionManagement
public class QueueConfig {

    @Bean
    public RabbitTransactionManager transactionManager(ConnectionFactory cf) {
        return new RabbitTransactionManager(cf);
    }
}
Receiver
@StreamListener(JmsQueueConstants.QUEUE_2_INPUT)
@Transactional
public void receiveMessage(Data data) {
    logger.info("Message Received in Q2:");
}
Configure the producer to use transactions: ...producer.transacted=true.
Publish the messages within the scope of a transaction (using the RabbitTransactionManager).
Use normal Spring transaction mechanisms for the second step (the @Transactional annotation or a TransactionTemplate).
The transaction will commit if you exit normally, or roll back if you throw an exception.
EDIT
Example:
@SpringBootApplication
@EnableBinding(Source.class)
@EnableTransactionManagement
public class So50372319Application {

    public static void main(String[] args) {
        SpringApplication.run(So50372319Application.class, args).close();
    }

    @Bean
    public ApplicationRunner runner(MessageChannel output, RabbitTemplate template, AmqpAdmin admin,
            TransactionalSender sender) {
        admin.deleteQueue("so50372319.group");
        admin.declareQueue(new Queue("so50372319.group"));
        admin.declareBinding(new Binding("so50372319.group", DestinationType.QUEUE, "output", "#", null));
        return args -> {
            sender.send("foo", "bar");
            System.out.println("Received: " + new String(template.receive("so50372319.group", 10_000).getBody()));
            System.out.println("Received: " + new String(template.receive("so50372319.group", 10_000).getBody()));
            try {
                sender.send("baz", "qux");
            }
            catch (RuntimeException e) {
                System.out.println(e.getMessage());
            }
            System.out.println("Received: " + template.receive("so50372319.group", 3_000));
        };
    }

    @Bean
    public RabbitTransactionManager transactionManager(ConnectionFactory cf) {
        return new RabbitTransactionManager(cf);
    }
}

@Component
class TransactionalSender {

    private final MessageChannel output;

    public TransactionalSender(MessageChannel output) {
        this.output = output;
    }

    @Transactional
    public void send(String... data) {
        for (String datum : data) {
            this.output.send(new GenericMessage<>(datum));
            if ("qux".equals(datum)) {
                throw new RuntimeException("fail");
            }
        }
    }
}
and
spring.cloud.stream.bindings.output.destination=output
spring.cloud.stream.rabbit.bindings.output.producer.transacted=true
and the output; the first transaction commits and both messages arrive, while the second one throws, rolls back, and delivers nothing:
Received: foo
Received: bar
fail
Received: null
