RabbitMQ queue not created at runtime - spring

I have a simple example with Spring Boot 1.5.22 + AMQP, and the problem is that the queue is not created dynamically at runtime, although it should be.
@Component
class ReceiverComponent {

    @RabbitListener(queues = "spring-boot-queue-2")
    public void receive_2(String content) {
        System.out.println("[ReceiveMsg-2] receive msg: " + content);
    }
}

@Component
class SenderComponent {

    @Autowired
    private AmqpAdmin amqpAdmin;

    // The default implementation of this interface is RabbitTemplate, which
    // currently has only one implementation.
    @Autowired
    private AmqpTemplate amqpTemplate;

    /**
     * send message
     *
     * @param msgContent
     */
    public void send_2(String msgContent) {
        amqpTemplate.convertAndSend(RabbitConfig.SPRING_BOOT_EXCHANGE,
                RabbitConfig.SPRING_BOOT_BIND_KEY, msgContent);
    }
}

@Configuration
class RabbitConfig {

    // Queue name
    public final static String SPRING_BOOT_QUEUE = "spring-boot-queue-2";

    // Exchange name
    public final static String SPRING_BOOT_EXCHANGE = "spring-boot-exchange-2";

    // Binding key
    public static final String SPRING_BOOT_BIND_KEY = "spring-boot-bind-key-2";
}
The error I'm getting is:
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method(reply-code=404, reply-text=NOT_FOUND - no queue 'spring-boot-queue-2' in vhost '/', class-id=50, method-id=10)
Does it have something to do with permissions on RabbitMQ?
The installed version is 3.7.13, and my connection data is:
spring:
  # Configure RabbitMQ
  rabbitmq:
    host: 127.0.0.1
    port: 5672
    username: guest
    password: guest

Can you put:
@Bean
public Queue queue() {
    return new Queue("spring-boot-queue-2");
}
in your class annotated with @Configuration?
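For completeness, here is a minimal sketch of such a configuration that also declares the exchange and the binding, reusing the constants from the question (the exchange type is an assumption; the auto-configured RabbitAdmin declares these beans on the broker once the first connection is opened, e.g. when the @RabbitListener container starts):
@Configuration
class RabbitConfig {

    public final static String SPRING_BOOT_QUEUE = "spring-boot-queue-2";
    public final static String SPRING_BOOT_EXCHANGE = "spring-boot-exchange-2";
    public static final String SPRING_BOOT_BIND_KEY = "spring-boot-bind-key-2";

    // Declaring Queue/Exchange/Binding beans lets RabbitAdmin create them on the broker.
    @Bean
    public Queue queue() {
        return new Queue(SPRING_BOOT_QUEUE, true);
    }

    @Bean
    public DirectExchange exchange() {
        // assumed to be a direct exchange; use TopicExchange/FanoutExchange if that matches your setup
        return new DirectExchange(SPRING_BOOT_EXCHANGE);
    }

    @Bean
    public Binding binding() {
        return BindingBuilder.bind(queue()).to(exchange()).with(SPRING_BOOT_BIND_KEY);
    }
}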

Related

How to intercept message republished to DLQ in Spring Cloud RabbitMQ?

I want to intercept messages that are republished to the DLQ after the retry limit is exhausted; my ultimate goal is to eliminate the x-exception-stacktrace header from those messages.
Config:
spring:
  application:
    name: sandbox
  cloud:
    function:
      definition: rabbitTest1Input
    stream:
      binders:
        rabbitTestBinder1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: localhost:55015
                username: guest
                password: guest
                virtual-host: test
      bindings:
        rabbitTest1Input-in-0:
          binder: rabbitTestBinder1
          consumer:
            max-attempts: 3
          destination: ex1
          group: q1
      rabbit:
        bindings:
          rabbitTest1Input-in-0:
            consumer:
              autoBindDlq: true
              bind-queue: true
              binding-routing-key: q1key
              deadLetterExchange: ex1-DLX
              dlqDeadLetterExchange: ex1
              dlqDeadLetterRoutingKey: q1key_dlq
              dlqTtl: 180000
              prefetch: 5
              queue-name-group-only: true
              republishToDlq: true
              requeueRejected: false
              ttl: 86400000
@Configuration
class ConsumerConfig {

    companion object : KLogging()

    @Bean
    fun rabbitTest1Input(): Consumer<Message<String>> {
        return Consumer {
            logger.info("Received from test1 queue: ${it.payload}")
            throw AmqpRejectAndDontRequeueException("FAILED") // force republishing to DLQ after N retries
        }
    }
}
First I tried to register a @GlobalChannelInterceptor (like here), but since RabbitMessageChannelBinder uses its own private RabbitTemplate instance (not autowired) for republishing (see #getErrorMessageHandler), it doesn't get intercepted.
Then I tried to extend the RabbitMessageChannelBinder class, throwing away the code related to x-exception-stacktrace, and then declaring this extension as a bean:
/**
 * Forked from {@link org.springframework.cloud.stream.binder.rabbit.RabbitMessageChannelBinder} with the goal
 * to eliminate the {@link RepublishMessageRecoverer.X_EXCEPTION_STACKTRACE} header from messages republished to DLQ
 */
class RabbitMessageChannelBinderWithNoStacktraceRepublished
    : RabbitMessageChannelBinder(...)
// and then
@Configuration
@Import(
    RabbitAutoConfiguration::class,
    RabbitServiceAutoConfiguration::class,
    RabbitMessageChannelBinderConfiguration::class,
    PropertyPlaceholderAutoConfiguration::class,
)
@EnableConfigurationProperties(
    RabbitProperties::class,
    RabbitBinderConfigurationProperties::class,
    RabbitExtendedBindingProperties::class
)
class RabbitConfig {

    @Bean
    @Primary
    @Role(BeanDefinition.ROLE_INFRASTRUCTURE)
    @Order(Ordered.HIGHEST_PRECEDENCE)
    fun customRabbitMessageChannelBinder(
        appCtx: ConfigurableApplicationContext,
        ... // required injections
    ): RabbitMessageChannelBinder {
        // remove the original (auto-configured) bean. Explanation is after the code snippet
        val registry = appCtx.autowireCapableBeanFactory as BeanDefinitionRegistry
        registry.removeBeanDefinition("rabbitMessageChannelBinder")
        // ... and replace it with the custom binder. It is initialized exactly the same way as the original bean, but is of the forked class
        return RabbitMessageChannelBinderWithNoStacktraceRepublished(...)
    }
}
But in this case my channel binder doesn't respect the YAML properties (e.g. addresses: localhost:55015) and uses default values (e.g. localhost:5672)
INFO o.s.a.r.c.CachingConnectionFactory - Attempting to connect to: [localhost:5672]
INFO o.s.a.r.l.SimpleMessageListenerContainer - Broker not available; cannot force queue declarations during start: java.net.ConnectException: Connection refused
On the other hand, if I don't remove the original binder from the Spring context, I get the following error:
Caused by: java.lang.IllegalStateException: Multiple binders are available, however neither default nor per-destination binder name is provided. Available binders are [rabbitMessageChannelBinder, customRabbitMessageChannelBinder]
at org.springframework.cloud.stream.binder.DefaultBinderFactory.getBinder(DefaultBinderFactory.java:145)
Could anyone give me a hint on how to solve this problem?
P.S. I use Spring Cloud Stream 3.1.6 and Spring Boot 2.6.6
Disable the binder retry/DLQ configuration (maxAttempts=1, republishToDlq=false, and other dlq related properties).
Add a ListenerContainerCustomizer to add a custom retry advice to the advice chain, with a customized dead letter publishing recoverer.
Manually provision the DLQ using a Queue #Bean.
@SpringBootApplication
public class So72871662Application {

    public static void main(String[] args) {
        SpringApplication.run(So72871662Application.class, args);
    }

    @Bean
    public Consumer<String> input() {
        return str -> {
            System.out.println();
            throw new RuntimeException("test");
        };
    }

    @Bean
    ListenerContainerCustomizer<MessageListenerContainer> customizer(RetryOperationsInterceptor retry) {
        return (cont, dest, grp) -> {
            ((AbstractMessageListenerContainer) cont).setAdviceChain(retry);
        };
    }

    @Bean
    RetryOperationsInterceptor interceptor(MessageRecoverer recoverer) {
        return RetryInterceptorBuilder.stateless()
                .maxAttempts(3)
                .backOffOptions(3_000L, 2.0, 10_000L)
                .recoverer(recoverer)
                .build();
    }

    @Bean
    MessageRecoverer recoverer(RabbitTemplate template) {
        return new RepublishMessageRecoverer(template, "DLX", "errors") {

            @Override
            protected void doSend(@Nullable String exchange, String routingKey, Message message) {
                message.getMessageProperties().getHeaders().remove(RepublishMessageRecoverer.X_EXCEPTION_STACKTRACE);
                super.doSend(exchange, routingKey, message);
            }

        };
    }

    @Bean
    FanoutExchange dlx() {
        return new FanoutExchange("DLX");
    }

    @Bean
    Queue dlq() {
        return new Queue("errors");
    }

    @Bean
    Binding dlqb() {
        return BindingBuilder.bind(dlq()).to(dlx());
    }

}

Netty how to test Handler which uses Remote Address of a client

I have a Netty TCP server with Spring Boot 2.3.1 and the following handler:
@Slf4j
@Component
@RequiredArgsConstructor
@ChannelHandler.Sharable
public class QrReaderProcessingHandler extends ChannelInboundHandlerAdapter {

    private final CarParkPermissionService permissionService;

    private final Gson gson = new Gson();

    private String remoteAddress;

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        ctx.fireChannelActive();
        remoteAddress = ctx.channel().remoteAddress().toString();
        if (log.isDebugEnabled()) {
            log.debug(remoteAddress);
        }
        ctx.writeAndFlush("Your remote address is " + remoteAddress + ".\r\n");
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        log.info("CLIENT_IP: {}", remoteAddress);
        String stringMsg = (String) msg;
        log.info("CLIENT_REQUEST: {}", stringMsg);
        String lowerCaseMsg = stringMsg.toLowerCase();
        if (RequestType.HEARTBEAT.containsName(lowerCaseMsg)) {
            HeartbeatRequest heartbeatRequest = gson.fromJson(stringMsg, HeartbeatRequest.class);
            log.debug("heartbeat request: {}", heartbeatRequest);
            HeartbeatResponse response = HeartbeatResponse.builder()
                    .responseCode("ok")
                    .build();
            ctx.writeAndFlush(response + "\n\r");
        }
    }
}
Request DTO:
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class HeartbeatRequest {
    private String messageID;
}
Response DTO:
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class HeartbeatResponse {
    private String responseCode;
}
The logic is quite simple; I only need to know the IP address of the client.
I need to test it as well.
I have been looking at many resources on testing Netty handlers, such as:
Testing Netty with EmbeddedChannel
How to unit test netty handler
However, it didn't work for me.
With EmbeddedChannel I get the following error: Your remote address is embedded.
Here is the code:
@ActiveProfiles("test")
@RunWith(MockitoJUnitRunner.class)
public class ProcessingHandlerTest_Embedded {

    @Mock
    private PermissionService permissionService;

    private EmbeddedChannel embeddedChannel;

    private final Gson gson = new Gson();

    private ProcessingHandler processingHandler;

    @Before
    public void setUp() {
        processingHandler = new ProcessingHandler(permissionService);
        embeddedChannel = new EmbeddedChannel(processingHandler);
    }

    @Test
    public void testHeartbeatMessage() {
        // given
        HeartbeatRequest heartbeatMessage = HeartbeatRequest.builder()
                .messageID("heartbeat")
                .build();
        HeartbeatResponse response = HeartbeatResponse.builder()
                .responseCode("ok")
                .build();
        String request = gson.toJson(heartbeatMessage).concat("\r\n");
        String expected = gson.toJson(response).concat("\r\n");

        // when
        embeddedChannel.writeInbound(request);

        // then
        Queue<Object> outboundMessages = embeddedChannel.outboundMessages();
        assertEquals(expected, outboundMessages.poll());
    }
}
Output:
22:21:29.062 [main] INFO handler.ProcessingHandler - CLIENT_IP: embedded
22:21:29.062 [main] INFO handler.ProcessingHandler - CLIENT_REQUEST: {"messageID":"heartbeat"}
22:21:29.067 [main] DEBUG handler.ProcessingHandler - heartbeat request: HeartbeatRequest(messageID=heartbeat)
org.junit.ComparisonFailure:
<Click to see difference>
However, I don't know how to do exact testing for such a case.
Here is a snippet from configuration:
@Bean
@SneakyThrows
public InetSocketAddress tcpSocketAddress() {
    // for now, hostname is: localhost/127.0.0.1:9090
    return new InetSocketAddress("localhost", nettyProperties.getTcpPort());

    // for real client devices: A05264/172.28.1.162:9090
    // return new InetSocketAddress(InetAddress.getLocalHost(), nettyProperties.getTcpPort());
}
@Component
@RequiredArgsConstructor
public class QrReaderChannelInitializer extends ChannelInitializer<SocketChannel> {

    private final StringEncoder stringEncoder = new StringEncoder();
    private final StringDecoder stringDecoder = new StringDecoder();

    private final QrReaderProcessingHandler readerServerHandler;
    private final NettyProperties nettyProperties;

    @Override
    protected void initChannel(SocketChannel socketChannel) {
        ChannelPipeline pipeline = socketChannel.pipeline();

        // Add the text line codec combination first
        pipeline.addLast(new DelimiterBasedFrameDecoder(1024 * 1024, Delimiters.lineDelimiter()));
        pipeline.addLast(new ReadTimeoutHandler(nettyProperties.getClientTimeout()));
        pipeline.addLast(stringDecoder);
        pipeline.addLast(stringEncoder);
        pipeline.addLast(readerServerHandler);
    }
}
How can I test a handler that uses the client's IP address?
Two things that could help:
Do not annotate with @ChannelHandler.Sharable if your handler is NOT sharable. This can be misleading. Remove unnecessary state from handlers. In your case you should remove the remoteAddress member variable and ensure that Gson and CarParkPermissionService can be reused and are thread-safe.
"Your remote address is embedded" is NOT an error. It actually is the message written by your handler onto the outbound channel (cf. your channelActive() method)
So it looks like it could work.
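Regarding the first point, here is a minimal sketch of a stateless channelRead() (reusing the handler from the question, with the remote address read from the ChannelHandlerContext instead of being kept in a field):
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    // Read the remote address from the context on every call instead of storing it
    // in a member variable, so the handler keeps no per-connection state.
    String remoteAddress = ctx.channel().remoteAddress().toString();
    log.info("CLIENT_IP: {}", remoteAddress);
    String stringMsg = (String) msg;
    log.info("CLIENT_REQUEST: {}", stringMsg);
    // ... rest of the processing unchanged
}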
EDIT
Following your comments, here are some clarifications regarding the second point. I mean that:
your code making use of EmbeddedChannel is almost correct. There is just a misunderstanding about the expected results (the assert).
To make the unit test pass, you just have to either:
to comment this line in channelActive(): ctx.writeAndFlush("Your remote ...")
or to poll the second message from Queue<Object> outboundMessages in testHeartbeatMessage()
Indeed, when you do this:
// when
embeddedChannel.writeInbound(request);
(1) You actually open the channel once, which fires a channelActive() event. You don't have a log statement in it, but we can see that the variable remoteAddress is not null afterwards, meaning it was assigned in the channelActive() method.
(2) At the end of the channelActive() method, you already send back a message by writing to the channel pipeline, at this line:
ctx.writeAndFlush("Your remote address is " + remoteAddress + ".\r\n");
// In fact, this is the message you see in your failed assertion.
(3) Then the message written by embeddedChannel.writeInbound(request) is received and can be read, which fires a channelRead() event. This time, we see this in your log output:
22:21:29.062 [main] INFO handler.ProcessingHandler - CLIENT_IP: embedded
22:21:29.062 [main] INFO handler.ProcessingHandler - CLIENT_REQUEST: {"messageID":"heartbeat"}
22:21:29.067 [main] DEBUG handler.ProcessingHandler - heartbeat request: HeartbeatRequest(messageID=heartbeat)
(4) At the end of channelRead(ChannelHandlerContext ctx, Object msg), you will then send a second message (the expected one):
HeartbeatResponse response = HeartbeatResponse.builder()
        .responseCode("ok")
        .build();
ctx.writeAndFlush(response + "\n\r");
Therefore, with the following code of your unit test...
Queue<Object> outboundMessages = embeddedChannel.outboundMessages();
assertEquals(expected, outboundMessages.poll());
... you should be able to poll() two messages:
"Your remote address is embedded"
"{ResponseCode":"ok"}
Does it make sense for you?
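To illustrate the second option, the test method from the question could be adjusted to drain both outbound messages. This is only a sketch reusing the names from your test class; the exact content of the second message depends on what channelRead() really writes (it sends response + "\n\r", i.e. the DTO's toString(), not necessarily JSON):
@Test
public void testHeartbeatMessage() {
    // given
    HeartbeatRequest heartbeatMessage = HeartbeatRequest.builder()
            .messageID("heartbeat")
            .build();
    String request = gson.toJson(heartbeatMessage).concat("\r\n");

    // when
    embeddedChannel.writeInbound(request);

    // then
    Queue<Object> outboundMessages = embeddedChannel.outboundMessages();

    // first outbound message: the greeting written in channelActive()
    // (with EmbeddedChannel the remote address renders as "embedded")
    String greeting = (String) outboundMessages.poll();
    assertTrue(greeting.startsWith("Your remote address is embedded"));

    // second outbound message: the heartbeat response written in channelRead()
    Object heartbeatReply = outboundMessages.poll();
    assertNotNull(heartbeatReply);
}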

Spring boot rabbitmq no exchange '"xxxxxxx"' in vhost '/'

I'm writing a simple RabbitMQ producer with Spring Boot 2.2.7.
On the broker side I've set up a direct exchange named samples and a queue named samples.default, and bound them together with a samples.default binding key.
When running the application I get the following error:
Attempting to connect to: [127.0.0.1:5672]
2020-05-14 15:13:39.232 INFO 28393 --- [nio-8080-exec-1] o.s.a.r.c.CachingConnectionFactory : Created new connection: rabbitConnectionFactory#2f860823:0/SimpleConnection#3946e760 [delegate=amqp://open-si#127.0.0.1:5672/, localPort= 34710]
2020-05-14 15:13:39.267 ERROR 28393 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange '"samples"' in vhost '/', class-id=60, method-id=40)
The RabbitMQ server configuration is correct, as I have a Python producer that already puts messages successfully into the "samples.default" queue.
In Spring Boot I'm using Jackson serialization, but I don't think that's the problem here, as I've tested the code without the Jackson serialization configuration and the problem is still the same.
My broker configuration is set both in application.properties:
#spring.rabbitmq.host=localhost
spring.rabbitmq.addresses=127.0.0.1
spring.rabbitmq.port=5672
spring.rabbitmq.username=xxxx
spring.rabbitmq.password=xxxx
broker.exchange = "samples"
broker.routingKey = "samples.default"
Note that using spring.rabbitmq.host doesn't work, as it results in using my internet provider's address!
and in a BrokerConf configuration class:
@Configuration
public class BrokerConf {

    @Bean("publisher")
    MessagePublisher<BaseSample> baseSamplePublisher(RabbitTemplate rabbitTemplate) {
        return new MessagePublisher<BaseSample>(rabbitTemplate);
    }

    @Bean
    public RabbitTemplate rabbitTemplate(final ConnectionFactory connectionFactory) {
        final var rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
        return rabbitTemplate;
    }

    @Bean
    public MessageConverter producerJackson2MessageConverter() {
        return new Jackson2JsonMessageConverter();
    }
}
The publisher base class is:
@Component
public class MessagePublisher<T> {

    private static final Logger log = LoggerFactory.getLogger(MessagePublisher.class);

    private final RabbitTemplate rabbitTemplate;

    public MessagePublisher(RabbitTemplate r) {
        rabbitTemplate = r;
    }

    public void publish(List<BaseSample> messages, String exchange, String routingKey) {
        for (BaseSample message : messages) {
            rabbitTemplate.convertAndSend(exchange, routingKey, message);
        }
    }
}
which I use in a REST controller:
private static final Logger logger = LoggerFactory.getLogger(SamplesController.class);

@Autowired
private MessagePublisher<BaseSample> publisher;

@Value("${broker.exchange}")
private String exchange;

@Value("${broker.routingKey}")
private String routingKey;

@PutMapping(value = "/new", produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<SampleAck> add(@RequestBody List<BaseSample> samples) {
    publisher.publish(samples, exchange, routingKey);
    return ResponseEntity.ok(new SampleAck(samples.size(), new Date()));
}
So the broker connection is OK, but the exchange is not found, even though the RabbitMQ resources exist:
xxxxxx#xxxxxxx:~/factory/udc-collector$ sudo rabbitmqctl list_exchanges
Listing exchanges for vhost / ...
name type
amq.topic topic
amq.rabbitmq.trace topic
amq.match headers
amq.direct direct
amq.fanout fanout
direct
amq.rabbitmq.log topic
amq.headers headers
samples direct
xxxx#xxxxx:~/factory/udc-collector$ sudo rabbitmqctl list_queues
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
name messages
samples.default 2
Any idea?
Thanks in advance.
The error seems quite obvious:
no exchange '"samples"' in vhost
broker.exchange = "samples"
broker.routingKey = "samples.default"
Remove the quotes:
broker.exchange=samples
broker.routingKey=samples.default
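To illustrate why this fails (a hypothetical snippet, not taken from the question): Spring does not strip quotes from .properties values, so the injected strings contain the quote characters themselves:
// with broker.exchange = "samples" in application.properties:
@Value("${broker.exchange}")
private String exchange;      // -> the literal string "samples", quotes included

@Value("${broker.routingKey}")
private String routingKey;    // -> the literal string "samples.default", quotes included

// ... so the template publishes to an exchange literally named "samples" (with quotes),
// which is exactly what the broker reports: NOT_FOUND - no exchange '"samples"' in vhost '/'
rabbitTemplate.convertAndSend(exchange, routingKey, message);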

rabbitmq binding not work with spring-boot

With Spring Boot 1.5.9.RELEASE, code as below:
@Configuration
@EnableRabbit
public class RabbitmqConfig {

    @Autowired
    ConnectionFactory connectionFactory;

    @Bean // with or without this bean, neither works
    public AmqpAdmin amqpAdmin() {
        return new RabbitAdmin(connectionFactory);
    }

    @Bean
    public Queue bbbQueue() {
        return new Queue("bbb");
    }

    @Bean
    public TopicExchange requestExchange() {
        return new TopicExchange("request");
    }

    @Bean
    public Binding bbbBinding() {
        return BindingBuilder.bind(bbbQueue())
                .to(requestExchange())
                .with("*");
    }
}
After the jar starts, there is no error message and no topic exchange shows up on the RabbitMQ management UI (port 15672) exchanges page.
However, with Python code the topic exchange shows up and the binding can be seen on the exchange detail page. Python code as below:
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='10.189.134.47'))
channel = connection.channel()
channel.exchange_declare(exchange='request', exchange_type='topic', durable=True)
result = channel.queue_declare(queue='aaa', durable=True)
queue_name = result.method.queue
channel.queue_bind(exchange='request', routing_key='*', queue=queue_name)

print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r" % body)

channel.basic_consume(callback, queue=queue_name, no_ack=True)
channel.start_consuming()
I just copied your code and it works fine.
NOTE The queue/binding won't be declared until a connection is opened, such as by a listener container that reads from the queue (or sending a message with a RabbitTemplate).
@RabbitListener(queues = "bbb")
public void listen(String in) {
    System.out.println(in);
}
The container must have autoStartup=true (default).
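If you don't want to add a listener, you can also force the declarations programmatically by opening a connection yourself. A minimal sketch, assuming the RabbitAdmin bean from your configuration above and using a CommandLineRunner purely as a convenient hook:
@Bean
public CommandLineRunner declareAmqpResources(RabbitAdmin rabbitAdmin) {
    // initialize() opens a connection and declares all Queue, Exchange and
    // Binding beans found in the application context on the broker.
    return args -> rabbitAdmin.initialize();
}
Sending any message with a RabbitTemplate has the same effect, because it also opens a connection.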

can we batch up groups of 10 message load in mosquitto using spring integration

This is how I have defined my MQTT connection using Spring Integration. I am not sure whether this is possible, but can we set up an MQTT subscriber that only does its work after receiving a batch of 10 messages? Right now the subscriber runs after each published message, as it should.
@Autowired
ConnectorConfig config;

@Bean
public MqttPahoClientFactory mqttClientFactory() {
    DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
    factory.setServerURIs(config.getUrl());
    factory.setUserName(config.getUser());
    factory.setPassword(config.getPass());
    return factory;
}

@Bean
public MessageProducer inbound() {
    MqttPahoMessageDrivenChannelAdapter adapter =
            new MqttPahoMessageDrivenChannelAdapter(config.getClientid(), mqttClientFactory(), "ALERT", "READING");
    adapter.setCompletionTimeout(5000);
    adapter.setConverter(new DefaultPahoMessageConverter());
    adapter.setQos(1);
    adapter.setOutputChannel(mqttRouterChannel());
    return adapter;
}
/** this is the router **/
@MessageEndpoint
public class MessageRouter {

    private final Logger logger = LoggerFactory.getLogger(MessageRouter.class);

    static final String ALERT = "ALERT";
    static final String READING = "READING";

    @Router(inputChannel = "mqttRouterChannel")
    public String route(@Header("mqtt_topic") String topic) {
        String route = null;
        switch (topic) {
            case ALERT:
                logger.info("alert message received");
                route = "alertTransformerChannel";
                break;
            case READING:
                logger.info("reading message received");
                route = "readingTransformerChannel";
                break;
        }
        return route;
    }
}
I need to batch up groups of 10 messages at a time.
That is not the MqttPahoMessageDrivenChannelAdapter's responsibility.
We use the MqttCallback there, with these semantics:
/**
 * @param topic name of the topic on the message was published to
 * @param message the actual message.
 * @throws Exception if a terminal error has occurred, and the client should be
 * shut down.
 */
public void messageArrived(String topic, MqttMessage message) throws Exception;
So we can't batch them on this channel adapter, by the nature of the Paho client.
What we can suggest from the Spring Integration perspective is an Aggregator EIP implementation.
In your case you should add a @ServiceActivator for an AggregatorFactoryBean @Bean before that mqttRouterChannel, i.e. before sending to the router.
That may be as simple as:
@Bean
@ServiceActivator(inputChannel = "mqttAggregatorChannel")
AggregatorFactoryBean mqttAggregator() {
    AggregatorFactoryBean aggregator = new AggregatorFactoryBean();
    aggregator.setProcessorBean(new DefaultAggregatingMessageGroupProcessor());
    aggregator.setCorrelationStrategy(m -> 1);
    aggregator.setReleaseStrategy(new MessageCountReleaseStrategy(10));
    aggregator.setExpireGroupsUponCompletion(true);
    aggregator.setSendPartialResultOnExpiry(true);
    aggregator.setGroupTimeoutExpression(new ValueExpression<>(1000));
    aggregator.setOutputChannelName("mqttRouterChannel");
    return aggregator;
}
See more information in the Reference Manual.
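For the aggregator to see the MQTT traffic, the inbound adapter's output channel would also have to point at the aggregator's input channel instead of the router. Here is a sketch reusing the beans above; mqttAggregatorChannel is an assumed channel name, declared here as a plain DirectChannel:
@Bean
public MessageChannel mqttAggregatorChannel() {
    return new DirectChannel();
}

@Bean
public MessageProducer inbound() {
    MqttPahoMessageDrivenChannelAdapter adapter =
            new MqttPahoMessageDrivenChannelAdapter(config.getClientid(), mqttClientFactory(), "ALERT", "READING");
    adapter.setCompletionTimeout(5000);
    adapter.setConverter(new DefaultPahoMessageConverter());
    adapter.setQos(1);
    // route inbound messages through the aggregator first; it releases groups of 10
    // to mqttRouterChannel via setOutputChannelName(...) in the bean above
    adapter.setOutputChannel(mqttAggregatorChannel());
    return adapter;
}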
