I have a number of microservices, each running in its own container in a load-balanced environment. Each instance of these microservices needs to create a RabbitMQ queue when it starts up and delete it when it stops. I currently have the following property defined in my application properties file:
config_queue: config_${PID}
My message queue listener looks like this:
public class ConfigListener {

    Logger logger = LoggerFactory.getLogger(ConfigListener.class);

    // https://www.programcreek.com/java-api-examples/index.php?api=org.springframework.amqp.rabbit.annotation.RabbitListener
    @RabbitListener(bindings = @QueueBinding(
            value = @Queue(value = "${config_queue}",
                    autoDelete = "true"),
            exchange = @Exchange(value = AppConstants.TOPIC_CONFIGURATION,
                    type = ExchangeTypes.FANOUT)
    ))
    public void configChanged(String message) {
        // ... application logic
    }
}
All this works great when I run the microservice. A queue named with the config prefix and the process id gets created and is auto-deleted when I stop the service.
However, when I run this service and others in their individual Docker containers, all services have the same PID, and that is 1.
Does anybody have any idea how I can specify a queue that is unique to each instance?
Thanks in advance for your help.
Use an AnonymousQueue instead:
@SpringBootApplication
public class So72030217Application {

    public static void main(String[] args) {
        SpringApplication.run(So72030217Application.class, args);
    }

    @RabbitListener(queues = "#{configQueue.name}")
    public void listen(String in) {
        System.out.println(in);
    }
}
@Configuration
class Config {

    @Bean
    FanoutExchange fanout() {
        return new FanoutExchange("config");
    }

    @Bean
    Queue configQueue() {
        return new AnonymousQueue(new Base64UrlNamingStrategy("config_"));
    }

    @Bean
    Binding binding() {
        return BindingBuilder.bind(configQueue()).to(fanout());
    }
}
AnonymousQueues are auto-delete and use a Base64url-encoded UUID in the name.
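For example, a minimal sketch of what the generated name looks like (the suffix is a fresh UUID on every application start, so the exact value below is illustrative only):

Queue queue = new AnonymousQueue(new Base64UrlNamingStrategy("config_"));
System.out.println(queue.getName()); // e.g. config_Gvk4hR2rQ7WJd9qEb1d0cw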
I would like to use the java.util.function.Function approach to reply to a request sent via RabbitTemplate.convertSendAndReceive. It works fine with the RabbitListener, but I cannot get it working with the functional approach.
Client (working)
class Client(private val template: RabbitTemplate) {

    fun send() = template.convertSendAndReceive(
        "rpc-exchange",
        "rpc-routing-key",
        "payload message"
    )
}
Server (approach 1, working)
class Server {

    @RabbitListener(queues = ["rpc-queue"])
    fun receiveRequest(message: String) = "Response Message"

    @Bean
    fun queue(): Queue {
        return Queue("rpc-queue")
    }

    @Bean
    fun exchange(): DirectExchange {
        return DirectExchange("rpc-exchange")
    }

    @Bean
    fun binding(exchange: DirectExchange, queue: Queue): Binding {
        return BindingBuilder.bind(queue).to(exchange).with("rpc-routing-key")
    }
}
Server (approach 2, not working) --> goal
class Server {

    @Bean
    fun receiveRequest(): Function<String, String> {
        return Function { value: String ->
            "Response Message"
        }
    }
}
With the config (approach 2):
spring.cloud.function.definition: receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination: rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group: rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey: rpc-routing-key
With approach 2 the server receives the request, but unfortunately the response is lost. Does anybody know how to use the RPC pattern with the functional approach? I don't want to use the RabbitListener.
See documentation/tutorial.
Spring Cloud Stream is not really designed for RPC on the server side, so it won't handle this automatically like @RabbitListener does.
You can, however, achieve it by adding an output binding to route the reply to the default exchange and the replyTo header:
spring.cloud.function.definition=receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination=rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group=rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey=rpc-routing-key
spring.cloud.stream.bindings.receiveRequest-out-0.destination=
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression=headers['amqp_replyTo']
#logging.level.org.springframework.amqp=debug
@SpringBootApplication
public class So66586230Application {

    public static void main(String[] args) {
        SpringApplication.run(So66586230Application.class, args);
    }

    @Bean
    Function<String, String> receiveRequest() {
        return str -> str.toUpperCase();
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template) {
        return args -> {
            System.out.println(new String((byte[]) template.convertSendAndReceive(
                    "rpc-exchange",
                    "rpc-routing-key",
                    "payload message")));
        };
    }
}
PAYLOAD MESSAGE
Note that the reply will come as a byte[]; you can use a custom message converter on the template to convert to String.
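For example, a minimal sketch of that converter idea (this override is an assumption for illustration, not part of the original answer; it treats every reply body as UTF-8 text):

template.setMessageConverter(new SimpleMessageConverter() {

    @Override
    public Object fromMessage(Message message) throws MessageConversionException {
        // return the raw body as a String instead of byte[]
        return new String(message.getBody(), StandardCharsets.UTF_8);
    }
});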
EDIT
In reply to the third comment below.
The RabbitTemplate uses direct reply-to by default, so the reply address is not a real queue; it is a pseudo-queue created by the broker and associated with a consumer in the template.
You can also configure the template to use temporary reply queues, but those are likewise routed to via the default exchange ("").
You can, however, configure an external reply container, with the template as the listener.
You can then route back using whatever exchange and routing key you want.
Putting it all together:
spring.cloud.function.definition=receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination=rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group=rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey=rpc-routing-key
spring.cloud.stream.bindings.receiveRequest-out-0.destination=reply-exchange
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression='reply-routing-key'
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.declare-exchange=false
spring.rabbitmq.template.reply-timeout=10000
#logging.level.org.springframework.amqp=debug
@SpringBootApplication
public class So66586230Application {

    public static void main(String[] args) {
        SpringApplication.run(So66586230Application.class, args);
    }

    @Bean
    Function<String, String> receiveRequest() {
        return str -> str.toUpperCase();
    }

    @Bean
    SimpleMessageListenerContainer replyContainer(SimpleRabbitListenerContainerFactory factory,
            RabbitTemplate template) {

        template.setReplyAddress("reply-queue");
        SimpleMessageListenerContainer container = factory.createListenerContainer();
        container.setQueueNames("reply-queue");
        container.setMessageListener(template);
        return container;
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template, SimpleMessageListenerContainer replyContainer) {
        return args -> {
            System.out.println(new String((byte[]) template.convertSendAndReceive(
                    "rpc-exchange",
                    "rpc-routing-key",
                    "payload message")));
        };
    }
}
IMPORTANT: if you have multiple instances of the client side, each needs its own reply queue.
In that case, the routing key must be the queue name and you should revert to the previous example to set the routing key expression (to get the queue name from the header).
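A minimal sketch of that per-instance setup, adapting the replyContainer bean above (reading an INSTANCE_ID environment variable is just an assumed way to obtain a unique suffix):

// each client instance declares and listens on its own reply queue; replies
// are routed back via the amqp_replyTo header, as in the earlier
// routing-key-expression example
String replyQueue = "reply-queue-" + System.getenv("INSTANCE_ID");
template.setReplyAddress(replyQueue);
container.setQueueNames(replyQueue);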
I'm learning Spring Cloud Stream and cannot map a Function<String, String> onto an existing queue.
I'm just building the hello world app from the Spring Cloud documentation, but I don't really understand the part about binding names.
I have q.test (existing) on my RabbitMQ app, but when I use this code and configuration, my app always creates a new queue q.test.anonymous.someRandomString.
Does anybody have a configuration example for this?
@SpringBootApplication
public class CloudstreamApplication {

    public static void main(String[] args) {
        SpringApplication.run(CloudstreamApplication.class, args);
    }

    @Bean
    public Function<String, String> uppercase() {
        return value -> {
            System.out.println("Received: " + value);
            return value.toUpperCase();
        };
    }
}
application.yml
spring.cloud.stream:
  function.bindings:
    uppercase-in-0: q.test
  bindings:
    uppercase-in-0.destination: q.test
Thanks
See the binder documentation - Using Existing Queues/Exchanges.
If you have an existing exchange/queue that you wish to use, you can completely disable automatic provisioning as follows, assuming the exchange is named myExchange and the queue is named myQueue:
spring.cloud.stream.bindings.<binding name>.destination=myExchange
spring.cloud.stream.bindings.<binding name>.group=myQueue
spring.cloud.stream.rabbit.bindings.<binding name>.consumer.bindQueue=false
spring.cloud.stream.rabbit.bindings.<binding name>.consumer.declareExchange=false
spring.cloud.stream.rabbit.bindings.<binding name>.consumer.queueNameGroupOnly=true
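Applied to the question's setup, it would look something like this sketch (existing-exchange is a placeholder for whatever exchange q.test is actually bound to; with queueNameGroupOnly, the group alone supplies the existing queue name):

spring.cloud.stream.bindings.uppercase-in-0.destination=existing-exchange
spring.cloud.stream.bindings.uppercase-in-0.group=q.test
spring.cloud.stream.rabbit.bindings.uppercase-in-0.consumer.bindQueue=false
spring.cloud.stream.rabbit.bindings.uppercase-in-0.consumer.declareExchange=false
spring.cloud.stream.rabbit.bindings.uppercase-in-0.consumer.queueNameGroupOnly=true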
With Spring Boot 1.5.9.RELEASE, code as below:
@Configuration
@EnableRabbit
public class RabbitmqConfig {

    @Autowired
    ConnectionFactory connectionFactory;

    @Bean // with or without this bean, neither works
    public AmqpAdmin amqpAdmin() {
        return new RabbitAdmin(connectionFactory);
    }

    @Bean
    public Queue bbbQueue() {
        return new Queue("bbb");
    }

    @Bean
    public TopicExchange requestExchange() {
        return new TopicExchange("request");
    }

    @Bean
    public Binding bbbBinding() {
        return BindingBuilder.bind(bbbQueue())
                .to(requestExchange())
                .with("*");
    }
}
After the jar starts, there is no error message, and no topic exchange shows up on the RabbitMQ management UI (15672) exchanges page.
However, with Python code, the topic exchange shows up and the binding can be seen on the exchange detail page. Python code as below:
connection = pika.BlockingConnection(pika.ConnectionParameters(host='10.189.134.47'))
channel = connection.channel()

channel.exchange_declare(exchange='request', exchange_type='topic', durable=True)
result = channel.queue_declare(queue='aaa', durable=True)
queue_name = result.method.queue
channel.queue_bind(exchange='request', routing_key='*', queue=queue_name)

print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r" % body)

channel.basic_consume(callback, queue=queue_name, no_ack=True)
channel.start_consuming()
I just copied your code and it works fine.
NOTE: The queue/binding won't be declared until a connection is opened, such as by a listener container that reads from the queue (or by sending a message with a RabbitTemplate).
#RabbitListener(queues = "bbb")
public void listen(String in) {
System.out.println(in);
}
The container must have autoStartup=true (default).
When trying to implement a unit test in a Spring Boot application, I can't retrieve a ConsumerRecord, though a custom serializer using my own POJO is working. I checked it with the kafka-console-consumer, where a new message is generated each and every time I run the test and appears on the console.
What do I have to do to get the record instead of a null?
@RunWith(SpringRunner.class)
@SpringBootTest
@DisplayName("Testing GlobalMessageTest")
@DirtiesContext
public class NumberPlateSenderTest {

    private static Logger log = LogManager.getLogger(NumberPlateSenderTest.class);

    @Autowired
    KafkaeskAdapterApplication kafkaeskAdapterApplication;

    @Autowired
    private NumberPlateSender numberPlateSender;

    private KafkaMessageListenerContainer<String, NumberPlate> container;

    private BlockingQueue<ConsumerRecord<String, NumberPlate>> records;

    private static final String SENDER_TOPIC = "numberplate_test_topic";

    @ClassRule
    public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, SENDER_TOPIC);

    @Before
    public void setUp() throws Exception {
        // set up the Kafka consumer properties
        Map<String, Object> consumerProperties = KafkaTestUtils.consumerProps("sender", "false", embeddedKafka);
        consumerProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProperties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, NumberPlateDeserializer.class);

        // create a Kafka consumer factory
        DefaultKafkaConsumerFactory<String, NumberPlate> consumerFactory =
                new DefaultKafkaConsumerFactory<>(consumerProperties);

        // set the topic that needs to be consumed
        ContainerProperties containerProperties = new ContainerProperties(SENDER_TOPIC);

        // create a Kafka MessageListenerContainer
        container = new KafkaMessageListenerContainer<>(consumerFactory, containerProperties);

        // create a thread-safe queue to store the received messages
        records = new LinkedBlockingQueue<>();

        // set up a Kafka message listener
        container.setupMessageListener((MessageListener<String, NumberPlate>) record -> {
            log.info("Message Listener received message='{}'", record.toString());
            records.add(record);
        });

        // start the container and underlying message listener
        container.start();

        // wait until the container has the required number of assigned partitions
        ContainerTestUtils.waitForAssignment(container, embeddedKafka.getPartitionsPerTopic());
    }

    @DisplayName("Should send a Message to a Producer and retrieve it")
    @Test
    public void TestProducer() throws InterruptedException {
        // test instance of NumberPlate to send
        NumberPlate localNumberplate = new NumberPlate();
        byte[] bytes = "0x33".getBytes();
        localNumberplate.setImageBlob(bytes);
        localNumberplate.setNumberString("ABC123");
        log.info(localNumberplate.toString());

        // send it
        numberPlateSender.sendNumberPlateMessage(localNumberplate);

        // retrieve it
        ConsumerRecord<String, NumberPlate> received = records.poll(3, TimeUnit.SECONDS);
        log.info("Received the following content of ConsumerRecord: {}", received);

        if (received == null) {
            assert false;
        } else {
            NumberPlate retrNumberplate = received.value();
            Assert.assertEquals(retrNumberplate, localNumberplate);
        }
    }

    @After
    public void tearDown() {
        // stop the container
        container.stop();
    }
}
The complete code can be seen in my GitHub repository.
I have read a load of different SO questions and searched the web, but can't figure out what is wrong with my code. Other users posted similar problems, but to no avail.
The Kafka version running on my Craptop is kafka_2.11-1.0.1.
The Spring Kafka client is version 2.1.5.RELEASE.
Your problem is that you start the consumer against the embedded Kafka but send data to the real one. I don't know what your goal is, but I made it work against the embedded Kafka like this:
@BeforeClass
public static void setup() {
    System.setProperty("kafka.bootstrapAddress", embeddedKafka.getBrokersAsString());
}
I override your kafka.bootstrapAddress configuration property for the producer with the broker address provided by the embedded Kafka.
In this case the test fails with:
java.lang.AssertionError: expected: dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}> but was: dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}>
Expected :dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}>
Actual :dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}>
But that's just because you use this assertion:
Assert.assertEquals(retrNumberplate, localNumberplate);
The problem is that your NumberPlate doesn't provide a proper equals() implementation. Add something like this:
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;
    NumberPlate that = (NumberPlate) o;
    return Objects.equals(numberString, that.numberString) &&
            Arrays.equals(imageBlob, that.imageBlob);
}

@Override
public int hashCode() {
    int result = Objects.hash(numberString);
    result = 31 * result + Arrays.hashCode(imageBlob);
    return result;
}
Thank you for providing the whole project to play with and reproduce the issue! With the "question-answer-question-answer" game we would have spent too much time here :-).
I am trying to implement re-routing of dead-lettered messages as described in this answer. I am using Spring config. I have no idea how to read the headers to get the original routing key and original queue. The following is my config:
@Configuration
public class NotifEngineRabbitMQConfig {

    @Bean
    public MessageHandler handler() {
        return new MessageHandler();
    }

    @Bean
    public Jackson2JsonMessageConverter messageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    public MessageListenerAdapter messageListenerAdapter() {
        return new MessageListenerAdapter(handler(), messageConverter());
    }

    /**
     * Listens for incoming messages.
     * Allows multiple queues to listen to.
     */
    @Bean
    public SimpleMessageListenerContainer simpleMessageListenerContainer() {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.addQueueNames(QUEUE_TO_LISTEN_TO.split(","));
        container.setMessageListener(messageListenerAdapter());
        container.setConnectionFactory(rabbitConnectionFactory());
        container.setDefaultRequeueRejected(false);
        return container;
    }

    @Bean
    public ConnectionFactory rabbitConnectionFactory() {
        CachingConnectionFactory factory = new CachingConnectionFactory(HOST);
        factory.setUsername(USERNAME);
        factory.setPassword(PASSWORD);
        return factory;
    }
}
The headers are not available when using "old"-style POJO messaging (with a MessageListenerAdapter). You need to implement MessageListener, which gives you access to the headers.
However, you will need to invoke the converter yourself in that case and, if you are using request/reply messaging, you lose the reply mechanism within the adapter and you have to send the reply yourself.
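For illustration, a minimal sketch of that approach, reusing the container and converter beans from the config above (the final handler call is left as a comment because it depends on your application):

container.setMessageListener((MessageListener) message -> {
    // the raw Message exposes all headers, including x-death
    Map<String, Object> headers = message.getMessageProperties().getHeaders();
    Object xDeath = headers.get("x-death"); // original queue and routing keys live in here
    Object payload = messageConverter().fromMessage(message); // invoke the converter yourself
    // hand payload and headers to your own handler; there is no automatic reply here
});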
Alternatively, you can use a custom message converter and "enhance" the converted object with the header after invoking the standard converter.
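A sketch of that converter idea (PayloadWithHeaders is a hypothetical wrapper class, not a framework type):

public class EnhancingConverter extends Jackson2JsonMessageConverter {

    @Override
    public Object fromMessage(Message message) {
        Object payload = super.fromMessage(message);
        // attach the x-death header to the converted payload
        return new PayloadWithHeaders(payload,
                message.getMessageProperties().getHeaders().get("x-death"));
    }
}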
Consider instead using the newer-style POJO messaging with @RabbitListener - it gives you access to the headers and has request/reply capability.
Here's an example:
@SpringBootApplication
public class So37581560Application {

    public static void main(String[] args) {
        SpringApplication.run(So37581560Application.class, args);
    }

    @Bean
    public FooListener fooListener() {
        return new FooListener();
    }

    public static class FooListener {

        @RabbitListener(queues = "foo")
        public void pojoListener(String body,
                @Header(required = false, name = "x-death") List<String> xDeath) {
            System.out.println(body + ":" + (xDeath == null ? "" : xDeath));
        }
    }
}
Result:
Foo:[{reason=expired, count=1, exchange=, time=Thu Jun 02 08:44:19 EDT 2016, routing-keys=[bar], queue=bar}]
Gary's answer is the right one. Just a little detail: the type of xDeath is better declared as List<Map<String, ?>> instead of List<String>. Then you can access any field, for example xDeath.get(0).get("count").
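In Java terms, the listener signature from the answer would then become something like this sketch:

@RabbitListener(queues = "foo")
public void pojoListener(String body,
        @Header(required = false, name = "x-death") List<Map<String, ?>> xDeath) {
    // print the death count from the first x-death entry, if present
    System.out.println(body + ":" + (xDeath == null ? "" : xDeath.get(0).get("count")));
}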