I am using Spring AMQP 1.1 as my Java client.
I have a queue which has around 2000 messages. I want to have a service which checks the queue size and, if it is empty, sends out a message saying "All items processed".
I don't know how to get the current queue size. Please help.
I googled and found a class "RabbitBrokerAdmin" that was present in the earlier 1.0 version.
I think it is no longer present in 1.1.
Any pointers on getting the current queue size?
So I know this is a little late and a solution has already been found, but here is another way to look at message counts in your queues.
This solution assumes that you are using the Spring RabbitMQ framework and have defined your queues in your application config with the following tags:
<rabbit:queue>
<rabbit:admin>
The Java class:

public class QueueStatsProcessor {

    @Autowired
    private RabbitAdmin admin;

    @Autowired
    private List<Queue> rabbitQueues;

    public void getCounts() {
        Properties props;
        Integer messageCount;
        for (Queue queue : rabbitQueues) {
            // returns null if the queue does not exist
            props = admin.getQueueProperties(queue.getName());
            messageCount = Integer.parseInt(props.get("QUEUE_MESSAGE_COUNT").toString());
            System.out.println(queue.getName() + " has " + messageCount + " messages");
        }
    }
}
You can also use this solution to read the current consumers attached to the queue:
http://docs.spring.io/spring-amqp/docs/1.2.1.RELEASE/api/org/springframework/amqp/rabbit/core/RabbitAdmin.html#getQueueProperties(java.lang.String)
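For example (a sketch: I believe the consumer count is exposed in the same Properties object under the QUEUE_CONSUMER_COUNT key, but verify against your version):

// hypothetical extension of the loop above; the key name is an assumption
Integer consumerCount = Integer.parseInt(props.get("QUEUE_CONSUMER_COUNT").toString());
System.out.println(queue.getName() + " has " + consumerCount + " consumers");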
You can use the RabbitAdmin instance to get the details from the queue, as follows:
@Resource RabbitAdmin admin;
...
protected int getQueueCount(final String name) {
    // a passive declare returns the message count and throws an exception if the queue does not exist
    DeclareOk declareOk = admin.getRabbitTemplate().execute(new ChannelCallback<DeclareOk>() {
        public DeclareOk doInRabbit(Channel channel) throws Exception {
            return channel.queueDeclarePassive(name);
        }
    });
    return declareOk.getMessageCount();
}
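Tying this back to the original question, a minimal sketch of the "All items processed" notification built on top of that method (the polling loop and interval are my assumptions):

// hypothetical usage: poll until the queue drains, then report completion
public void waitUntilEmpty(final String name) throws InterruptedException {
    while (getQueueCount(name) > 0) {
        Thread.sleep(1000); // re-check once per second
    }
    System.out.println("All items processed");
}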
I want to send something to a Kafka topic in a producer-only (not read-process-write) transaction, using an output channel.
I read the documentation and another topic on Stack Overflow (Spring cloud stream kafka transactions in producer side).
The problem is that I need to set a unique transactionIdPrefix per node.
Any suggestion how to do it?
Here is one way...
@Component
class TxIdCustomizer implements EnvironmentAware {

    @Override
    public void setEnvironment(Environment environment) {
        Properties properties = new Properties();
        properties.setProperty("spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix",
                UUID.randomUUID().toString());
        ((StandardEnvironment) environment).getPropertySources()
                .addLast(new PropertiesPropertySource("txId", properties));
    }

}
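The timing is the point here: setEnvironment() runs while this bean is being created, which should be early enough for the binder to see the property when it binds its transaction settings, and UUID.randomUUID() gives each node (and each restart) its own prefix.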
I have two microservices: one collects XML files from an internal FTP server, transforms them into DTO objects, and publishes them as bytes to RabbitMQ; the other deserializes the incoming bytes from RabbitMQ into DTO objects, maps them to JPA entities, and persists them to the database.
I'd like to configure the RabbitMQ broker between these two microservices like below:
1) for the microservice that collects XML files, I edited application.properties as below:
spring.cloud.stream.bindings.output.destination=TOPIC
spring.cloud.stream.bindings.output.group=proactive-policy
2) for the microservice that persists the incoming DTO objects, I configured application.properties as follows:
spring.cloud.stream.bindings.input.destination=TOPIC
spring.cloud.stream.bindings.input.group=proactive-policy
For receiving the incoming bytes from RabbitMQ, I'm using the second microservice as a sink:
@EnableJpaAuditing
@EnableBinding(Sink.class)
@SpringBootApplication(scanBasePackages = { "org.proactive.policy.data.cache" })
@RefreshScope
public class ProactivePolicyDataCacheApplication {

    private static Logger logger = LoggerFactory.getLogger(ProactivePolicyDataCacheApplication.class);

    @Autowired
    PolicyService policyService;

    public static void main(String[] args) {
        SpringApplication.run(ProactivePolicyDataCacheApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void input(Message<byte[]> message) throws Exception {
        if (Objects.isNull(message) || Objects.isNull(message.getPayload())) {
            logger.error("the message is null ");
            throw new IllegalArgumentException("`message` and `message.payload` cannot be null");
        }
        byte[] data = message.getPayload();
        if (data.length == 0) {
            logger.warn("Received empty message");
            return;
        }
        logger.info("Got data from policy-collector = " + new String(data, "UTF-8"));
        PolicyListDto policyListDto = (PolicyListDto) SerializationUtils.deserialize(data);
        logger.info("Policies.xml from policy-collector = " + policyListDto.getPolicy().toString());
        policyService.save(policyListDto);
    }
}
But when I open the RabbitMQ console and look at the exchanges, nothing arrives in the queue TOPIC.proactive-policy. Instead, the incoming messages are received in another queue that I haven't configured, named FTPSTREAM.proactive-policy-collector.
Is there any suggestion for resolving this issue?
Couple of points:
1. There is no such thing as 'group' for the output binding. Consumer group is a consumer property. Here is a fragment of the javadoc:
/**
 * Unique name that the binding belongs to (applies to consumers only). Multiple
 * consumers within the same group share the subscription. A null or empty String
 * value indicates an anonymous group that is not shared.
 * @see org.springframework.cloud.stream.binder.Binder#bindConsumer(java.lang.String,
 * java.lang.String, java.lang.Object,
 * org.springframework.cloud.stream.binder.ConsumerProperties)
 */
private String group;
2. The name 'FTPSTREAM.proactive-policy-collector' is definitely not something generated by Spring Cloud Stream, so look into your configuration and see what you may have missed.
It tells me that you have some consumer whose 'destination' is named FTPSTREAM and whose 'group' is proactive-policy-collector. It also tells me that your producer sends messages to the FTPSTREAM exchange.
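As an aside (a sketch, not taken from the configuration above): if the intent of putting group on the output binding was to have the producer provision the consumer's queue up front, Spring Cloud Stream has a producer-side property for that, requiredGroups:

spring.cloud.stream.bindings.output.destination=TOPIC
spring.cloud.stream.bindings.output.producer.requiredGroups=proactive-policy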
I am trying to understand why I would want to use Spring Cloud Stream with RabbitMQ. I've had a look at RabbitMQ Spring tutorial 4 (https://www.rabbitmq.com/tutorials/tutorial-four-spring-amqp.html), which is basically what I want to do. It creates a direct exchange with 2 queues attached, and depending on the routing key a message is routed either to Q1 or to Q2.
The whole process is pretty straightforward if you look at the tutorial: you create all the parts, bind them together, and you're ready to go.
I was wondering what benefit I would gain in using Spring Cloud Stream and whether that is even the use case for it. It was easy to create a simple exchange, and even defining destination and group was straightforward with Stream. So I thought, why not go further and try to handle the tutorial case with Stream?
I have seen that Stream has a BinderAwareChannelResolver which seems to do the same thing. But I am struggling to put it all together to achieve the same as in the RabbitMQ Spring tutorial. I am not sure if it is a dependency issue, but I seem to misunderstand something fundamental here; I thought something like:
spring.cloud.stream.bindings.output.destination=myDestination
spring.cloud.stream.bindings.output.group=consumerGroup
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression='key'
should do the trick.
Is there anyone with a minimal example for a source and sink which basically creates a direct exchange, binds 2 queues to it, and depending on the routing key routes to either one of those 2 queues, like in https://www.rabbitmq.com/tutorials/tutorial-four-spring-amqp.html?
EDIT:
Below is a minimal set of code which demonstrates how to do what I asked. I did not attach the build.gradle as it is straightforward (but if anyone is interested, let me know).
application.properties: set up the producer
spring.cloud.stream.bindings.output.destination=tut.direct
spring.cloud.stream.rabbit.bindings.output.producer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression=headers.type
Sources.class: set up the producer's channel
public interface Sources {
    String OUTPUT = "output";

    @Output(Sources.OUTPUT)
    MessageChannel output();
}
StatusController.class: respond to REST calls and send messages with specific routing keys
/**
 * Status endpoint for the health-check service.
 */
@RestController
@EnableBinding(Sources.class)
public class StatusController {
    private static final Logger log = LoggerFactory.getLogger(StatusController.class);

    private int index;
    private int count;
    private final String[] keys = {"orange", "black", "green"};
    private Sources sources;
    private StatusService status;

    @Autowired
    public StatusController(Sources sources, StatusService status) {
        this.sources = sources;
        this.status = status;
    }

    /**
     * Service available, service returns "OK".
     * @return The status of the service.
     */
    @RequestMapping("/status")
    public String status() {
        String status = this.status.getStatus();
        StringBuilder builder = new StringBuilder("Hello to ");
        if (++this.index == 3) {
            this.index = 0;
        }
        String key = keys[this.index];
        builder.append(key).append(' ');
        builder.append(Integer.toString(++this.count));
        String payload = builder.toString();
        log.info(payload);
        // add kv pair - routing-key-expression (which reads 'type') will then
        // evaluate and use the value as the routing key
        Message<String> msg = new GenericMessage<>(payload, Collections.singletonMap("type", key));
        sources.output().send(msg);
        // return rest call
        return status;
    }
}
consumer side of things, properties:
spring.cloud.stream.bindings.input.destination=tut.direct
spring.cloud.stream.rabbit.bindings.input.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.input.consumer.bindingRoutingKey=orange
spring.cloud.stream.bindings.inputer.destination=tut.direct
spring.cloud.stream.rabbit.bindings.inputer.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.inputer.consumer.bindingRoutingKey=black
Sinks.class:
public interface Sinks {
    String INPUT = "input";

    @Input(Sinks.INPUT)
    SubscribableChannel input();

    String INPUTER = "inputer";

    @Input(Sinks.INPUTER)
    SubscribableChannel inputer();
}
ReceiveStatus.class: Receive the status:
@EnableBinding(Sinks.class)
public class ReceiveStatus {
    private static final Logger log = LoggerFactory.getLogger(ReceiveStatus.class);

    @StreamListener(Sinks.INPUT)
    public void receiveStatusOrange(String msg) {
        log.info("I received a message. It was orange number: {}", msg);
    }

    @StreamListener(Sinks.INPUTER)
    public void receiveStatusBlack(String msg) {
        log.info("I received a message. It was black number: {}", msg);
    }
}
Spring Cloud Stream lets you develop event-driven microservice applications by enabling the applications to connect (via @EnableBinding) to external messaging systems using the Spring Cloud Stream Binder implementations (Kafka, RabbitMQ, JMS binders, etc.). Apparently, Spring Cloud Stream uses Spring AMQP for the RabbitMQ binder implementation.
The BinderAwareChannelResolver provides dynamic destination binding support for producers, whereas I think your case is about configuring the exchange and binding consumers to that exchange.
For instance, you need to have 2 consumers with the appropriate bindingRoutingKey set based on your criteria, and a single producer with the properties (routing-key-expression, destination) you mentioned above (except the group). I noticed that you have configured group for the outbound channel. The group property is applicable only to consumers (hence inbound).
You might also want to check this one: https://github.com/spring-cloud/spring-cloud-stream-binder-rabbit/issues/57, as I see some discussion around using routing-key-expression. Specifically, check the discussion there on using the expression value.
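To illustrate the quoting pitfall discussed there (a sketch; routing-key-expression is evaluated as SpEL): a fixed key must be quoted so it is treated as a string literal, while a header-based key is an expression over the message:

# constant routing key - the quotes make 'key' a SpEL string literal
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression='key'
# routing key taken from the 'type' message header
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression=headers.type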
Maybe someone has an idea for my following problem:
I am currently on a project where I want to use AWS SQS with the Spring Cloud integration. For the receiver part I want to provide an API where a user can register a "message handler" on a queue; the handler is an interface and will contain the user's business logic, e.g.
MyAwsSqsReceiver receiver = new MyAwsSqsReceiver();
receiver.register("a-queue-name", new MessageHandler() {
    @Override
    public void handle(String message) {
        //... business logic for the received message
    }
});
I found examples, e.g.
https://codemason.me/2016/03/12/amazon-aws-sqs-with-spring-cloud/
and read the docs:
http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_sqs_support
But the only thing I found there to "connect" a piece of functionality for processing an incoming message is an annotation on a method, e.g. @SqsListener or @MessageMapping.
These annotations are fixed to a certain queue name, though. So now I am at a loss as to how to dynamically "connect" my provided "MessageHandler" (from my API) to the incoming messages for the specified queue name.
In the config of the example there is a SimpleMessageListenerContainer, which gets a QueueMessageHandler set, but this QueueMessageHandler does not seem to be the right place to set my handler or to override its methods and provide my own subclass of QueueMessageHandler.
I already did something like this with the Spring AMQP integration and RabbitMQ, and thought that it would be similar here with AWS SQS.
Does anyone have an idea how to accomplish this?
thx + bye,
Ximon
EDIT:
I found that Spring JMS could actually do that, e.g. www.javacodegeeks.com/2016/02/aws-sqs-spring-jms-integration.html. Does anybody know what consequences using the JMS protocol has here, good or bad?
I am facing the same issue.
I am going an unusual way: I set up an AWS client bean, and instead of using the @SqsListener annotation to consume from a specific queue, I use the @Scheduled annotation so I can programmatically poll (every 10 seconds in my case) whichever queue I want to consume from.
I made an example that iterates over the queues defined in properties and then consumes from each one.
Client Bean:
@Bean
@Primary
public AmazonSQSAsync awsSqsClient() {
    return AmazonSQSAsyncClientBuilder
            .standard()
            .withRegion(Regions.EU_WEST_1.getName())
            .build();
}
Consumer:
// injected in the constructor
private final AmazonSQSAsync awsSqsClient;

@Scheduled(fixedDelay = 10000)
public void poll() {
    properties.getSqsQueues()
            .forEach(queue -> {
                val receiveMessageRequest = new ReceiveMessageRequest(queue)
                        .withWaitTimeSeconds(10)
                        .withMaxNumberOfMessages(10);
                // reading the messages
                val result = awsSqsClient.receiveMessage(receiveMessageRequest);
                val sqsMessages = result.getMessages();
                log.info("Received Message on queue {}: message = {}", queue, sqsMessages.toString());
                // deleting the messages
                sqsMessages.forEach(message -> {
                    val deleteMessageRequest = new DeleteMessageRequest(queue, message.getReceiptHandle());
                    awsSqsClient.deleteMessage(deleteMessageRequest);
                });
            });
}
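Note that withWaitTimeSeconds(10) enables SQS long polling: the receive call waits up to 10 seconds for messages to arrive instead of returning immediately, which reduces the number of empty responses (and billed requests) between scheduled runs.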
Just to clarify: in my case I need multiple queues, one for each tenant, with the queue URL for each one passed in a property file. Of course, in your case, you could get the queue names from another source, maybe a ThreadLocal which holds the queues you have created at runtime.
If you wish, you can also try the JMS approach, where you create message consumers and add a listener to each one you wish (see the AWS JMS documentation).
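Along those lines, here is a minimal sketch of the JMS approach (assuming the amazon-sqs-java-messaging-lib dependency; the register method and MessageHandler interface are the hypothetical API from the question, everything else is illustrative):

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public void register(String queueName, MessageHandler handler) throws JMSException {
    // build a JMS connection factory on top of the SQS client
    SQSConnectionFactory connectionFactory = new SQSConnectionFactory(
            new ProviderConfiguration(), AmazonSQSClientBuilder.defaultClient());
    Connection connection = connectionFactory.createConnection();
    Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
    // attach the caller's handler to the queue at runtime
    MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
    consumer.setMessageListener(message -> {
        try {
            // delegate the text payload to the user's business logic
            handler.handle(((TextMessage) message).getText());
            message.acknowledge();
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    });
    connection.start();
}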
When we do Spring and SQS, we use spring-cloud-starter-aws-messaging.
Then just create a listener class:
@Component
public class MyListener {

    @SqsListener(value = "myqueue")
    public void listen(MyMessageType message) {
        //process the message
    }
}
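If the queue name should not be hard-coded, I believe the annotation value can also be a property placeholder (worth verifying against your spring-cloud-aws version):

@SqsListener("${my.queue.name}") // hypothetical property, resolved from the Environment
public void listen(MyMessageType message) {
    //process the message
}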
If I want to use Spring Integration, and I want to be able to roll back a message that I have sent before, which kind of TransactionManager should I use? I do not want to use JMS/ActiveMQ or such things, only to send an event to a queue:
class DatingServiceImpl {

    @Autowired
    private RendezvousChannel rendezvousChannel;

    @Autowired
    private GirlsRepository girlsRepository;

    @Transactional
    public void date(final String name) {
        // channels accept a Message, not a bare String
        rendezvousChannel.send(new GenericMessage<>(String.format("Hello %s", name)), 100);
        if (girlsRepository.forName(name).hotScore < 8) {
            throw new IllegalStateException("No I put it over");
        }
    }
}
You should use a JmsTransactionManager, provided by org.springframework.jms.connection.JmsTransactionManager.
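A minimal bean definition sketch (assuming a javax.jms.ConnectionFactory from whatever JMS provider you use):

@Bean
public JmsTransactionManager transactionManager(ConnectionFactory connectionFactory) {
    // wraps the JMS session so sends commit or roll back with the surrounding transaction
    return new JmsTransactionManager(connectionFactory);
}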
...I do not want to use JMS/ActiveMQ or such things...
There is no such transaction manager - the framework itself is not transactional; to get transactions, the channel has to be backed by some transactional resource such as JMS or JDBC.
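For illustration, a sketch of the JDBC route (assumptions: spring-integration-jdbc on the classpath, a Postgres query provider, and a DataSourceTransactionManager over the same DataSource; the "dating" group id is illustrative): a channel backed by a JdbcChannelMessageStore stores its messages in the same database as the business data, so a failed date() would roll the send back too.

// message store kept in the same database as the business data
@Bean
public JdbcChannelMessageStore messageStore(DataSource dataSource) {
    JdbcChannelMessageStore store = new JdbcChannelMessageStore(dataSource);
    store.setChannelMessageStoreQueryProvider(new PostgresChannelMessageStoreQueryProvider());
    return store;
}

// channel whose backing queue lives in the database
@Bean
public QueueChannel datingChannel(JdbcChannelMessageStore messageStore) {
    return new QueueChannel(new MessageGroupQueue(messageStore, "dating"));
}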