sending to queue with and without selector creates big queue - jms

I have consumers in my system that all consume from the same destination; some of them define a specific selector and some do not. As a result, I see that a lot of messages are stuck waiting for the selector consumers (the messages do not match any selector). Something like this:
consumer1: myMessageType = 'Funny'
consumer2: myMessageType = 'Sad'
consumer3: no selector defined
Message 1 : myMessageType = 'Funny'
Message 2 : myMessageType = 'Funny'
Message 3 : myMessageType = 'Sad'
Message 4 : myMessageType = 'Sad'
Message 5 : myMessageType = 'Weird'
Message 6 : myMessageType = 'Weird'
When I look at the queue (hawtio console), I see that consumers 1 and 2 have a lot of messages backed up which they cannot consume, because no selector matches those messages.
Why is that? Am I abusing the AMQ system?

Queues can only provide messages to consumers within the first maxPageSize messages. This is done for performance reasons: it avoids scanning the entire data store for matching messages. If consumers are starved of messages, it means you have a gap in your consumer selector coverage.
You either need to:
1. Add a consumer with a selector that catches all the 'rest' of the messages (a sketch follows below).
2. Move to server-side filtering of messages using filtered composite destinations.
3. Add a content-based router (e.g. Camel, Mule, etc.) to sort messages into individual queues for consumers so they do not need selectors.
There is a pretty good case to be made that options #2 and #3 are cleaner architecture than trying to solve it with #1, since they put all the knowledge about the selectors in one place instead of scattering it across consumer configurations.
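For option #1, a minimal plain-JMS sketch of the catch-all consumer might look like the following (the broker URL and queue name are illustrative assumptions; JMS selectors support NOT IN, so the catch-all simply excludes the values the other consumers already handle):

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class CatchAllConsumer {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker URL
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("MY.QUEUE"); // assumed queue name

        // Catches everything consumer1 ('Funny') and consumer2 ('Sad') skip,
        // so no message is left stranded inside the page.
        MessageConsumer rest = session.createConsumer(
                queue, "myMessageType NOT IN ('Funny', 'Sad')");
        while (true) {
            Message message = rest.receive();
            System.out.println("Got: " + message);
        }
    }
}

The important part is keeping the set of selectors collectively exhaustive: any property value not covered by one of them recreates the starvation described above.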

Related

ActiveMQ messageId not working to stop duplication

I am using ActiveMQ for messaging, and one requirement is that duplicate messages should be handled by AMQ automatically.
For that I generate a unique message key and set it via the message post-processor.
The following is the code:
jmsTemplate.convertAndSend(dataQueue, event, messagePostProccessor -> {
    LocalDateTime dt = LocalDateTime.now();
    long ms = dt.get(ChronoField.MILLI_OF_DAY) / 1000;
    String messageUniqueId = event.getResource() + event.getEntityId() + ms;
    System.out.println("messageUniqueId : " + messageUniqueId);
    messagePostProccessor.setJMSMessageID(messageUniqueId);
    messagePostProccessor.setJMSCorrelationID(messageUniqueId);
    return messagePostProccessor;
});
As can be seen, the code generates a unique id and sets it on the message post-processor.
Can someone help me with this? Is there any other configuration that I need to do?
A consumer can receive duplicate messages for two main reasons: a producer sent the same message more than once, or a consumer received the same message more than once.
Apache ActiveMQ Artemis includes powerful automatic duplicate message detection, filtering out messages sent by a producer more than once.
To prevent a consumer from receiving the same message more than once, an idempotent consumer must be implemented, e.g. Apache Camel provides an Idempotent Consumer component that works with any JMS provider; see: http://camel.apache.org/idempotent-consumer.html
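As a concrete sketch of the producer-side detection (assuming the broker is Artemis; the Event type and the ID scheme below are illustrative stand-ins), the duplicate ID goes into the _AMQ_DUPL_ID string property rather than JMSMessageID:

import javax.jms.Message;
import org.springframework.jms.core.JmsTemplate;

public class DuplicateSafeSender {
    private final JmsTemplate jmsTemplate;

    public DuplicateSafeSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void send(String dataQueue, Event event) { // Event stands in for your payload type
        jmsTemplate.convertAndSend(dataQueue, event, (Message message) -> {
            // Artemis inspects the _AMQ_DUPL_ID property; a later message
            // carrying an already-seen value is dropped by the broker.
            message.setStringProperty("_AMQ_DUPL_ID",
                    event.getResource() + "-" + event.getEntityId());
            return message;
        });
    }
}

Note that JMSMessageID is assigned by the JMS provider on send, which is why setting it in the post-processor (as in the question) has no deduplication effect.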

Spring Cloud Stream Listener not pausing / waiting for the messages in Integration Testing Code

I have an application which connects to RabbitMQ through Spring Cloud Stream, and it works perfectly.
For integration test cases I am trying to use this sample - https://github.com/piomin/sample-message-driven-microservices/blob/master/account-service/src/test/java/pl/piomin/services/account/OrderReceiverTest.java
However, in my case the application sends back 3 messages over some time interval. The lines below fetch the messages, but the loop spins forever when there is a delay in getting them.
int i = 1;
while (i > 0) { // i never changes, so this loop never terminates on its own
    Message<String> received = (Message<String>) collector.forChannel(channels.statusMessage()).poll();
    if (received != null) {
        LOGGER.info("Order response received: {}", received.getPayload());
    }
}
So instead of my custom polling, is there any way I can wait and poll for my messages, and stop once I have received them?
I also want to pick up messages on different channels based on the response routing key. Is that possible?
Example: if the routing key is 'InProcess', the message should go to an inProcess method.
1) Your question is not at all clear; expand on it and explain exactly what you mean.
2) Routing keys are used within Rabbit to route to different queues; they are not used within the framework to route to channels or methods.
You can, however, use a condition on the @StreamListener (matching on headers['amqp_receivedRoutingKey']), as sketched below, but it's generally better to route messages to different queues instead.
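A minimal sketch of the condition approach, assuming the classic annotation-based model with a standard Sink binding (the routing-key values and method names are illustrative):

import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.messaging.Message;

public class StatusRouter {

    // Invoked only when the AMQP routing key on the inbound message is 'InProcess'
    @StreamListener(target = Sink.INPUT,
            condition = "headers['amqp_receivedRoutingKey'] == 'InProcess'")
    public void inProcess(Message<String> message) {
        // handle in-process updates
    }

    // 'Completed' is an assumed second key, shown only to illustrate the dispatch
    @StreamListener(target = Sink.INPUT,
            condition = "headers['amqp_receivedRoutingKey'] == 'Completed'")
    public void completed(Message<String> message) {
        // handle completion updates
    }
}

The condition is a SpEL expression evaluated against the inbound message, which is why the header name must match what the Rabbit binder populates; the class itself would be registered via @EnableBinding(Sink.class) elsewhere in the test configuration.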

JMS Delayed Delivery based on conditional variable(s)

I'm looking for a possibility in any of the more popular message queues (AMQP brokers, RabbitMQ, ActiveMQ, etc.) to conditionally delay the delivery of a message.
For example:
System A sends a message(foo, condition = bar.x > 1);
System B sends a message(bar, x = 2).
Because System B's message satisfies the condition set on System A's message, the message is unlocked and delivered.
Do such strategies exist?
Sort of, yes, with RabbitMQ.
You need two things:
code that checks the condition - your code, not RabbitMQ code.
the Delayed Message Exchange plugin https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/
RabbitMQ does not have the ability to evaluate logic statements or code. But you are already writing code, so you can easily do that check yourself.
If the condition is true, send your message to the delayed message exchange; if it is not, send it to a normal exchange (see the sketch below).
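A minimal sketch with the RabbitMQ Java client (the exchange, queue, and routing-key names are illustrative, and checkCondition() stands in for your own logic):

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class ConditionalDelaySender {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // The plugin registers the 'x-delayed-message' exchange type;
            // 'x-delayed-type' tells it how to route once the delay expires.
            Map<String, Object> exchangeArgs = new HashMap<>();
            exchangeArgs.put("x-delayed-type", "direct");
            channel.exchangeDeclare("delayed.exchange", "x-delayed-message",
                    true, false, exchangeArgs);

            byte[] body = "payload".getBytes();
            if (checkCondition()) { // your logic, not RabbitMQ's
                Map<String, Object> headers = new HashMap<>();
                headers.put("x-delay", 5000); // delay in milliseconds
                AMQP.BasicProperties props =
                        new AMQP.BasicProperties.Builder().headers(headers).build();
                channel.basicPublish("delayed.exchange", "foo", props, body);
            } else {
                // default exchange routes by queue name; 'normal.queue' is assumed to exist
                channel.basicPublish("", "normal.queue", null, body);
            }
        }
    }

    private static boolean checkCondition() { return true; } // placeholder
}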

Horizontally and vertically scaling RabbitMQ consumers?

How do I scale RabbitMQ consumers in Ruby using the AMQP gem?
I read the documentation and came up with something that (seemingly) works
in a trivial example. The examples scale horizontally. New processes connect
to the broker and receive a subset of messages. From there each process can
spin up multiple consumer threads. It uses the low level consumer interface
described in the documentation.
Here's the code:
require 'amqp'

workers = (ARGV[1] || 4).to_i # to_i added: ARGV values are strings
puts "Running #{workers} workers"

AMQP.start do |amqp|
  channel = AMQP::Channel.new
  channel.on_error do |conn, ex|
    raise ex.reply_text
  end

  exchange = channel.fanout 'scaling.test', durable: true, prefetch: 1
  queue = channel.queue("worker_queue", auto_delete: true).bind(exchange)

  workers.times do |i|
    consumer = AMQP::Consumer.new channel, queue, "consumer-#{i}", exclusive = false, manual_ack = false
    consumer.consume.on_delivery do |meta, payload|
      meta.ack
      puts "Consumer #{consumer.consumer_tag} in #{Process.pid} got #{payload}"
    end
  end

  trap('SIGTERM') do
    amqp.start { EM.stop }
  end
end
There are a few things I'm unsure of:
Does the exchange type matter? The documentation states a direct exchange load balances
messages between queues. I tested this example using direct and fanout exchanges and it
functioned the same way. So if I'd like to support vertical and horizontal scaling does the
exchange type matter?
What should the :prefetch option be? I thought one would be best.
How does the load balancing work specifically? The documentation states that load
balancing happens between consumers and not between queues. However when I run two
processes I can see process one print out "1, 2, 3, 4", then process two print out
"5, 6, 7, 8". I thought they would be out of order, or is the channel itself the consumer?
That would be consistent with the output, but not with the documentation.
Does this look correct from the EventMachine perspective? Do I need to do some sort of
thread pooling to get multiple consumers in the same process working correctly?
Most of this is actually covered in the documentation for brokers like RabbitMQ, but in answer to your questions:
For a worker queue you most likely want a direct exchange, that is, one which will route a message (job) to exactly one worker, not to multiple workers at once. But this might change depending on your workload. Fanout by definition routes the same message to every bound queue, and hence to multiple consumers.
Prefetch should be 1 for this type of setup. Generally speaking this asks the broker to keep at most 1 unacknowledged message in the consumer's network buffer. An alternate setup would be 1 consumer and n workers, in which case you would set the prefetch to n. Additionally, it is worth noting that in this sort of setup you shouldn't ack until after you've done the work.
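For reference, the same prefetch-1, ack-after-work pattern in the RabbitMQ Java client looks roughly like this (connection settings and the doWork call are stand-ins):

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;
import java.io.IOException;

public class PrefetchWorker {
    public static void main(String[] args) throws Exception {
        Connection conn = new ConnectionFactory().newConnection();
        Channel channel = conn.createChannel();
        channel.basicQos(1); // prefetch = 1: at most one unacked message in flight

        channel.basicConsume("worker_queue", false /* manual ack */,
                new DefaultConsumer(channel) {
                    @Override
                    public void handleDelivery(String tag, Envelope env,
                            AMQP.BasicProperties props, byte[] body) throws IOException {
                        doWork(body); // finish the work first...
                        getChannel().basicAck(env.getDeliveryTag(), false); // ...then ack
                    }
                });
    }

    private static void doWork(byte[] body) { /* stand-in for real work */ }
}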
Load balancing is basically a round-robin between consumers. Which is why you're seeing everything sequentially. If each bit of work takes a different amount of time you'll see the distribution change.
I hope that helps. I haven't done anything with the Ruby AMQP library for a while — we rewrote all our workers in Go.

Publisher finishes before subscriber and messages are lost - why?

Fairly new to ZeroMQ and trying to get a basic pub/sub to work. When I run the following (sub starting before pub), the publisher finishes but the subscriber hangs, having not received all the messages. Why?
I think the socket is being closed before all the messages have been sent. Is there a way of ensuring all messages are received?
Publisher:
import zmq
import random
import time
import tnetstring
context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5556")

y = 0
for x in xrange(5000):
    st = random.randrange(1, 10)
    data = []
    data.append(random.randrange(1, 100000))
    data.append(int(time.time()))
    data.append(random.uniform(1.0, 10.0))
    s = tnetstring.dumps(data)
    print 'Sending ...%d %s' % (st, s)
    socket.send("%d %s" % (st, s))
    print "Messages sent: %d" % x
    y += 1

print '*** SERVER FINISHED. # MESSAGES SENT = ' + str(y)
Subscriber :-
import sys
import zmq
import tnetstring
# Socket to talk to server
context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5556")

filter = ""  # get all messages
socket.setsockopt(zmq.SUBSCRIBE, filter)

x = 0
while True:
    topic, data = socket.recv().split()
    print "Topic: %s, Data = %s. Total # Messages = %d" % (topic, data, x)
    x += 1
In ZeroMQ, clients and servers always try to reconnect; they won't go down if the other side disconnects (because in many cases you'd want them to resume talking if the other side comes up again). So in your test code, the client will just wait until the server starts sending messages again, unless you stop recv()ing messages at some point.
In your specific instance, you may want to investigate using socket.close() and context.term(); term() blocks until all pending messages have been sent (subject to the socket's LINGER option). You also have the classic slow-joiner problem: you can add a sleep after the bind, before you start publishing. That works in a test case, but you will want to understand which of these is a real solution and which is a band-aid.
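A sketch of the flush-on-close idea using JeroMQ, a Java binding for ZeroMQ (this assumes the org.zeromq:jeromq artifact; the endpoint, message contents, and one-second sleep are illustrative):

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class FlushingPublisher {
    public static void main(String[] args) throws Exception {
        try (ZContext context = new ZContext()) {
            ZMQ.Socket pub = context.createSocket(SocketType.PUB);
            pub.setLinger(-1);   // block on close until queued messages are flushed
            pub.bind("tcp://*:5556");
            Thread.sleep(1000);  // slow-joiner band-aid: give subscribers time to connect
            for (int i = 0; i < 5000; i++) {
                pub.send("1 message-" + i);
            }
        } // closing the context closes the socket, honoring the linger setting
    }
}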
You need to think of the PUB/SUB pattern like a radio. The sender and receiver are both asynchronous. The Publisher will continue to send even if no one is listening. The subscriber will only receive data if it is listening. If the network goes down in the middle, the data will be lost.
You need to understand this in order to design your messages. For example, if you design your messages to be "idempotent", it doesn't matter if you lose data. An example of this would be a status-type message: it doesn't matter if you missed any of the previous statuses, because the latest one is correct and message loss doesn't matter. The benefit of this approach is that you end up with a more robust and performant system. The downside is that you can't always design your messages this way.
Your example includes a type of message that tolerates no loss. Another such type would be transactional messages: for example, if you sent only the deltas of what changed in your system, you could not afford to lose any of them. Database replication is often managed this way, which is why DB replication is often so fragile.
To provide guarantees, you need to do a couple of things. First, add a persistent cache: every message sent is logged in it, and every message is assigned a unique id (preferably a sequence) so that clients can determine whether they are missing a message. Second, add another socket (ROUTER/REQ) so the client can request the missing messages individually. Alternatively, you could use the secondary socket just to request resending over the PUB/SUB; all clients would then receive the messages again (which also works for the multicast version) and ignore the ones they had already seen. NOTE: this follows the MAJORDOMO pattern found in the ZeroMQ guide.
An alternative approach is to create your own broker using ROUTER/DEALER sockets. When the ROUTER socket sees each DEALER connect, it stores its ID; when the ROUTER needs to send data, it iterates over all client IDs and publishes the message. Each message should carry a sequence number so that the client knows which missing messages to request. NOTE: this is a sort of reimplementation of Kafka from LinkedIn.
