Message Brokers - Multiple consumers with the same client ID - spring-boot

I've been thinking through a multiple-consumer problem in my system and unfortunately I'm stuck. I do see some possible solutions, listed below, but they are probably not efficient enough. Let me introduce the system:
Each user has a specific ID. A user can be logged in on different devices - several mobiles and browsers at the same time. When a user is offline, every message addressed to them should be delivered once they are back online. When a user is online, they should be kept informed about new messages as they arrive. Every online user is connected via WebSocket.
So I have been considering message brokers from this pool - RabbitMQ, Kafka, Apache Pulsar (the whole system will be in Java). These are my thoughts so far:
1. RabbitMQ - every device gets its own queue associated with the client ID. But I see some problems here. For example, a user logs in from 4 browsers and each one gets a new queue (I assume unused queues would be deleted after some time, but this setup could still be overloaded if somebody deliberately abuses it).
2. One queue with a marker, per user, of which messages have been consumed - I tried to implement this with Apache Pulsar, but every attempt ended with a new consumer being created (not continuing as the same consumer) - maybe I'm misusing the API?
3. Apache Kafka - consumer groups and partitions? It's similar to point 1.
I would be really grateful for every hint - if you see a better solution with another technology, just let me know and I will adjust.
(If it matters - Java and Spring Boot are at the core of this.)

I can answer the Apache Pulsar part - you need to set the subscriptionName on the consumer to be equal to your UserId; this ensures messages are consumed starting from the last acknowledged one for that user.
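To make that concrete, here is a minimal sketch with the Pulsar Java client; the broker URL, topic name and user ID are placeholders. The subscription name is the user ID, so acknowledgements are tracked per user and delivery resumes from the last acknowledged message when that user reconnects.

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class UserNotificationConsumer {
    public static void main(String[] args) throws Exception {
        String userId = "user-42"; // placeholder user ID

        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // placeholder broker URL
                .build();

        // One durable subscription per user: unacknowledged messages are retained
        // while the user is offline and delivered again when a consumer reconnects.
        Consumer<byte[]> consumer = client.newConsumer()
                .topic("user-notifications")               // placeholder topic name
                .subscriptionName(userId)
                .subscriptionType(SubscriptionType.Shared) // lets several devices attach
                .subscribe();

        Message<byte[]> msg = consumer.receive();
        // ... push the payload to the user's WebSocket session here ...
        consumer.acknowledge(msg);
    }
}

Note that with a Shared subscription each message goes to only one of the user's connected devices; if every device should receive every copy, you would need a subscription per device or a fan-out on your side instead.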

Related

JMS + ActiveMQ: exclusive access to data

Good day everyone.
1) I have a simple app which creates a JMSProducer and an ActiveMQ queue, and sends some messages to that queue.
2) I also have an application which is a subscriber to the ActiveMQ queue (it receives the messages from the application above).
This is the situation:
I create another server config for my subscriber-App and launch it twice at the same time on different ports.
(For example: subscriber-App1 starts at jetty-http-port-9998/jetty-ssl-port-9994; subscriber-App2 starts at jetty-http-port-9999/jetty-ssl-port-9995.)
I open the subscriber-App1 and subscriber-App2 consoles in IntelliJ IDEA and begin to send messages from the producer-App. I see that the subscribers take messages in rotation: the first message goes to subscriber1, the 2nd to subscriber2, the 3rd to subscriber1, and so on.
The question is: how can I configure the subscriber application to give it exclusive access to the data? The main condition is: if there is one subscriber on my queue, other applications must not receive messages from the queue. And if I launch two subscriber apps on different ports, all messages should be received by only one of them.
Thank you in advance!
I believe you should be able to use the exclusive consumer feature to achieve your goal.
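As a rough sketch (broker URL and queue name are placeholders), an ActiveMQ exclusive consumer can be requested with a destination option, so the broker dispatches all messages to a single subscriber instead of round-robining them:

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.command.ActiveMQQueue;

public class ExclusiveSubscriber {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder URL
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // The ?consumer.exclusive=true destination option asks the broker to pick
        // a single consumer; other subscribers on the same queue stay idle until
        // the chosen one disconnects.
        ActiveMQQueue queue = new ActiveMQQueue("MY.QUEUE?consumer.exclusive=true");
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(message -> {
            // process the message here
        });
    }
}

With both subscriber apps running like this, one of them receives everything; if it goes down, the broker fails over to the other one.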

How to retrieve all the messages present in the solace queue

I want to know how to retrieve messages that are already present on a Solace queue. I am able to send and receive messages I created from my machine, but I can't receive any messages that are already sitting in the queue. I want to retrieve those messages and store them in a text file.
I am sending my messages by integrating the Solace APIs via Gradle and writing the code in Java. Can anyone guide me on this?
There's an exact tutorial for this.
If you downloaded only the Solace Java JAR via the Maven links, you might have missed the full distribution, which contains all the dependent JARs distributed by Solace, the API reference docs, and a bunch of samples. The samples are in addition to what you can find at http://dev.solace.com/get-started/java-tutorials/. Get the full ZIP file, along with the Release Notes, from http://dev.solace.com/downloads/.
There are multiple possibilities why you cannot receive messages from a queue:
Queue name is misspelt.
Queue permissions are wrong.
Queue is shut down on the egress.
Message spool is not active on the router.
Client profile is set not to receive Guaranteed Messages.
Number of egress flows has exceeded the router / message-vpn limit.
The bind count on the queue has been exceeded.
The egress flow is not active.
Client is not connected to the router.
...
Examining the error/exception will tell you why you cannot receive messages.
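For reference, a minimal JCSMP sketch that binds a flow to an existing queue and prints whatever is already spooled on it looks roughly like this; the host, VPN, username and queue name are placeholders and error handling is kept to the bare minimum:

import com.solacesystems.jcsmp.*;

public class QueueReader {
    public static void main(String[] args) throws Exception {
        JCSMPProperties props = new JCSMPProperties();
        props.setProperty(JCSMPProperties.HOST, "tcp://localhost:55555"); // placeholder
        props.setProperty(JCSMPProperties.VPN_NAME, "default");           // placeholder
        props.setProperty(JCSMPProperties.USERNAME, "clientUsername");    // placeholder

        JCSMPSession session = JCSMPFactory.onlyInstance().createSession(props);
        session.connect();

        Queue queue = JCSMPFactory.onlyInstance().createQueue("MY_QUEUE"); // placeholder
        ConsumerFlowProperties flowProps = new ConsumerFlowProperties();
        flowProps.setEndpoint(queue);
        flowProps.setAckMode(JCSMPProperties.SUPPORTED_MESSAGE_ACK_CLIENT);
        EndpointProperties endpointProps = new EndpointProperties();

        FlowReceiver flow = session.createFlow(new XMLMessageListener() {
            @Override
            public void onReceive(BytesXMLMessage msg) {
                // Write the payload to your text file here instead of printing it.
                System.out.println("Received: " + msg.dump());
                msg.ackMessage();
            }
            @Override
            public void onException(JCSMPException e) {
                // This is where the "why can't I receive" reasons listed above show up.
                e.printStackTrace();
            }
        }, flowProps, endpointProps);
        flow.start();

        Thread.sleep(60_000); // keep the program alive while spooled messages arrive
    }
}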

Azure Queue delayed message

I'm seeing some strange behaviour in a production deployment with Azure queue messages:
Some of the messages in the queues appear with a big delay - minutes, sometimes up to 10 minutes.
Before you ask about setting a delayTimeout when we put a message on the queue - we do not set a delayTimeout for these messages, so a message should appear almost immediately after it is placed in the queue.
At those moments we do not have a big load, so my instances have no backlog and are able to process messages quickly, but the messages just don't appear.
Our service processes millions of messages per month, and we have identified that 10-50 messages are processed with a very big delay; because of that we miss the SLA we have with our customers.
Does anyone have any idea what the reason could be, and how to overcome it?
Has anyone faced similar issues?
Some general ideas for troubleshooting:
Are you certain that the message was queued up for processing - i.e. the queue.addmessage operation returned successfully and only then did the 10-minute wait begin? That would let you rule out any client-side retry policies etc. as the cause of the problem.
Is there any chance that the time calculation is subject to some kind of clock skew? E.g. if one of the worker roles pulling messages has its clock out of sync with the other worker roles, you could see this.
Is it possible that, in the situations where the message appears to be delayed, a worker role responsible for pulling the messages is actually failing or crashing? If the client calls GetMessage but does not respond with an appropriate acknowledgement within the time specified by the invisibilityTimeout setting, the message becomes visible again, as the Queue service assumes the client did not process it. You can tell whether this is a contributing factor by looking at the dequeue count on the messages that take longer (a small sketch of checking this from Java follows this answer). More information can be found here: http://msdn.microsoft.com/en-us/library/dd179474.aspx.
Is it possible that the number of workers you have pulling items from the queue is insufficient at certain times of day, and the delays are simply caused by the queue being populated faster than you can pull messages off it?
Have you enabled logging for queues and looked for the specific operations (check e2elatency and serverlatency)? See http://blogs.msdn.com/b/windowsazurestorage/archive/tags/analytics+2d00+logging+_2600_amp_3b00_+metrics/. You should also enable client-side logging and try to determine whether the client is having connectivity problems and the retry logic is kicking in.
And finally, if none of these help, please send me the server logs (and ideally the client-side logs as well), along with your account information (no passwords), to JAHOGG at Microsoft dot com.
Jason
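Following up on the dequeue-count point above, a small sketch of inspecting it from Java might look like this. It uses the current azure-storage-queue SDK, which is newer than the API referenced in the links above, so treat the exact calls as an assumption; the connection string and queue name are placeholders.

import com.azure.storage.queue.QueueClient;
import com.azure.storage.queue.QueueClientBuilder;
import com.azure.storage.queue.models.QueueMessageItem;

public class DequeueCountCheck {
    public static void main(String[] args) {
        QueueClient queueClient = new QueueClientBuilder()
                .connectionString(System.getenv("AZURE_STORAGE_CONNECTION_STRING")) // placeholder
                .queueName("myqueue")                                               // placeholder
                .buildClient();

        // A dequeue count above 1 means the message became visible again after an
        // earlier GetMessage, i.e. a consumer pulled it but never completed it in time.
        for (QueueMessageItem message : queueClient.receiveMessages(32)) {
            System.out.printf("id=%s dequeueCount=%d%n",
                    message.getMessageId(), message.getDequeueCount());
        }
    }
}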
Azure Service Bus has a property on the BrokeredMessage class called ScheduledEnqueueTimeUtc; it lets you set the time at which the message becomes available on the queue (effectively creating a delay).
Are you sure your code is not setting this property? That could be the cause of the delay.
You can find more info on this at this url: https://www.amido.com/azure-service-bus-how-to-delay-a-message-being-sent-to-the-queue/
If you are using WebJobs to process messages from the queue, the delay can be caused by the WebJobs configuration.
From an MSDN forum post by pranav rastogi:
Starting with 0.4.0-beta, the (WebJobs) SDK implements a random exponential back-off algorithm. As a result of this if there are no messages on the queue, the SDK will back off and start polling less frequently.
The following setting allows you to configure this behavior.
MaxPollingInterval: when a queue remains empty, this is the longest period of time to wait before checking for a message again. The default is 10 minutes.
static void Main()
{
    JobHostConfiguration config = new JobHostConfiguration();
    // Cap the exponential back-off so an empty queue is still polled at least once a minute.
    config.Queues.MaxPollingInterval = TimeSpan.FromMinutes(1);
    JobHost host = new JobHost(config);
    host.RunAndBlock();
}

JMS 2.0: Shared-Durable-Consumer on Topic vs Asynchronous-Consumer on Queue; Ref. Official GlassFish 4.0 docs/javaee-tutorial Java EE 7

Ref: Official GlassFish 4.0 docs/javaee-tutorial Java EE 7
Firstly, let us start with the destination-type of: topic.
As per GlassFish 4.0 tutorial, section “46.4 Writing High Performance and Scalable JMS Applications”:
This section describes how to use the JMS API to write applications
that can handle high volumes of messages robustly.
In the subsection “46.4.2 Using Shared Durable Subscriptions”:
The SharedDurableSubscriberExample.java client shows how to use shared
durable subscriptions. It shows how shared durable subscriptions
combine the advantages of durable subscriptions (the subscription
remains active when the client is not) with those of shared consumers
(the message load can be divided among multiple clients).
When we run this example as per “46.4.2.1 To Run the ShareDurableSubscriberExample and Producer Clients”, it gives us the same effect/functionality as the previous example on the destination-type of queue: if we follow “46.2.6.2 To Run the AsynchConsumer and Producer Clients”, points 5 onwards, and modify it slightly to use 2 consumer terminal windows and 1 producer terminal window.
Yes, section “45.2.2.2 Publish/Subscribe Messaging Style” does mention:
The JMS API relaxes this requirement to some extent by allowing
applications to create durable subscriptions, which receive messages
sent while the consumers are not active. Durable subscriptions provide
the flexibility and reliability of queues but still allow clients to
send messages to many recipients.
.. and anyway the examples in section “46.4 Writing High Performance and Scalable ..” are queue-style – one message per consumer:
Each message added to the topic subscription is received by only one
consumer, similarly to the way in which each message added to a queue
is received by only one consumer.
What is the precise technical answer to: why, in this example, is the use of a Shared-Durable-Consumer on a Topic presented, under “High Performance and Scalable JMS Applications”, as preferable to the use of an Asynchronous-Consumer on a Queue?
I was wondering about the same issue myself. I understand that John Ament gave you the right response; maybe it was just too short to fully explain it.
Basically, when you create a topic you are assuming that only the subscribed consumers will receive its messages. However, processing such a message may require heavy work; in such cases you can create a shared subscription and use as many threads as you want.
Why not use a queue? The answer is quite simple: if you use a queue, each message can be handled by only one consuming application.
To clarify, I will give you an example. Let's say a federal court publishes thousands of sentences every day and you have three distinct applications that depend on them.
Application A just copies the sentences to a database.
Application B parses each sentence and tries to find all relations between people across all previously saved sentences.
Application C parses each sentence and tries to find all relations between companies across all previously saved sentences.
You could use a topic for the sentences, to which applications A, B and C would be subscribed. However, it is easy to see that application A can process a message very quickly, while applications B and C may take some time. One available solution is to create a shared subscription for application B and another one for application C, so that multiple threads can act on each of them simultaneously...
...Of course there are other solutions; you could, for example, use an unshared topic (i.e. a regular one) and post all received messages onto an ArrayBlockingQueue to be handled by a pool of threads later; however, with that approach the developer is the one who has to worry about queue handling.
Hope this helps.
The idea is that you can have multiple readers on a subscription. This allows you to read more messages faster, assuming you have threads available.
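A minimal JMS 2.0 sketch of that idea, assuming the connection factory and topic are available under the JNDI names shown (both placeholders): every process that calls createSharedDurableConsumer with the same subscription name joins the same shared durable subscription, so the message load is split between them while unconsumed messages survive periods when all of them are offline.

import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class SharedDurableReader {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // JNDI names are placeholders - use whatever your server defines.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Topic topic = (Topic) ctx.lookup("jms/SentencesTopic");

        try (JMSContext context = cf.createContext()) {
            // Same subscription name in every instance -> one shared durable subscription.
            JMSConsumer consumer =
                    context.createSharedDurableConsumer(topic, "sentence-processing");
            while (true) {
                String body = consumer.receiveBody(String.class); // blocks until a message arrives
                // heavy per-message processing goes here, spread across all instances
                System.out.println("Received: " + body);
            }
        }
    }
}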
JMS queue:
queued messages are persisted
each message is guaranteed to be delivered once and only once, even if no consumer is running when the messages are sent.
JMS shared subscription:
a subscription can have zero to many consumers
messages sent before the subscription exists (durable or not) will never be received by it.

Multiple WebSphere Application Servers attached to a single WebSphere MQ failing

Issue:
Having multiple consumer applications (activation specifications) attached to a single queue on distributed VM servers is causing a null payload in an MQ message.
Note: see the notes at the bottom. There was no issue with MQ itself.
Details:
I have 3 WebSphere applications deployed across 2 VM servers. One application is a publisher and the other two are consumers attached to a single MQ queue manager and queue.
The 2 consumer applications pull the messages off and process them. The consumer application on the separate server receives a null payload. I have confirmed that it seems to be an issue with having multiple application server instances attached to MQ: if I deploy the publisher on server 2 alongside consumer 2, then consumer 1 fails.
Question:
Has anyone tried attaching multiple MDB applications, deployed on separate server instances, bound to one queue manager and one queue?
Specifications:
WebSphere 7, EJB 3.0 MDBs, transactions turned off, queue on a queue manager installed on another machine.
Goal:
Distributed computing, scaling up to handle a large number of messages.
I think this is a configuration issue, but I'm not 100% sure where to look. I had read that you could use MQ Link, but I don't see why I would need to use the service integration bus.
Supporting documentation:
MQ Link
UPDATE: I fixed the problem; it was related to a class loader issue combined with duplicate classes. See the solution notes I added below.
EDIT HISTORY:
- Clarified specifications, clarified the question and added the overall goal.
- Added reference notes for the solution.
Has anyone tried attaching multiple MDB applications deployed on separate server instances, bound to one local queue?
Multiple MDB applications deployed on separate servers, connecting to one queue manager (but different queues), is a normal scenario; we have it everywhere and all applications work fine.
But I suspect what you are doing is: multiple MDB applications deployed on separate servers, connecting to one queue manager and listening on the same queue.
In this case one message will be received by one consumer only.
You need to create a separate queue for each application and create a subscription for each of them to the topic your publisher publishes on.
Addition:
I suspect that, for load balancing, the problem you may be facing is that when your first application gets a message it doesn't issue a commit. So there is an uncommitted message on the queue, which may be stopping your other application from getting messages from the queue. When your first application finishes its processing it issues a commit, but then it is immediately ready to pick up the next message and issues another get.
In my architecture, we have implemented load balancing using multiple queue managers like below:
You create 3 queue managers, say GatewayQM, App1QM and App2QM.
Keep the three queue managers in the same cluster.
Create an alias queue (shared in the cluster) in GatewayQM and have your putting application put messages on the gateway queue.
Now create one local cluster queue in each of App1QM and App2QM. Read from these queues via your applications App1 and App2 respectively.
This implementation gives you better security and serves as an effective load balancer.
This specific problem was caused by a code issue in combination with class loading being set to "Parent First" in the WebSphere console. It would work on one node while the other nodes in the cluster failed; I think this was caused by the "Parent First" setting.
More importantly, in terms of my configuration: binding multiple activation specifications in a cluster to a single queue to provide distributed computing is a correct solution.
However, "points" do go to the "nitgeek" solution referenced above if you are looking for an extremely high-volume solution. It's important to understand that a single queue can have a very high depth and it takes a lot to fully utilize one. My current configuration is a good starting point for quick setup and distributed processing using multiple MDBs.
