Spring Boot AWS SQS application running on 2 servers - spring-boot

We are working on a Spring Boot application which will be deployed on two ECS containers to support a clustered environment. The application accepts a request and drops a message into SQS. Another flow in the same application picks the message from the queue and processes it. As the same application will be running on two different servers in the cluster, I am not sure which server will pick the message from the queue. How can I make sure that only one server picks up a given message from the queue? It could be either server.

Ordinary SQS queues do not even guarantee that a message only appears once on the queue - see the AWS standard SQS queue docs.
Using a reasonable value for the visibility timeout (the period during which a received message can't be seen by other consumers) relative to the time it takes to consume a message should solve it.
Alternatively you can use an SQS FIFO queue, but it's much slower and can, in my experience, get stuck on a corrupt message.
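As a rough illustration of the visibility-timeout approach, here is a minimal sketch using the AWS SDK for Java v2 (the queue URL, the 60-second timeout and the process() method are placeholders): while one container is working on a message, that message stays invisible to the other container, and because it is deleted only after successful processing, a message held by a crashed consumer becomes visible again for the other instance.

    import software.amazon.awssdk.services.sqs.SqsClient;
    import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest;
    import software.amazon.awssdk.services.sqs.model.Message;
    import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

    public class SqsWorker {

        // Placeholder queue URL.
        private static final String QUEUE_URL =
                "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue";

        public static void main(String[] args) {
            try (SqsClient sqs = SqsClient.create()) {
                while (true) {
                    ReceiveMessageRequest receive = ReceiveMessageRequest.builder()
                            .queueUrl(QUEUE_URL)
                            .maxNumberOfMessages(10)
                            .waitTimeSeconds(20)     // long polling
                            .visibilityTimeout(60)   // hide the message from the other instance while we work
                            .build();

                    for (Message message : sqs.receiveMessage(receive).messages()) {
                        process(message.body());     // assumed business logic; must finish well within 60s

                        // Deleting the message marks it as done; if we crash before this,
                        // it becomes visible again and the other instance can pick it up.
                        sqs.deleteMessage(DeleteMessageRequest.builder()
                                .queueUrl(QUEUE_URL)
                                .receiptHandle(message.receiptHandle())
                                .build());
                    }
                }
            }
        }

        private static void process(String body) {
            System.out.println("processing: " + body);
        }
    }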

Related

Send message to consumer when connected to ActiveMQ

I have multiple instances of a worker connected to a queue, and all requests are distributed to the worker instances in a load-balanced way. When a new worker instance connects to the queue, I need to send a small data dump from the mainstream app to that new worker instance (a one-time job).
Currently I'm using a REST endpoint on the mainstream app to do this at application start-up, but can we leverage the messaging queue for this? Once a new worker instance connects to the queue, it would ask the mainstream app for the initial data dump through the queue, and the app would reply with the initial data.
Is it possible using a messaging queue/topic? Kindly share your views/suggestions on achieving this with ActiveMQ.
If you're using ActiveMQ Artemis this kind of requirement is typically fulfilled with a queue that supports both non-destructive and last-value semantics. The last-value semantics allows the queue to stay up-to-date with the latest messages and the non-destructive semantics means that even when consumers acknowledge the messages they will remain on the queue for the next client which connects. When using this combination clients can first consume all the messages from this special "initialization" queue and then continue on with whatever other messaging work they need to do.
Unfortunately ActiveMQ "Classic" doesn't support either of these semantics, and there is no straightforward way to get equivalent behavior.
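As a rough sketch of the consuming side, assuming the broker is ActiveMQ Artemis and an "initialization" queue named init.data has already been configured with last-value and non-destructive semantics (the broker URL and queue names here are made up): the worker drains the init queue first, then carries on with its normal work queue.

    import javax.jms.ConnectionFactory;
    import javax.jms.JMSConsumer;
    import javax.jms.JMSContext;
    import javax.jms.Message;
    import javax.jms.TextMessage;

    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class WorkerStartup {

        public static void main(String[] args) throws Exception {
            ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");

            try (JMSContext context = cf.createContext()) {
                // 1. Drain the special init queue. Non-destructive means the messages stay
                //    on the queue for the next worker that connects; last-value means only
                //    the most recent snapshot is kept.
                JMSConsumer initConsumer = context.createConsumer(context.createQueue("init.data"));
                Message snapshot;
                while ((snapshot = initConsumer.receive(2000)) != null) {
                    applySnapshot(((TextMessage) snapshot).getText());
                }
                initConsumer.close();

                // 2. Continue with normal work from the load-balanced queue.
                JMSConsumer workConsumer = context.createConsumer(context.createQueue("work"));
                while (true) {
                    Message work = workConsumer.receive();
                    handle(((TextMessage) work).getText());
                }
            }
        }

        private static void applySnapshot(String data) { System.out.println("init: " + data); }

        private static void handle(String data) { System.out.println("work: " + data); }
    }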

How to clear messages in IBM MQ which are stuck for more than 5 mins?

I don't want to use message expiry, as it has a dependency on the sending application, and I don't want to use pub/sub either, because if the applications don't take the messages it will fill up the filesystem, etc. I don't want messages to pile up in the queue because the application is down.
This setup is required so that there won't be any outage because of this queue and the application consuming it. Any advice?
CAPEXPRY allows the administrator to set message expiry without application changes. See https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.1.0/com.ibm.mq.ref.dev.doc/q097495_.htm
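For illustration, CAPEXPRY is set on the queue's CUSTOM attribute via MQSC and is specified in tenths of a second, so capping expiry at 5 minutes on a hypothetical queue would look something like this:

    * Cap message expiry at 3000 tenths of a second (5 minutes); the queue name is made up
    ALTER QLOCAL(APP.REQUEST.QUEUE) CUSTOM('CAPEXPRY(3000)')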

Load balance Kafka consumer multiple instances

I have a consumer that reads messages and writes them to a time-series database. We have multiple instances of the time-series database running as a cluster on multiple physical machines.
Our plan is to deploy the consumer on Kubernetes so I can scale to more instances if needed, with load balancing; they all point to the single time-series service that is running.
The issue I have run into is that if I have 5 instances consuming the same topic, they all work individually (they all get the message payload and save it, just as a single instance would).
What we want is:
If one consumer is busy, the message should go to the next free instance, but it should not be consumed by every running instance. To scale or load-balance, I want it to behave like a normal load-balanced application, the way a Spring Boot app works when you scale it on Kubernetes.
So is there any way to make the consumers load-balance, so that each message is processed only once, whether it is consumed by the 1st, 2nd or 3rd instance, just like a normal app behind a load balancer?
If anyone has ideas about this, how is it going to behave and what kind of output are we going to get if we do this with a Kafka Spring Boot application?
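If all the consumer instances are configured with the same consumer group, Kafka already gives you this behavior: each partition of the topic is assigned to exactly one member of the group, so every record is processed by only one instance, and scaling the Kubernetes deployment up or down simply triggers a rebalance. A minimal Spring Kafka sketch of that setup, with made-up topic and group names:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @SpringBootApplication
    public class MetricsConsumerApplication {
        public static void main(String[] args) {
            SpringApplication.run(MetricsConsumerApplication.class, args);
        }
    }

    @Component
    class MetricsListener {

        // All replicas use the same groupId, so Kafka assigns each partition of the topic
        // to exactly one replica and a given record is handled only once. Note that only
        // as many replicas as the topic has partitions can be active consumers at a time.
        @KafkaListener(topics = "metrics", groupId = "tsdb-writer")
        public void onMessage(String payload) {
            // assumed business logic: write the payload to the time-series database
            System.out.println("writing to TSDB: " + payload);
        }
    }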

Allow rabbitmq to process current running message before shutdown

My application is a Spring Boot microservice listening to a RabbitMQ queue.
The queue receives messages from different sources.
The requirement is that when the application server is going down (this could happen for many reasons, maybe because we brought the site down, or we are deploying updated software onto our application server) we would like the listener to finish processing the current message. As of now, we lose the message that is currently being processed.
How can I achieve this?
The default shutdownTimeout is 5000ms; you can increase it.
You should not, however, lose any messages; they should be requeued (unless you are using AcknowledgeMode.NONE, which is generally a bad idea).
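A minimal sketch of raising that timeout on a Spring AMQP listener container (the queue name and the 30-second value are placeholders); on shutdown the container waits up to that long for the in-flight message to finish, and the broker requeues anything that was not acknowledged:

    import org.springframework.amqp.core.AcknowledgeMode;
    import org.springframework.amqp.rabbit.connection.ConnectionFactory;
    import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class RabbitListenerConfig {

        @Bean
        public SimpleMessageListenerContainer ordersContainer(ConnectionFactory connectionFactory) {
            SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
            container.setQueueNames("orders");                  // placeholder queue name
            container.setAcknowledgeMode(AcknowledgeMode.AUTO); // ack only after the listener returns
            container.setShutdownTimeout(30_000);               // wait up to 30s for in-flight work (default is 5000ms)
            container.setMessageListener(message ->
                    System.out.println("processing " + new String(message.getBody())));
            return container;
        }
    }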

Multiple WebSphere Application Servers attached to a single WebSphere MQ failing

Issue:
Having multiple consumer applications' activation specifications attached to a single queue on distributed VM servers is causing a null payload in an MQ message.
Note: see the notes at the bottom. There is no issue with MQ itself.
Details:
I have 3 WebSphere applications deployed across 2 VM servers. 1 application is a publisher and the other 2 applications are consumers attached to a single MQ queue manager and queue.
The 2 consumer applications are pulling the messages off and processing them. The consumer application on the separate server receives a null payload. I have confirmed that it seems to be an issue with having multiple application server instances attached to MQ; confirmed by deploying the publisher on server 2 with consumer 2, after which consumer 1 fails.
Question:
Has anyone tried attaching multiple MDB applications deployed on separate server instances, bound to one queue manager and one queue?
Specifications:
WebSphere 7, EJB 3.0 MDBs, transactions turned off, queue installed on another machine.
Goal:
Distributed computing, scaling up against a large number of messages.
I'm thinking this is a configuration issue but I'm not 100% sure where to look. I had read that you could use MQ Link, but I don't see why I would need to use service bus integration.
Supporting Documentation:
[MQ Link][1]
UPDATE: I fixed the problem; it was related to a combination of a class loader issue and duplicate classes. See the Solution Notes I added below.
EDIT HISTORY:
- Clarified specifications, clarified the question and added the overall goal.
- Added reference notes to the solution.
Has anyone tried attaching multiple MDB applications deployed on separate server instances, bound to one local MQ?
Multiple MDB applications deployed on separate servers, connecting to one queue manager (but different queues), is a normal scenario; we have it everywhere and all applications work fine.
But I suspect what you are doing is: multiple MDB applications deployed on separate servers, connecting to one queue manager and listening to the same queue.
In this case one message will be received by one consumer only.
You need to create a separate queue for each application and create a subscription for each on the topic being published by your publisher.
Addition:
I suspect that, for load balancing, the problem you may be facing is this: when your first application gets the message, it doesn't issue a commit. So there is an uncommitted message on the queue, which may be stopping your other application from getting a message from the queue. When your first application finishes its processing it issues a commit, but then it is again ready to pick up a message and hence issues another get.
In my architecture, we have implemented load balancing using multiple queue managers, like below:
1. Create 3 queue managers, say GatewayQM, App1QM and App2QM.
2. Keep the three queue managers in the same cluster.
3. Create an alias queue (shared in the cluster) on GatewayQM and have your putting app put messages on the gateway queue.
4. Create one local cluster queue in each of App1QM and App2QM. Read from these queues via your applications App1 and App2 respectively.
This implementation gives you better security and serves as a good load balancer.
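For illustration only, the gateway pattern described above might be defined with MQSC commands roughly like the following (the cluster, queue manager and queue names are all made up; DEFBIND(NOTFIXED) lets the cluster workload algorithm spread messages across the application queue managers):

    * On App1QM and on App2QM: a local queue advertised in the cluster
    DEFINE QLOCAL(APP.WORK.QUEUE) CLUSTER(DEMO.CLUSTER)

    * On GatewayQM: the alias the putting application opens
    DEFINE QALIAS(GATEWAY.QUEUE) TARGET(APP.WORK.QUEUE) CLUSTER(DEMO.CLUSTER) DEFBIND(NOTFIXED)
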
This specific problem was caused by a code issue in combination with class loading being set to "Parent First" in the WebSphere console. On one node it would work and the other nodes in the cluster would fail; I think this was caused by the "Parent First" setting.
More importantly, in terms of my configuration, binding multiple activation specifications in a cluster to a single queue to provide distributed computing is a correct solution.
However, "points" do go to the solution referenced above by "nitgeek" if you are looking for an extremely high-volume solution. It's important to understand that a single queue can have a very high depth and it takes a lot to fully utilize one. My current configuration is a good starting point for quick configuration and distributed processing using multiple MDBs.
