Usage of JMS to call an API which delivers a message

I would like to know whether using JMS in the scenario below is feasible.
I am adding a feature that calls an API service which dispatches emails to customers.
My idea is to introduce JMS into my application: I would put the events or messages on a queue and write a listener in the same application that processes each message and makes the REST call to the service that dispatches the email to the customer.
My question is: is it a good idea to have JMS sitting between our application and the REST call?
Or should I call the REST API directly to dispatch the messages to the customer?

I think that depends on the availability and overhead of your REST service.
If you know there will be times when the service is down, but you don't want that to impact the process using the API, then a JMS queue makes sense.
Likewise, if the REST service is becoming a bottleneck, or you want to queue the messages somewhere they can survive an outage on your own side, JMS with a provider that supports persistent messages makes sense.
Using JMS would also open the door to completely decoupling the two: whatever application hosts the REST service could just as easily be converted to pull messages from the JMS queue directly, with no REST call at all, if that turned out to be more efficient.
Those are a few ways you could justify using JMS in this scenario.
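A minimal sketch of that queue-plus-listener setup with Spring Boot's JmsTemplate and @JmsListener; the queue name, payload format, and email-service URL are made-up placeholders, and it assumes a Spring Boot app with a JMS starter on the classpath so the listener infrastructure is auto-configured:

```java
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

// Producer side: the application drops an "email requested" message on a queue
// instead of calling the email REST service inline.
@Service
class EmailEventPublisher {

    private final JmsTemplate jmsTemplate;

    EmailEventPublisher(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    void publish(String payload) {
        jmsTemplate.convertAndSend("email.requests", payload); // hypothetical queue name
    }
}

// Listener in the same application: picks the message up asynchronously
// and makes the REST call that actually dispatches the email.
@Component
class EmailRequestListener {

    private final RestTemplate restTemplate = new RestTemplate();

    @JmsListener(destination = "email.requests")
    void onMessage(String payload) {
        // Hypothetical email-dispatch endpoint; swap in the real service URL.
        restTemplate.postForEntity("https://email-service/api/dispatch", payload, Void.class);
    }
}
```

With this shape the caller returns as soon as the message is on the queue, and a failed REST call can be retried or dead-lettered by the broker instead of bubbling up into the original request.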

Related

Event Driven Architecture on Multi-instance services

We are using a microservice architecture in our project. We deploy each service to a cluster using Kubernetes. Services are developed in Java with the Spring Boot framework. Three replicas exist for each service. Services communicate with each other using only events, with RabbitMQ as the message queue. One of the services is used to send emails; the details of an email are provided by another service via an event. When a SendingEmail event is published by a service, all three replicas of the email service consume the event and the same email is sent three times.
How can I prevent the other two replicas from sending the email?
I think it depends on how you work with RabbitMQ.
You can configure RabbitMQ with a single queue for these events and make the Spring Boot applications that do the sending act as competing consumers on that queue.
Configured like this, only one replica receives a given event, and only if it fails to process it will the message be returned to the queue and become available to the other consumers.
From what you've described, all of them are currently getting the message, so it works like pub-sub (which is also a valid way of working with RabbitMQ, it's just not what you want in this case).
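A minimal sketch of the competing-consumers setup with Spring AMQP; the queue name is an illustrative assumption, and the binding to whatever exchange the publisher uses is omitted:

```java
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;

@Configuration
class EmailQueueConfig {

    // Every replica declares the SAME named, durable queue, so all three
    // replicas consume from one queue and compete for its messages.
    @Bean
    Queue sendEmailQueue() {
        return new Queue("send-email-queue", true); // hypothetical queue name
    }
}

@Component
class SendEmailListener {

    // RabbitMQ delivers each message to exactly one consumer of the queue,
    // so only one replica sends a given email; an unacknowledged failure
    // puts the message back for another replica to pick up.
    @RabbitListener(queues = "send-email-queue")
    void handle(String sendingEmailEvent) {
        // ... send the email ...
    }
}
```

The triple send you are seeing typically means each replica has its own (often auto-named) queue bound to a fanout exchange; pointing all replicas at one shared queue is what switches the behaviour from pub-sub to competing consumers.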

REST API command with event driven choreography

I'm trying to design a system in an event-driven architecture style, while also exposing a REST API to send commands/queries. I decided to use Kafka as the message broker.
The choreography I'm trying to design is the following:
The part that is very obscure to me is how to implement event joins:
billing-service should start creating the user only when it receives the user creation event (1) and the account has been created (2)
api-gateway should return the result to the client only when both account and billing service have finished their processing (2 and 3)
I know I could use other protocols on the client side (e.g. WebSockets), but I'd prefer not to because I will need to expose this API to third parties. I could also make an async client call and poll to check whether the request has completed, but that seems very complex to manage.
What is the suggested way of implementing such an interaction?
p.s. I'm using Spring Boot and Spring Cloud Stream.
Request/reply messaging on the client side is possible with spring-cloud-stream, but it's a bit involved because it wasn't designed for that; it's intended for unidirectional stream processing.
You would be better off using spring-kafka (ReplyingKafkaTemplate) or spring-integration-kafka (Outbound Gateway) for request/reply on the client side.
On the service side you can use a @StreamListener (spring-cloud-stream), a @KafkaListener, or a spring-integration inbound gateway.
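A minimal sketch of the request/reply side with spring-kafka's ReplyingKafkaTemplate; the topic names are made up, and it assumes a ReplyingKafkaTemplate bean has already been configured with a reply-listener container:

```java
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;
import org.springframework.kafka.requestreply.RequestReplyFuture;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

// Client side (e.g. the api-gateway): blocks until the correlated reply arrives.
@Service
class UserCreationClient {

    private final ReplyingKafkaTemplate<String, String, String> template;

    UserCreationClient(ReplyingKafkaTemplate<String, String, String> template) {
        this.template = template;
    }

    String createUser(String userJson) throws Exception {
        // "user.requests" is a hypothetical request topic; the reply topic and
        // correlation id are carried in headers managed by the template.
        ProducerRecord<String, String> record = new ProducerRecord<>("user.requests", userJson);
        RequestReplyFuture<String, String, String> future = template.sendAndReceive(record);
        ConsumerRecord<String, String> reply = future.get(30, TimeUnit.SECONDS);
        return reply.value();
    }
}

// Service side: the return value is published to the reply topic named in the request headers.
@Component
class UserCreationHandler {

    @KafkaListener(topics = "user.requests")
    @SendTo
    String handle(String userJson) {
        // ... kick off the choreography and wait for account + billing events ...
        return "{\"status\":\"CREATED\"}";
    }
}
```

The join itself (waiting for both the account and billing events before replying) still has to be implemented in the handler, e.g. by correlating the two events in some state store before completing the reply.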

How should you handle the retry of sending a JMS message from your application to ActiveMQ if the ActiveMQ server is down?

So using JMS and ActiveMQ, I can be sure that a message sent from my Spring Boot application using JmsTemplate will reach its destination application even if that application is down at the time I send the message to ActiveMQ: when the destination application starts up, it grabs the message from the queue. Great!
However.
What happens if my Spring Boot application tries to send a JMS message to a queue on the ActiveMQ server, but the ActiveMQ server is down at that point or the network is down and I get a connection refused exception?
What is the recommended way to make sure my application keeps retrying the send to ActiveMQ until it succeeds? Is this something I have to develop in my application myself? Are there any nifty Spring tools or annotations which do this for me? Any advice on best practice or how I should handle this scenario?
You can try Spring Retry. It has lots of fine-grained controls for this:
http://www.baeldung.com/spring-retry
https://github.com/spring-projects/spring-retry
If it is critical that you don't lose the message, you will want to save it to some alternative persistent store (e.g. the filesystem or a local MQ server) along with whatever retry code you come up with. But for occasional network glitches or a very brief MQ shutdown/restart, Spring Retry alone should do the trick.
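A minimal sketch of Spring Retry wrapped around a JmsTemplate send; the queue name, retry settings, and the fallback store in @Recover are illustrative assumptions:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.JmsException;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.EnableRetry;
import org.springframework.retry.annotation.Recover;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;

@Configuration
@EnableRetry
class RetryConfig {
}

@Service
class ResilientJmsSender {

    private final JmsTemplate jmsTemplate;

    ResilientJmsSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    // Retry up to 5 times with exponential backoff when the broker is unreachable.
    @Retryable(value = JmsException.class, maxAttempts = 5,
               backoff = @Backoff(delay = 2000, multiplier = 2.0))
    public void send(String payload) {
        jmsTemplate.convertAndSend("email.requests", payload); // hypothetical queue name
    }

    // Invoked once all attempts are exhausted: park the message somewhere durable
    // (filesystem, database, local broker) so it is not lost.
    @Recover
    public void recover(JmsException e, String payload) {
        // ... persist payload to a fallback store for later redelivery ...
    }
}
```

Note that @Retryable only kicks in when send() is called through the Spring proxy, i.e. from another bean, not from within the same class.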
A couple of approaches I can think of:
1. You can set up another ActiveMQ as a fallback. In your code you don't have to change anything except the broker URL, from
activemq.broker.url=tcp://amq01.blah.blah.com:61616
to
activemq.broker.url=failover:(tcp://amq01.blah.blah.com:61616,tcp://amq02.blah.blah.com:61616)?randomize=false
The rest is taken care of automatically, i.e. when one broker is down, the messages are sent to the other (see the connection-factory sketch below).
2. Another approach is to send to an internal queue (like seda or direct) when ActiveMQ is down and read from there.
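For the failover approach, a sketch of how the URL might be wired into the connection factory that JmsTemplate uses, assuming the ActiveMQ client library and placeholder host names:

```java
import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class BrokerConfig {

    // With the failover transport the client reconnects to whichever broker is
    // reachable and, by default, blocks sends until a connection is re-established.
    @Bean
    ConnectionFactory connectionFactory() {
        return new ActiveMQConnectionFactory(
                "failover:(tcp://amq01.blah.blah.com:61616,tcp://amq02.blah.blah.com:61616)?randomize=false");
    }
}
```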
Adding failover to the URL is one appropriate way.
Another reasonable way is to make sure ActiveMQ is always online: ActiveMQ has a master-slave mode (http://activemq.apache.org/masterslave.html) for high availability.

Spring without views

I am new to Spring and so am not sure whether what I intend to do is possible.
I need to create an asynchronous webservice and a worker server (broker), both using the model and controller aspects of Spring.
The webservice needs to send its clients' requests on to the broker via JMS and then instantly send a response back to the client indicating the request has been queued.
The broker is intended to remain live, processing messages from multiple webservice instances and sending back the results via an output JMS queue. The reason the broker needs to remain live is because the work to process each webservice message involves calling other webservices, some of which may be asynchronous and which may take a lot of time to process.
Additionally I do not want to spawn multiple instances of the broker as it is designed to handle multiple concurrent messages.
Is it possible to create both the webservice and the broker within the same Spring project, with both running in a web container such as Tomcat, or do I need to code them as separate projects, perhaps with the broker as a traditional standalone server rather than a web container servlet?
If so, could someone point me in the right direction for creating a stay-alive broker within Spring/Tomcat.
I understand the webservice and JMS side of things, so do not need any help with that.

How to programmatically defer JMS topic message consumption using Spring

I have an application that consumes messages from a JMS topic. As part of the normal application flow it needs to periodically cease consumption of messages. While the application is in this state new messages are stored in the topic (note that my application is still running). Later the application resumes message consumption, also receiving those messages that were placed on the topic while the application wasn't listening.
This functionality is currently achieved by creating and disposing of connections from a ConnectionFactory. However, I now wish to migrate the application to Spring JMS. Although Spring rather neatly abstracts away much of the JMS boilerplate, I no longer appear to have fine-grained control over the underlying connection and hence cannot halt message consumption on demand.
Before I try to wade through Spring JMS internals, can anyone suggest a neat way of doing this?
Can you just avoid returning from onMessage()? How long do you want to stop consumption? Is your problem similar to https://stackoverflow.com/a/628337/20734
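If pausing and resuming the Spring-managed listener is enough (an option not mentioned in the answer above, so treat it as an assumption), the listener container can be stopped and restarted through JmsListenerEndpointRegistry; with a durable topic subscription the broker keeps messages published while the container is stopped. A minimal sketch with a made-up listener id and destination:

```java
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.config.JmsListenerEndpointRegistry;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

@Component
class TopicListener {

    // Hypothetical listener id and topic; the container factory should be set up
    // for a durable subscription so messages accumulate while consumption is paused.
    @JmsListener(id = "topicListener", destination = "events.topic")
    void onMessage(String message) {
        // ... process the message ...
    }
}

@Service
class ConsumptionToggle {

    private final JmsListenerEndpointRegistry registry;

    ConsumptionToggle(JmsListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    void pause() {
        registry.getListenerContainer("topicListener").stop();
    }

    void resume() {
        registry.getListenerContainer("topicListener").start();
    }
}
```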

Resources