I want to write a web application, wherein I want to send emails asynchronously.
I am planning to use a JMS queue to hold the requests to send the emails.
The consumer will pick the messages and call the APIs to send the emails.
Another option is to use the @Asynchronous annotation for sending the emails.
Which is a better option?
The SMTP server will have a queuing mechanism purpose-built for delivering email. Unless you need some particular feature of JMS, I would just use @Asynchronous. Otherwise, you're reinventing the wheel and potentially adding bugs to the process.
Unless you have a specific bean implementing the email logic, don't use JMS queues to send emails asynchronously. Using @Asynchronous is a good option instead, or you can implement the email logic in a new thread.
Refer to this post for more details
How to send email in java using asynchronous API
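For comparison, here is a minimal sketch of the @Asynchronous approach. It assumes an EJB container and a mail session bound under an illustrative JNDI name; neither comes from the original question.

```java
import javax.annotation.Resource;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

@Stateless
public class MailService {

    @Resource(mappedName = "java:/mail/Default") // assumed JNDI name
    private Session mailSession;

    // The caller returns immediately; the container runs this method on a
    // managed thread, and the SMTP server's own queue handles delivery.
    @Asynchronous
    public void send(String to, String subject, String body) {
        try {
            MimeMessage message = new MimeMessage(mailSession);
            message.setRecipient(javax.mail.Message.RecipientType.TO,
                    new InternetAddress(to));
            message.setSubject(subject);
            message.setText(body);
            Transport.send(message);
        } catch (Exception e) {
            // In a real app, log and decide on a retry policy here.
            throw new IllegalStateException("Mail delivery failed", e);
        }
    }
}
```

This keeps the delivery logic in one bean, which also matches the advice above: only reach for JMS once you need its features (persistence, redelivery, routing).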
I'm new to microservices architecture and want to create a centralised notification microservice to send emails/sms to users.
My first option was to create a notification Kafka queue where all other microservices can send notifications to. The notification microservice would then listen to this queue and send messages accordingly. If the notification service was restarted or taken down, we would not lose any messages as the messages will be stored on the queue.
My second option was to add a notification message API on the notifications microservice. This would make it easier for all other microservices, as they just have to call an API as opposed to integrating with the queue. The API would then internally put the message on the notification Kafka queue. The only issue here is that if the API is not available or there is an error, we will lose messages.
Any recommendations on the best way to handle this?
Either works. Some concepts that might help you decide:
A service that fronts "Kafka" would be helpful to:
Hide the implementation. This gives you the flexibility to change Kafka out later for something else. Your wrapper API would only respond with a 200 once it has put the notification request on the queue. I also see giving services direct access to "your" queue similar to allowing services to directly interact with a database they don't own. If you allow direct-access to Kafka and Kafka proves to be inadequate, a change to Kafka will require all of your clients to change their code.
Enforce the notification request contract (ensure the body of the request is well-formed). If you want to make sure that all of the items put on the queue are well-formed according to contract, an API can help enforce that. That will help prevent issues later when the "notifier" service picks notifications off the queue to send.
Adding a wrapper API would be less desirable if:
You don't want to/can't spend the time. Maybe deadlines are driving you to hurry and the days it would take to stand up a wrapper is just too much.
You are a small team and you don't have the resources/tools/time for service-explosion.
Your first design is simple and will work. If you're looking for the advantages I outlined, then consider your second design. And, to make sure I understand it, I would see it unfold like:
Client 1 needs to put out a notification and calls Service A POST /notifications
Service A that accepts POST /notifications
Service A checks the request, puts it on Kafka, responds to client with 200
Service B picks up notification request from Kafka queue.
Service A should be run as multiple instances for reliability.
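The flow above can be sketched as code. This is a hypothetical, simplified Service A: it enforces the contract, puts the message on the queue, and only then returns 200. The queue is simulated with an in-memory deque so the sketch is self-contained; in the real service that line would be a `KafkaProducer.send(...)` followed by waiting for the broker's acknowledgement.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class NotificationApi {
    private final Deque<String> queue = new ArrayDeque<>(); // stand-in for Kafka

    // Contract enforcement: reject malformed requests before they reach
    // the queue, so the "notifier" service never sees bad messages.
    static boolean isWellFormed(String recipient, String body) {
        return recipient != null && !recipient.isBlank()
            && body != null && !body.isBlank();
    }

    // Returns the HTTP status Service A would send to the client.
    public int postNotification(String recipient, String body) {
        if (!isWellFormed(recipient, body)) {
            return 400; // malformed request never reaches the queue
        }
        queue.add(recipient + ":" + body); // real code: producer.send(...).get()
        return 200; // acknowledged only after the message is on the queue
    }

    public static void main(String[] args) {
        NotificationApi api = new NotificationApi();
        System.out.println(api.postNotification("user@example.com", "Hello"));
        System.out.println(api.postNotification("", "missing recipient"));
    }
}
```

Because clients only see the HTTP contract, Kafka can later be swapped out without any client changing its code.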
I'm using Spring Boot 2.0.7 and Spring-Kafka to create a request/reply pattern. Basically the frontend UI makes a request to an API which puts a message on to a request Kafka queue, the message is processed by a backend process and when complete a message is put onto a reply queue.
I want to provide the frontend UI an api which waits until the response is ready. The UI in this time will just show a processing message. If the response is not available (e.g. after 2 minutes), the API should just return a message not available error where we can instruct the user to come back later.
I'm a bit new to Spring-Kafka. Does it allow me to create a polling API? If so, any example code would be very much appreciated.
It's not as simple as polling a topic for a reply because you have to correlate requests/replies.
You can use ReplyingKafkaTemplate.sendAndReceive() and keep checking the isDone() method on the Future<?>.
If you want to poll yourself, you would have to create a consumer object from the consumer factory.
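The correlation problem that ReplyingKafkaTemplate solves internally can be sketched without a broker: the API registers a pending future under a correlation id when it publishes the request, and the reply-queue listener completes the matching future when the backend answers. All names here are illustrative.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class ReplyCorrelator {
    private final Map<String, CompletableFuture<String>> pending =
            new ConcurrentHashMap<>();

    // Called by the API when it publishes a request message.
    public CompletableFuture<String> register(String correlationId) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        return future;
    }

    // Called by the reply-queue listener for each incoming reply.
    public void onReply(String correlationId, String payload) {
        CompletableFuture<String> future = pending.remove(correlationId);
        if (future != null) {
            future.complete(payload);
        } // else: late or unknown reply; drop or log it
    }

    public static void main(String[] args) throws Exception {
        ReplyCorrelator correlator = new ReplyCorrelator();
        String id = UUID.randomUUID().toString();
        CompletableFuture<String> future = correlator.register(id);

        // Simulate the backend posting a reply on the reply queue.
        correlator.onReply(id, "done");

        // The API can poll isDone() or block with a timeout; on timeout it
        // returns the "not available, come back later" error to the UI.
        System.out.println(future.get(2, TimeUnit.SECONDS));
    }
}
```

With a 2-minute timeout, `future.get(2, TimeUnit.MINUTES)` throwing `TimeoutException` is the point where the API returns the "message not available" response.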
I am new to Microservices and have a question with RabbitMQ / EasyNetQ.
I am sending messages from one microservice to another microservice.
Each microservice is a Web API. I am using CQRS, where my Command Handler would consume messages off the queue and do some business logic. In order to call the handler, it will need to make a request to the API method.
I would like to consume messages without having to explicitly call the API endpoint. Is there an automated way of doing this?
One suggestion could be to create a separate solution, a console app, that starts a RabbitMQ connection and listens in a while loop, reading messages and then calling the Web API endpoint to handle the business logic every time a new message arrives on the queue.
My aim is to create a listener or a startup task so that once messages are in the queue, they are automatically picked up and passed on to the command handler, but I'm not sure how to do the "automatic" part as I described it. I was thinking of utilising an Azure WebJob that runs continuously and acts as the consumer.
Looking for a good architectural way of doing it.
Programming language being used is C#
Much Appreciated
The recommended way of hosting a RabbitMQ subscriber is by writing a Windows service, using something like the Topshelf library, and subscribing to bus events inside that service on its start. We did that in multiple projects with no issues.
If you are using Azure, the best place to host RabbitMQ subscriber is in a "Worker Role".
I am using CQRS where my Command Handler would consume message off the Queue and do some business logic. In order to call the handler, it will need to make a request to the API method.
Are you sure this is real CQRS? CQRS is when you handle queries and commands differently in your domain logic. Receiving a message via a class called CommandHandler and just reacting to it is not yet CQRS.
My aim is to create a listener or a startup task where once messages are in the queue it will automatically pick it up from the Queue and continue with command handler but not sure how to do the "Automatic" way as i describe it. I was thinking to utilise Azure Webjob that will continuously be running and it will act as the Consumer. Looking for a good architectural way of doing it.
The simpler you do that, the better. Don't go searching for complex solutions until you have tried all the simple ones. When I was implementing something similar, I just ran a pool of message handler scripts using Linux cron. A handler popped a message off the queue, processed it, and terminated. Simple.
I think that, using the CQRS pattern, you will have events as well, with corresponding event handlers. As you are using RabbitMQ for asynchronous communication between command and query, any message put on a specific channel on RabbitMQ can be picked up by a callback method.
Receiving messages from the queue is more complex. It works by subscribing a callback function to a queue. Whenever we receive a message, this callback function is called by the Pika library.
I have a list of email addresses (a long one). I want to write a scheduler which sends emails periodically.
I read the email addresses from a database.
I send messages to these addresses.
As I see it, for good performance it would be good to use a JMS Topic for this.
In the documentation I read that a Topic sends messages to all clients. Could you explain what "client" means in this case? In my opinion, with my example, they are the owners of the email addresses, and my system will send the message text to those owners (clients). Is that right?
No, in this context, "all clients" means all java processes that have an open subscription to the topic.
You would need to write code to convert from JMS to Email (and send it). Frameworks like Spring Integration can be used for this, it does all the heavy lifting for you; you would simply wire a JMS message-driven-channel-adapter to receive the message from a queue (not a topic), do a JDBC query to get the emails, then send them via a mail outbound-channel-adapter.
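A Spring Integration flow would be the least code, but the "JMS to email" conversion the answer describes can also be sketched with plain JMS and JavaMail. The queue wiring, mail session, and the `AddressDao` are assumptions for illustration, not from the original answer.

```java
import java.util.List;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

// Listener on the request queue: one JMS message in, N emails out.
public class MailDispatchListener implements MessageListener {

    // Hypothetical DAO wrapping the JDBC query for the address list.
    public interface AddressDao { List<String> loadAddresses(); }

    private final Session mailSession;
    private final AddressDao dao;

    public MailDispatchListener(Session mailSession, AddressDao dao) {
        this.mailSession = mailSession;
        this.dao = dao;
    }

    @Override
    public void onMessage(Message message) {
        try {
            String text = ((TextMessage) message).getText();
            // e.g. SELECT email FROM subscribers
            for (String to : dao.loadAddresses()) {
                MimeMessage mail = new MimeMessage(mailSession);
                mail.setRecipient(javax.mail.Message.RecipientType.TO,
                        new InternetAddress(to));
                mail.setText(text);
                Transport.send(mail);
            }
        } catch (Exception e) {
            // Rethrowing lets the container trigger redelivery, depending
            // on acknowledgement mode and configuration.
            throw new RuntimeException(e);
        }
    }
}
```

Note the destination is a queue, not a topic: each scheduled trigger should be processed once, not once per subscriber process.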
Read the project documentation for more information (there's a link to it from the project page link above).
I'm trying to understand the principles of HornetQ as well as core/JMS messaging using this solution.
In my experimental app, I'd like my end-user application(client) to send messages to a HornetQ which will be read by a backend app. So far this is no problem and I love HornetQ.
But now, i'd like to send an "answer" message from the backend app back to the end-user. For this, I have the condition that no other client app should be able to read the answer message (let's say it contains the current bank balance). So user A should only fetch messages for himself and the same applies to any other user.
Is this possible using HornetQ? If so, how do I have to do it?
With HornetQ (or any other messaging system) you always send to a queue, not to a specific consumer.
In this case you have to create a queue matching each client.
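One common way to keep replies private per requester, assuming plain JMS 2.0 against HornetQ (class and queue names here are illustrative): the client creates a TemporaryQueue, sets it as JMSReplyTo on the request, and only that client's connection can consume from it, so no other user can read the bank balance.

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;

public class BankBalanceClient {

    public String requestBalance(ConnectionFactory factory, Queue requestQueue,
                                 String accountId) throws JMSException {
        try (JMSContext ctx = factory.createContext()) {
            // Temporary queue: scoped to this connection and invisible to
            // other clients, so only this user sees the answer.
            TemporaryQueue replyQueue = ctx.createTemporaryQueue();

            TextMessage request = ctx.createTextMessage(accountId);
            request.setJMSReplyTo(replyQueue);
            ctx.createProducer().send(requestQueue, request);

            // Wait up to 5 seconds for the backend's answer.
            Message reply = ctx.createConsumer(replyQueue).receive(5000);
            return reply == null ? null : ((TextMessage) reply).getText();
        }
    }
}
```

On the backend, the consumer reads `message.getJMSReplyTo()` and sends the answer to that destination, so it never needs to know about individual users' queues.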
This answer will give you some feedback on the request-response pattern, so I don't need to repeat it here:
Synchronous request-reply pattern in a Java EE container