I am implementing an event-driven microservice architecture. Imagine the following scenario:
Chat service: Ability to see conversations and send messages. Conversations can have multiple participants.
Registration-login service: Deals with the registration of new users, and login.
User service: Getting/updating user profiles.
The registration-login service emits the following events (registration-new carries the newly created user object):
registration-new
login-success
logout-success
The chat service then listens on registration-new and stores some fields of the user in its own Redis cache. It also listens on login-success to store the token, and on logout-success to delete it.
The user service emits the following event: user-updated. When this is fired, a listener in the chat service updates the data for that user ID in Redis. Like the chat service, the user service also listens on login-success and logout-success and handles them the same way.
My question is the following: is this a good way to do this? It feels a bit counterintuitive to be sharing data everywhere. I need some advice on this. Thank you!
It seems there's no other way. Microservices architecture places a lot of emphasis on avoiding data sharing so as not to create dependencies. That means each microservice will have some data duplicated. It also means there must be a way of getting data from other contexts. The preferred methods strive for eventual consistency, such as sending messages to event-sourcing or AMQP systems and subscribing to them. You can also use synchronous methods (RPC calls, distributed transactions). That creates additional technological dependencies, but if you cannot accept eventual consistency it may be the only way.
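To make that concrete, here is a minimal sketch of the event-consuming side in the chat service, assuming Kafka via kafka-python and redis-py (the topic names and user fields are just illustrative, not part of your setup):

import json

from kafka import KafkaConsumer  # assumption: events are delivered over Kafka topics
import redis

# Local cache owned by the chat service (assumed host/port).
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

consumer = KafkaConsumer(
    "registration-new", "user-updated",          # illustrative topic names
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for event in consumer:
    user = event.value
    # Duplicate only the fields the chat service actually needs.
    cache.hset("user:" + str(user["id"]), mapping={
        "name": user.get("name", ""),
        "avatar": user.get("avatar", ""),
    })

The duplication is deliberate: the chat service can render conversations even if the user service is down, at the cost of eventual consistency.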
Say I design a microservice architecture for a payment system. A user clicks the pay button in the UI, which sends a request to an API gateway, and the gateway starts a chain of API calls to a few microservices. How do I handle the case when one of the microservices is down or not responding in the middle of the chain of calls?
I want the user to think that his payment has been successful and not to return him a "try again later". Can I save the state of the chain somewhere?
For such cases, asynchronous communication is preferable to synchronous communication.
In this case:
When the client sends a request to the system, the API gateway takes the request and delegates it to the corresponding microservice. After that, this microservice sends an event to the other related microservices. Generally a message broker is used for this: messages are stored in the broker, so even if the consumer (subscriber) microservice is down, the message will not be lost.
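As a rough sketch of the producing side, assuming Kafka via kafka-python (the topic name and payload are illustrative): the service acknowledges the user as soon as the broker has durably stored the event, not when the whole chain has finished.

import json

from kafka import KafkaProducer  # assumption: Kafka as the message broker

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def handle_pay_request(order_id, amount):
    # Publish the intent; downstream services consume it whenever they are up.
    future = producer.send("payment-requested", {    # illustrative topic
        "order_id": order_id,
        "amount": amount,
    })
    future.get(timeout=10)  # block until the broker confirms the write
    # Tell the user the payment was accepted, not that every step finished.
    return {"status": "accepted"}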
You can also send events directly from api gateway to message broker. (see: https://microservices.io/patterns/apigateway.html)
For achieving atomicity and consistency (eventual consistency, of course), the SAGA pattern can be applied. You can check this for more information.
But if your requirement is to call a microservice to get some data immediately, and that microservice is down, then this solution will not work. You should avoid this kind of coupling between microservices by design. In my opinion, this is one of the most challenging parts of microservices architecture. Domain-Driven Design techniques can be used to determine bounded contexts.
This is my first attempt to implement a microservice architecture using events with Kafka.
I'm having trouble figuring out how to check user credentials via an event.
My application is simple:
a service that controls users with email and passwords, able to create, edit and delete them.
a service that sends emails from those users.
My idea is to create an event with a JSON payload like:
{
  "status": "sendEmail",
  "message": {
    "sender": "abc@zxy.com",
    "password": "123456",
    "recipient": "jkl@asd.com",
    "content": "this is my email's body"
  }
}
Once I create this event in the second service, how can I validate via an event that the user exists in the first service? I could easily do this with REST communication, but I would like to find out how to communicate responses between services with event messages.
Thanks.
You would need to either cache all user accounts in the second service (by consuming all user-topic records), or perform an external lookup upon consuming email records. Messaging and RESTful services aren't necessarily mutually exclusive.
FWIW, at least encrypt passwords before sending them over unsecured/plaintext topics.
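As a rough sketch of the first option (caching user accounts), assuming kafka-python and illustrative topic/field names, the email service could keep a local table of users and check it before sending:

import json

from kafka import KafkaConsumer

users = {}  # local cache of user accounts, keyed by email

consumer = KafkaConsumer(
    "users", "sendEmail",                 # illustrative topic names
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",         # replay the user topic from the beginning
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for record in consumer:
    if record.topic == "users":
        user = record.value
        users[user["email"]] = user       # build/refresh the local cache
    else:
        msg = record.value["message"]
        sender = users.get(msg["sender"])
        # Per the caveat above, compare hashed credentials in a real system.
        if sender is not None and sender["password"] == msg["password"]:
            print("sending mail from " + msg["sender"] + " to " + msg["recipient"])
        else:
            print("rejected: unknown sender or bad credentials")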
I'm new to microservices architecture and want to create a centralised notification microservice to send emails/sms to users.
My first option was to create a notification Kafka queue where all other microservices can send notifications to. The notification microservice would then listen to this queue and send messages accordingly. If the notification service was restarted or taken down, we would not lose any messages as the messages will be stored on the queue.
My second option was to add a notification message API to the notifications microservice. This would make it easier for all other microservices, as they would just have to call an API as opposed to integrating with the queue. The API would then internally publish the message to the notification Kafka queue. The only issue here is that if the API is not available or there is an error, we will lose messages.
Any recommendations on the best way to handle this?
Either works. Some concepts that might help you decide:
A service that fronts "Kafka" would be helpful to:
Hide the implementation. This gives you the flexibility to change Kafka out later for something else. Your wrapper API would only respond with a 200 once it has put the notification request on the queue. I also see giving services direct access to "your" queue as similar to allowing services to directly interact with a database they don't own. If you allow direct access to Kafka and Kafka proves to be inadequate, a change to Kafka will require all of your clients to change their code.
Enforce the notification request contract (ensure the body of the request is well-formed). If you want to make sure that all of the items put on the queue are well-formed according to contract, an API can help enforce that. That will help prevent issues later when the "notifier" service picks notifications off the queue to send.
Adding a wrapper API would be less desirable if:
You don't want to/can't spend the time. Maybe deadlines are driving you to hurry and the days it would take to stand up a wrapper is just too much.
You are a small team and you don't have the resources/tools/time for service-explosion.
Your first design is simple and will work. If you're looking for the advantages I outlined, then consider your second design. And, to make sure I understand it, I would see it unfold like:
1. Client 1 needs to send a notification and calls Service A's POST /notifications endpoint.
2. Service A accepts the POST /notifications request.
3. Service A checks the request, puts it on Kafka, and responds to the client with a 200.
4. Service B picks up the notification request from the Kafka queue.
Service A should be run as multiple instances for reliability.
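A minimal sketch of that wrapper (Service A), assuming Flask and kafka-python; the endpoint path, topic name, and required fields are all illustrative:

import json

from flask import Flask, jsonify, request
from kafka import KafkaProducer

app = Flask(__name__)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

@app.post("/notifications")
def create_notification():
    body = request.get_json(force=True)
    # Enforce the notification contract before anything reaches the queue.
    missing = [f for f in ("recipient", "message") if f not in body]
    if missing:
        return jsonify({"error": "missing fields: " + ", ".join(missing)}), 400
    # Only acknowledge once the broker has durably stored the record.
    producer.send("notifications", body).get(timeout=10)
    return jsonify({"status": "queued"}), 200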
Martin Fowler's description of the Event Collaboration pattern (https://martinfowler.com/eaaDev/EventCollaboration.html) appears to imply that requisite external data (data from other services) that is needed for a service to function should be replicated and maintained within the service.
This seems to imply that we should not resort to issuing explicit queries.
For example:
Say you have a communications service that is responsible for sending emails to clients and is dependent on order information (which lives in the order service) to send an order confirmation email.
With Event Collaboration, the communications service will have some internal representation of all orders that it will have built up by consuming relevant order creation/modification events.
In this example a query to retrieve order details will not be necessary to generate the confirmation email.
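A minimal sketch of that internal representation, assuming order events arrive on a Kafka topic (topic, event, and field names are illustrative):

import json

from kafka import KafkaConsumer

orders = {}  # the communications service's own replica of order data

consumer = KafkaConsumer(
    "order-events",                       # illustrative topic
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for event in consumer:
    order = event.value
    # Apply creations and modifications to the local replica.
    orders[order["order_id"]] = order
    if order.get("type") == "OrderPlaced":
        # No query needed: everything for the email is already local.
        print("send confirmation for order " + str(order["order_id"]))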
Are there any instances in which we would use explicit query messages rather than data replication when adopting the Event Collaboration pattern?
I think even in this case, what I would do is create a consumer of the OrderPlaced event in the Order microservice itself. That event processor would read all the details from the order, create a MailToBeSent event, and write it to a topic or queue, which the Communication Service listens to in order to send the email.
The Communication Service should not understand how to create an email based on an order (the core purpose of the Communication Service is to send emails).
Design-wise, the Communication Service should also not have to change every time you add a new service that wants mail-sending functionality.
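A minimal sketch of that translation step inside the Order service, assuming Kafka and illustrative event/field names:

import json

from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "OrderPlaced",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for event in consumer:
    order = event.value
    # The Order service owns the knowledge of how to phrase this email.
    producer.send("MailToBeSent", {
        "to": order["customer_email"],
        "subject": "Order " + str(order["order_id"]) + " confirmed",
        "body": "Thanks! Your total is " + str(order["total"]) + ".",
    })

With this split, a new service that wants emails only has to produce its own MailToBeSent events; the Communication Service never changes.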
The Problem
We are currently architecting our new Notification Microservice but are having trouble with how to handle aggregated emails. Instead of sending one email for every action performed (there could be 20+ in a few minutes), we want to send an email after an hour summarising all the actions that were completed.
What We Have So Far
So far, we propose the following messaging pattern, where Client Service is any service in our cluster and Messagebot is our Notification Microservice:
1. Client Service sends a notification to Messagebot that it will need to send something in the future
2. Messagebot stores the details in its database
3. Messagebot periodically checks its database for what needs to be sent
4. Messagebot gets the required data from another service (could be Client Service) via API
5. Messagebot sends an email using the data from #4 and an HTML template
The Debate
For the data that needs to be sent, we are less sure and it is what we need help with. So far we think this should be the structure of the JSON from Client Service to Notification Service (step #1):
{
  template_id: SOME_TEMPLATE_ID,
  user_id: SOME_USER_ID,
  objectid: SOME_OBJECT_ID
}
or
{
  template_id: SOME_TEMPLATE_ID,
  user_id: SOME_USER_ID,
  required_objects: { task_id: SOME_TASK_ID, document_id: SOME_DOCUMENT_ID }
}
Where task_id and document_id are just examples and it would change based on the template. It could just as easily be {product_id: SOME_PRODUCT_ID} for a different template.
Why The Debate
Our thoughts so far are that:
We only need template_id because the source of the data would be implied in the objects (like an ENV var). For example, the Task object would be at http://taskservice/:id. Otherwise, we can have problems with failing APIs or switching URLs in the future.
We should use user_id instead of email and name because it prevents the issue of email/name pairs not matching up across multiple messages
For the objects, we're still sceptical, because it means that the client service would need knowledge of the inner workings of Messagebot, but a single objectid might not be very extensible. We can easily imagine many of our messages needing more than one object.
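To make the "implied source" idea concrete, here is a minimal sketch of how Messagebot might resolve required_objects, assuming the per-type base URLs live in configuration (all names and URLs are illustrative):

import requests  # assumption: plain HTTP lookups against the owning services

# Implied data source per object type, e.g. loaded from ENV/config.
SERVICE_URLS = {
    "task_id": "http://taskservice",
    "document_id": "http://documentservice",
}

def resolve_required_objects(required_objects):
    # Fetch each referenced object from the service that owns it.
    resolved = {}
    for key, object_id in required_objects.items():
        base = SERVICE_URLS[key]
        resolved[key] = requests.get(base + "/" + str(object_id)).json()
    return resolved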
In Conclusion
Thank you for reading. The design of this service is important because it will be central to our entire organisation.
Which debated JSON structure is most appropriate in our situation? Also, knowing our requirements, what would be the proper setup for this type of service? (aka. Are we correct in our other assumptions?)
So your messagebot will
store notifications
get data from other services
compile emails from the data and
send the compiled emails
In my opinion, your messagebot has been given too many tasks. If I were designing the system, I would keep the messagebot simpler. The services should encapsulate the knowledge to compile the email, e.g. manage their own templates and so on. The services push the compiled emails to a queue, and the messagebot picks them up and sends them. The only logic in the messagebot is to pick up the emails from the queue and send them. That way, no matter how many more services you add in the future, the messagebot stays nice and simple.
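Under that design, a minimal sketch of the messagebot, assuming compiled emails arrive on a Kafka topic and go out through a local SMTP relay (topic, field names, and relay address are illustrative):

import json
import smtplib
from email.message import EmailMessage

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "compiled-emails",                    # illustrative topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

with smtplib.SMTP("localhost", 25) as smtp:   # assumed SMTP relay
    for record in consumer:
        mail = record.value
        msg = EmailMessage()
        msg["From"] = mail["from"]
        msg["To"] = mail["to"]
        msg["Subject"] = mail["subject"]
        msg.set_content(mail["body"])
        # The messagebot's only job: send what the services already compiled.
        smtp.send_message(msg)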