This is my first attempt at implementing a microservice architecture using events with Kafka.
I am having trouble figuring out how to check user credentials via an event.
My application is simple:
a service that manages users (email and password), able to create, edit and delete them.
a service that sends emails on behalf of those users.
My idea is to create an event with a JSON payload like:
{
  "status": "sendEmail",
  "message": {
    "sender": "abc@zxy.com",
    "password": "123456",
    "recipient": "jkl@asd.com",
    "content": "this is my email's body"
  }
}
Once I create this event for the second service, how can I validate, using events, that the user exists in the first service? I could easily do this with REST communication, but I would like to find out how to communicate responses between services with event messages.
Thanks.
You would need to either cache all user accounts in the second service (by consuming all the records on the user topic), or perform an external lookup upon consuming each email record. Messaging and RESTful services aren't mutually exclusive.
FWIW, at the very least encrypt the passwords before sending them over unsecured/plaintext topics.
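A minimal sketch of the first option (building a local cache of user records), assuming kafka-python and hypothetical topic names ("users" published by the first service, "emails" carrying the event above):

import json
from kafka import KafkaConsumer

# Local cache of user records, built by consuming the (hypothetical) users topic.
users = {}

consumer = KafkaConsumer(
    "users", "emails",                      # illustrative topic names
    bootstrap_servers="localhost:9092",
)

for record in consumer:
    payload = json.loads(record.value.decode("utf-8"))
    if record.topic == "users":
        # Upsert the user into the local cache.
        users[payload["email"]] = payload
    elif payload.get("status") == "sendEmail":
        msg = payload["message"]
        user = users.get(msg["sender"])
        if user is None or user["password"] != msg["password"]:
            continue                        # unknown sender or bad credentials
        print("would send to:", msg["recipient"])  # stand-in for real SMTP delivery

With a log-compacted users topic, replaying it on startup rebuilds the cache, so no REST call is needed on the hot path.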
Martin Fowler's description of the Event Collaboration pattern (https://martinfowler.com/eaaDev/EventCollaboration.html) appears to imply that any external data a service needs in order to function (data from other services) should be replicated into and maintained within the service.
This seems to imply that we should not resort to issuing explicit queries.
For example:
Say you have a communications service that is responsible for sending emails to clients and depends on order information (which lives in the order service) to send an order confirmation email.
With Event Collaboration, the communications service will have some internal representation of all orders that it will have built up by consuming relevant order creation/modification events.
In this example a query to retrieve order details will not be necessary to generate the confirmation email.
Are there any instances in which we would use explicit query messages rather than data replication when adopting the Event Collaboration pattern?
I think even in this case, what I would do is create a consumer of the OrderPlaced event inside the Order microservice only. That event processor reads all the details from the order, creates a MailToBeSent event, and writes it to a topic or queue, which the communication service listens on in order to send the email.
The communication service should not understand how to create an email based on an order (the core purpose of the communication service is to send emails).
Design-wise, the communication service should also not need to change every time you add a new service that wants mail-sending functionality.
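As a rough sketch (assuming kafka-python; the topic names, event shape and field names are all illustrative), the event processor inside the order service might look like:

import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "order-placed",                          # illustrative topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for record in consumer:
    order = record.value
    # The order service owns the knowledge of how to turn an order into an email.
    mail_to_be_sent = {
        "to": order["customer_email"],       # assumed field names
        "subject": "Order %s confirmed" % order["order_id"],
        "body": "Thank you, your order has been placed.",
    }
    producer.send("mail-to-be-sent", mail_to_be_sent)

The communication service then only needs to consume mail-to-be-sent and deliver whatever arrives, with no knowledge of orders.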
In a microservice architecture for a hotel, I want to create a communication service that will handle all the emails, SMS messages, and so on. This service should be triggered by asynchronous events.
Should these events be called SEND_RESERVATION_CONFIRMATION_EMAIL, making the reservation service aware of the email communication? Or should there be a more generic event, RESERVATION_CONFIRMED, resulting in a confirmation email?
Should these events be called: SEND_RESERVATION_CONFIRMATION_EMAIL
No. Events should be named in the past tense, as statements of something that has happened.
making the reservation service aware of the email communication
I would not make that coupling. The reservation service is responsible for reservations, not for methods of notifying the customers.
Or should there be a more generic event RESERVATION_CONFIRMED, resulting in a confirmation email?
Yes, RESERVATION_CONFIRMED seems a good choice; it represents what actually happened and it contains no indication of what should be done next. The workflow/process of notifying the customer should be managed by another component, i.e. a Saga/process manager. This saga would receive the RESERVATION_CONFIRMED event and then send a SEND_RESERVATION_CONFIRMATION_EMAIL command to the responsible microservice.
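A minimal sketch of such a process manager, assuming kafka-python and illustrative topic and field names:

import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "reservation-events",                    # illustrative topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for record in consumer:
    event = record.value
    if event["type"] == "RESERVATION_CONFIRMED":
        # Translate the past-tense event into an imperative command
        # addressed to the communication service.
        producer.send("communication-commands", {
            "type": "SEND_RESERVATION_CONFIRMATION_EMAIL",
            "reservation_id": event["reservation_id"],   # assumed field
        })

This keeps the reservation service free of notification concerns: it only records facts, and the saga decides what those facts should trigger.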
The Problem
We are currently architecting our new Notification Microservice but are having trouble with how to handle aggregated emails. Instead of sending one email for every action performed (there could be 20+ in a few minutes), we need to send a single email after an hour summarising all the actions that were completed.
What We Have So Far
So far we propose the following messaging pattern, where Client Service is any service in our cluster and Messagebot is our Notification Microservice:
1. Client Service sends a notification to Messagebot that it will need to send something in the future
2. Messagebot stores the details in its database
3. Messagebot periodically checks its database for what needs to be sent
4. Messagebot gets the required data from another service (could be Client Service) via API
5. Messagebot sends an email using the data from step 4 and an HTML template
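A rough sketch of steps 3-5, assuming a local SMTP relay; fetch_due_notifications and email_for are hypothetical helpers (a DB query for the hour's aggregated rows, and a user-service lookup), and the taskservice URL is illustrative:

import time
import smtplib
from email.message import EmailMessage

import requests

def render_template(template_id, data):
    # Stand-in for a real HTML templating step (e.g. Jinja2).
    return "<p>Summary for template %s: %s</p>" % (template_id, data)

def deliver_pending():
    for note in fetch_due_notifications():   # hypothetical helper: step 3
        # Step 4: resolve the object via the owning service's API.
        data = requests.get("http://taskservice/%s" % note["objectid"]).json()
        # Step 5: render the HTML template and send.
        msg = EmailMessage()
        msg["To"] = email_for(note["user_id"])   # hypothetical user-service lookup
        msg["Subject"] = "Your hourly summary"
        msg.set_content(render_template(note["template_id"], data), subtype="html")
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

while True:
    deliver_pending()
    time.sleep(3600)                         # hourly aggregation window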
The Debate
For the data that needs to be sent, we are less sure, and this is where we need help. So far we think this should be the structure of the JSON sent from Client Service to the Notification Service (step 1):
{
  "template_id": "SOME_TEMPLATE_ID",
  "user_id": "SOME_USER_ID",
  "objectid": "SOME_OBJECT_ID"
}
or
{
  "template_id": "SOME_TEMPLATE_ID",
  "user_id": "SOME_USER_ID",
  "required_objects": { "task_id": "SOME_TASK_ID", "document_id": "SOME_DOCUMENT_ID" }
}
Here task_id and document_id are just examples; they would change based on the template. It could just as easily be {"product_id": "SOME_PRODUCT_ID"} for a different template.
Why The Debate
Our thoughts so far are that:
We only need template_id (and object IDs, not full URLs) because the source of the data would be implied by the objects, with service locations configured via something like an ENV var. For example, the Task object would be at http://taskservice/:id. Otherwise, we could run into problems with failing APIs or switching URLs in the future.
We should use user_id instead of email and name because it prevents the issue of email/name pairs not matching up across multiple messages.
For the objects, we're still sceptical, because the required_objects approach means the client service would need knowledge of Messagebot's inner workings, yet a single objectid might not be very extensible. We can easily imagine many of our messages needing more than one object.
In Conclusion
Thank you for reading. The design of this service is important because it will be central to our entire organisation.
Which of the two debated JSON structures is more appropriate in our situation? Also, knowing our requirements, what would be the proper setup for this type of service? (That is, are we correct in our other assumptions?)
So your messagebot will:
store notifications
get data from other services
compile emails from the data and
send the compiled emails
In my opinion, your messagebot has been given too many tasks. If I were designing the system, I would keep the messagebot simpler. Each service should encapsulate the knowledge needed to compile its own emails, e.g. manage its own templates and so on. The services push the compiled emails to a queue, and the messagebot picks them up and sends them. The only logic in the messagebot is picking emails up from the queue and sending them. This way, no matter how many more services you add in the future, the messagebot stays nice and simple.
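A minimal sketch of that simpler messagebot, assuming kafka-python, a hypothetical compiled-emails topic, an assumed message shape, and a local SMTP relay:

import json
import smtplib
from email.message import EmailMessage

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "compiled-emails",                       # illustrative topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for record in consumer:
    mail = record.value                      # already compiled by the owning service
    msg = EmailMessage()
    msg["To"] = mail["to"]                   # assumed message shape
    msg["Subject"] = mail["subject"]
    msg.set_content(mail["html"], subtype="html")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

Because each producing service owns its templates, adding a new service that wants to send mail requires no change to the messagebot at all.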
I tried to use SNS as a platform to post HTTP messages to clients, but it has two major problems:
I can't set the subscribers' IDs/endpoints dynamically. I must create a topic for every combination, but the combinations change every time according to specific message parameters, which change very often.
Trying to work around the first issue, I tried to create a service that generates the topics at runtime, but even when I create a new topic, I need confirmation from the client after adding them as a subscriber. Considering this happens pretty often, I can't expect clients to confirm being added endlessly, so this creates an issue as well.
Can anyone suggest an alternative service that uses HTTP to publish the messages?
Don't use the SNS topic/subscription model; instead, create platform endpoints in the SNS application as users register with or log in to your app.
You will have to store a mapping from each user's account to their endpoint ARN on the back end.
FYI, any one user can have many endpoints, and some may be invalid.
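A rough sketch with boto3 (the platform application ARN is a placeholder, and save_endpoint_arn/endpoint_arns_for are hypothetical persistence helpers for the account-to-ARN mapping):

import boto3

sns = boto3.client("sns")

def register_device(user_id, device_token):
    # Create (or reuse) a platform endpoint for this device token.
    resp = sns.create_platform_endpoint(
        PlatformApplicationArn="arn:aws:sns:us-east-1:123456789012:app/GCM/MyApp",
        Token=device_token,
    )
    save_endpoint_arn(user_id, resp["EndpointArn"])   # hypothetical helper

def notify(user_id, message):
    # A user may have many endpoints, and some may have become invalid.
    for arn in endpoint_arns_for(user_id):            # hypothetical helper
        try:
            sns.publish(TargetArn=arn, Message=message)
        except sns.exceptions.EndpointDisabledException:
            pass                                      # prune or re-register later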
I am implementing an event-driven microservice architecture. Imagine the following scenario:
Chat service: Ability to see conversations and send messages. Conversations can have multiple participants.
Registration-login service: Deals with the registration of new users, and login.
User service: Getting/updating user profiles.
The registration-login service emits the following events (registration-new carries the newly created user object):
registration-new
login-success
logout-success
The chat service then listens on registration-new and stores some fields of the user in its own Redis cache. It also listens on login-success to store the token, and on logout-success to delete it.
The user service emits the following event: user-updated. When this fires, a listener in the chat service updates the data for that user id in Redis. Like the chat service, the user service also listens on login-success and logout-success and does the same thing the chat service does.
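For concreteness, a minimal sketch of the chat service's listener, assuming kafka-python and redis-py (topic names are from above; the user fields are illustrative):

import json

import redis
from kafka import KafkaConsumer

cache = redis.Redis()
consumer = KafkaConsumer(
    "registration-new", "user-updated", "login-success", "logout-success",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for record in consumer:
    event = record.value
    if record.topic in ("registration-new", "user-updated"):
        # Cache only the user fields the chat service actually needs.
        cache.hset("user:%s" % event["id"], mapping={
            "name": event["name"],                    # assumed fields
            "email": event["email"],
        })
    elif record.topic == "login-success":
        cache.set("token:%s" % event["id"], event["token"])
    else:                                             # logout-success
        cache.delete("token:%s" % event["id"])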
My question is the following: is this a good way to do this? It feels a bit counterintuitive to be sharing data everywhere. I need some advice on this. Thank you!
It seems there's no other way. Microservices architecture places a lot of stress on avoiding data sharing so as not to create dependencies. That means each microservice will have some data duplicated. It also means there must be a way of getting data from other contexts. The preferred methods strive for eventual consistency, such as sending messages to event-sourcing or AMQP systems and subscribing to them. You can also use synchronous methods (RPC calls, distributed transactions). That creates additional technological dependencies, but if you cannot accept eventual consistency it may be the only way.