Microservices: security and architectural issues for internal services - Spring

I'm building Spring Boot microservices, and I have some questions.
I have an account microservice, a payment microservice, a product microservice... In these microservices, some requests sometimes need to use a mailing API, an SMS-sending API, or a push notification API.
What I have done so far is create a microservice for mailing, a microservice for sending SMS, and a microservice for push notifications.
What I can't seem to solve is how to make these microservices usable only internally, for example, to forbid users from directly calling the mailing microservice.
Before creating this question on Stack Overflow, I asked myself: why not put the code for sending SMS in a library, do the same for sending emails and push notifications, and add them to the microservices? Whenever a microservice needs to use one of these APIs, I would add the corresponding library. For example, I create a push notification library and add it to each microservice that needs to send a push notification.
What is the best approach to integrate these mailing, SMS and notification services into my microservice project, while respecting security by forbidding users from using them directly?
I don't know what to do, can someone advise me?

Well, it is not exactly clear to me what you mean by "forbidding users to use them directly", but usually, as pointed out in #kavhakaran's answer, you should put security measures in place to protect your services from abuse.
As far as I can see, that answer only focuses on the network-related part. There should also be a second level, which is about user authorization. That means you can/should have proper roles and authorization definitions for the services you would like to secure, and based on the provided roles you can authorize the client to use the services.
That is usually how it works for cloud services as well. You are provided an API key in order to consume some cloud service, and they check whether that API key is authorized for the requested service, etc.
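As an illustration, here is a minimal Spring Security sketch of that second level. The role name, paths and resource-server setup are assumptions for the example, not something from the question:

```java
// Hypothetical sketch: restrict the mailing endpoints to callers carrying an
// "INTERNAL_SERVICE" role. Role name and paths are illustrative only, and a
// configured token issuer is assumed for the resource-server part.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class MailingSecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .csrf(csrf -> csrf.disable()) // typically disabled for pure service-to-service APIs
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/api/mail/**").hasRole("INTERNAL_SERVICE") // internal callers only
                .anyRequest().authenticated())
            .oauth2ResourceServer(oauth2 -> oauth2.jwt(Customizer.withDefaults())); // validate service tokens
        return http.build();
    }
}
```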

You shouldn't worry in the application code about other microservices calling the mailing microservice or the SMS microservice. If you think about it, this concern applies to any internal microservice, and it can be handled at the infrastructure level.
Let me give you an example: you have a database running somewhere; does your microservice do anything to make sure it is the only one talking to that database? The answer is no. At the infrastructure level, whatever cloud infrastructure you are using allows you to define security rules / network policies that specify who can talk to whom, i.e. rules for incoming traffic and rules for outgoing traffic.
If they were public-facing microservices, that would be a different question. These are internal services.
Some examples based on infrastructure:
AWS Security Groups
AWS subnets
Kubernetes Network Policies
I also want to add a point which may not be directly related to your question. The services in question seem to be very good candidates for asynchronous services. Then no service talks to them directly: the sending services put the notifications on a queue or a Kafka topic, and these notification services consume from it. Making sure that only the relevant services can publish to the queue or topic is again handled at the network level.
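To make the consuming side concrete, here is a minimal Spring Kafka sketch; the topic name, group id and payload shape are assumptions for the example:

```java
// Hypothetical sketch: the notification service consumes notification requests
// from a Kafka topic instead of exposing a synchronous API to other services.
// Topic name, group id and the NotificationRequest shape are illustrative, and a
// JSON deserializer for the payload is assumed to be configured.
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class NotificationListener {

    record NotificationRequest(String recipient, String channel, String message) {}

    @KafkaListener(topics = "notification-requests", groupId = "notification-service")
    public void onNotificationRequest(NotificationRequest request) {
        // Dispatch to the right channel (email, SMS, push) based on the payload;
        // the actual sending logic would sit behind an internal provider interface.
        System.out.printf("Sending %s notification to %s%n", request.channel(), request.recipient());
    }
}
```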

I would not recommend using libraries for sending SMS, emails and push notifications across your Microservices. This would lead to dependencies at the source-code level, which I would try to avoid in a Microservices architecture if possible.
Concerning the architectural issues of your question:
From my experience it is a good idea to have separate services for handling notifications such as SMS, email, etc., because with that you create an abstraction between your Microservices and the concrete notification infrastructure, such as third-party SMS, email or push notification services.
Usually the core requirements for, say, sending an email will stay more or less the same over time. But you might come into a situation where you want to exchange one third-party service for another, for instance due to cost concerns, performance concerns or other reasons.
If you choose to communicate directly with the notification infrastructure from each Microservice that needs to send emails, you would have to adapt all of these Microservices when you switch from one email service to another, no matter whether you use a shared library or each Microservice implements the communication with that service on its own.
But if you have a separate Email Microservice that is used by all your Microservices that need to send email notifications, you only have to change the Email Microservice itself to communicate with, for instance, SendGrid instead of MailJet (just to name two third-party email services). Your other Microservices aren't even concerned with that change.
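To illustrate that boundary, here is a hedged sketch of what the inside of such an Email Microservice could look like; the endpoint, class names and request shape are made up for the example:

```java
// Hypothetical sketch: the Email Microservice exposes one internal endpoint and
// hides the concrete provider (SendGrid, MailJet, ...) behind an interface, so
// swapping providers only touches this service. All names are illustrative.
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

interface EmailProvider {
    void send(String to, String subject, String body);
}

// Switching from SendGrid to MailJet would only mean replacing this implementation.
class SendGridEmailProvider implements EmailProvider {
    @Override
    public void send(String to, String subject, String body) {
        // Call the third-party API here (omitted).
    }
}

@RestController
public class EmailController {

    record EmailRequest(String to, String subject, String body) {}

    private final EmailProvider provider = new SendGridEmailProvider();

    @PostMapping("/api/mail/send")
    public void send(@RequestBody EmailRequest request) {
        provider.send(request.to(), request.subject(), request.body());
    }
}
```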
Concerning the security aspects:
As was already mentioned, if you choose to communicate with your notification services asynchronously, the security aspects are addressed at the infrastructure level: the Microservices are allowed to access the messaging infrastructure based on the authentication and access control mechanisms provided by the corresponding messaging services (be it RabbitMQ, Azure Service Bus, Kafka, AWS SQS, etc.).
Or, if you choose to call your notification services via REST APIs from your Microservices, you can look into token-based authentication via OpenID Connect (e.g. the client credentials flow for machine-to-machine security).
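For instance, a minimal sketch of the calling side using Spring Security's OAuth2 client support with the client credentials grant; the registration id, base URL and the surrounding spring.security.oauth2.client.* configuration are assumptions:

```java
// Hypothetical sketch: a calling microservice obtains a client-credentials token
// and attaches it to requests against the notification service's REST API.
// The registration id "notification-client" and the base URL are illustrative and
// assume a matching spring.security.oauth2.client.* configuration exists.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.OAuth2AuthorizedClientManager;
import org.springframework.security.oauth2.client.web.reactive.function.client.ServletOAuth2AuthorizedClientExchangeFilterFunction;
import org.springframework.web.reactive.function.client.WebClient;

@Configuration
public class NotificationClientConfig {

    @Bean
    public WebClient notificationWebClient(OAuth2AuthorizedClientManager authorizedClientManager) {
        ServletOAuth2AuthorizedClientExchangeFilterFunction oauth2 =
                new ServletOAuth2AuthorizedClientExchangeFilterFunction(authorizedClientManager);
        oauth2.setDefaultClientRegistrationId("notification-client"); // client credentials registration
        return WebClient.builder()
                .baseUrl("http://notification-service")               // internal service URL (assumption)
                .apply(oauth2.oauth2Configuration())
                .build();
    }
}
```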
One other thing to consider:
I would also think about other shared functionality that could be common to the SMS, email and push notification services, such as user preferences, e.g. which kinds of notifications a user wants to receive. This could also be functionality you do not want all of your Microservices to have to know about. So you could think of a notification service that is concerned with this kind of responsibility and is responsible for delivering the notifications over the different channels (email, SMS, push) based on the user preferences. Or you could have a separate Microservice for user preferences which is then accessed by your SMS, email and push notification Microservices. But there is no obvious answer as to which option is better, because this strongly depends on the use cases you have to deal with.

Related

Microservice Architecture - How to get user information from API gateway to microservice

I plan to set up a set of microservices behind an API gateway. I am new to microservices architecture, but I plan to add more services over time and keep this application highly extensible. The API gateway should manage the users and their permissions and should delegate the incoming requests to the underlying microservices. But my problem is: how can I create a relationship between the user at the gateway and an entity in a microservice?
Like in the picture above, I need to figure out the best practice for dealing with user relations in the underlying services. I want to implement all the services with Laravel; the gateway should use laravel\passport.
My thought was that the API gateway is responsible for authenticating users and forwarding requests to the services behind the gateway. If the user is authenticated, they have access to the services through the gateway. But how can I provide the service with information about the user? For example, if the user edits an item in service A, how can I store which user edited the item? What would be the approach to establish this relationship?
There are many aspects to consider when selecting an approach, so answering your question will mostly mean giving you pointers that you can research more deeply.
Here are some aspects you should review that will greatly depend on your service:
Authentication/authorization method for the platform as a whole
How each individual service talks to the others (sync REST calls, messaging, GraphQL, gRPC, ...)
How individual services are secured (each service is public and does auth itself, every service is behind a secured network and only the gateway is public, a service mesh takes care of auth, ...)
The most common auth method in REST-based microservices is OAuth, with JWT tokens. I recommend that you look deeper into that.
(Now just digressing a bit to demonstrate how much this varies depending on the use case and architecture)
Taking OAuth and looking at your question, you still have different flows in OAuth that you will use depending on the use case. For example, generating tokens for users will be different from generating them for services.
Then you still need to decide which token to use in each service: will the services behind the gateway accept user tokens, or only service-to-service tokens? This has implications for the architecture that you need to evaluate.
When using user tokens, you can encode the user ID in the token and extract it from there. But if you use user tokens everywhere, then you assume services only talk to each other as part of a user flow, and you are enforcing that through the use of a user token.
If you go with service-to-service tokens (a more common approach, I'd say), you need to pass the user ID some other way (again, this depends on your chosen architecture). Thinking of REST, you can use headers, request parameters, the request path or the request body. You need to evaluate the trade-offs of each depending on the business domain of each service, which influences the API design.
If you don't use tokens at all because all your services are inside a secured network, then you still have to use some aspect of your protocol to pass the user ID (headers, parameters, etc.).
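To make the header option concrete, here is a small sketch (written with Spring since the pattern itself is framework-agnostic; the X-User-Id header name and the endpoint are made-up conventions for the example):

```java
// Hypothetical sketch: with service-to-service tokens, the upstream service
// forwards the user ID in a custom header and the downstream service reads it,
// e.g. to record who edited an item. The "X-User-Id" header name and the
// endpoint are illustrative conventions only.
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.reactive.function.client.WebClient;

public class UserIdPropagationExample {

    // Caller side: the gateway or upstream service attaches the user ID it resolved.
    void updateItemOnBehalfOfUser(WebClient serviceAClient, String itemId, String userId) {
        serviceAClient.put()
                .uri("/items/{id}", itemId)
                .header("X-User-Id", userId) // propagate who triggered the change
                .retrieve()
                .toBodilessEntity()
                .block();
    }

    // Downstream side (service A): read the header and store it with the change.
    @RestController
    static class ItemController {
        @PutMapping("/items/{id}")
        public void updateItem(@PathVariable String id, @RequestHeader("X-User-Id") String userId) {
            // Persist the edit together with userId for auditing (persistence omitted).
        }
    }
}
```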

In a domain-driven microservice, should you communicate outside the domain?

If my company's sole purpose is processing a specific payload, but there is a lot of orchestration around it, should the orchestration be in a separate domain? Let's say payment is what the company does, but there is a workflow service for that payment payload. If that is in a separate domain, how should the workflow service domain talk to the payment service domain?
It's better to use event-driven design, powered by messaging services like RabbitMQ (or Kafka, MSMQ, ...). It's not recommended for microservices to talk to each other directly via APIs. On the other hand, to aggregate information from multiple services you can use two techniques: first, a BFF (backend for frontend) layer; second, a materialized view that gathers information from many services.
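For example, a minimal Spring AMQP sketch of the workflow service publishing a payment event instead of calling the payment service directly; the exchange name, routing key, queue name and event shape are assumptions:

```java
// Hypothetical sketch: the workflow service publishes an event to RabbitMQ and the
// payment service consumes it, so the two domains never call each other directly.
// Exchange, routing key, queue and the event record are illustrative only, and a
// JSON message converter is assumed to be configured.
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

record PaymentRequested(String paymentId, long amountCents, String currency) {}

@Service
class WorkflowEventPublisher {

    private final RabbitTemplate rabbitTemplate;

    WorkflowEventPublisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    void publish(PaymentRequested event) {
        rabbitTemplate.convertAndSend("payments.exchange", "payment.requested", event);
    }
}

@Component
class PaymentRequestedListener {

    @RabbitListener(queues = "payment-service.payment-requested")
    void onPaymentRequested(PaymentRequested event) {
        // Start processing the payment payload here.
    }
}
```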

East/West communication in a AWS serverless microservice architecture

I am well aware of the fact that east/west, or synchronous service-to-service, communication is not the gold standard and should only be used sparingly in a microservice architecture. However, in every real-world implementation of a microservice architecture I have seen, there are some use cases which require it. For example, the user service often needs to be called by other services to get up-to-the-millisecond details about the user (I'm aware that event-based sharing of that data is also a possibility, but in some cases that isn't the right approach).
My question is: what is the best way to do function-to-function, service-to-service communication in a Lambda + API Gateway style architecture?
My guess is that making an HTTP request back out to the domain name is not ideal, since it will require going back out over the internet to resolve DNS.
Is it using the SDK to invoke the downstream function directly? Will this cause issues if the downstream function depends on an API Gateway proxy event structure?
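For reference, this is roughly what such a direct invocation looks like with the AWS SDK for Java v2; the function name and the hand-built proxy-style payload are assumptions used to illustrate the coupling concern:

```java
// Hypothetical sketch: invoking a downstream Lambda directly via the AWS SDK (v2).
// If that function expects an API Gateway proxy event, the caller has to fabricate
// a payload with the same shape, which is the coupling the question worries about.
// The function name and payload below are illustrative only.
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.InvokeRequest;
import software.amazon.awssdk.services.lambda.model.InvokeResponse;

public class DirectLambdaInvoker {

    public static void main(String[] args) {
        try (LambdaClient lambda = LambdaClient.create()) {
            // Minimal proxy-event-shaped payload (most fields omitted).
            String proxyStylePayload = """
                    {"httpMethod": "GET", "path": "/users/123", "headers": {}}
                    """;

            InvokeRequest request = InvokeRequest.builder()
                    .functionName("user-service-get-user") // assumed function name
                    .payload(SdkBytes.fromUtf8String(proxyStylePayload))
                    .build();

            InvokeResponse response = lambda.invoke(request);
            System.out.println(response.payload().asUtf8String());
        }
    }
}
```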

What are the "real-world" solutions for not duplicating data in microservices?

Suppose that I have a microservice for messaging. The microservice knows how to send emails. The service has email templates that go through some sort of "template engine" like pugjs, and it can replace data in the body of the message.
I have a user service (used for authentication/authorization, for example) and a bank account service (each user has one). Between the User microservice and the Bank Account microservice it's clear that we don't have to duplicate any data other than the user's uuid.
But now I want to send a message every day to each user with their account statement. The Messaging microservice needs data from the User microservice and the Bank Account microservice.
Okay... This is a small case from the real world. Now, I know that to have the benefits of decoupled microservices I must follow some rules:
I can't share databases between microservices
I can't make synchronous requests between microservices
Okay... I can use a broker, and each time a user is created/updated the Messaging microservice can store that data. But really, this is a stupid thing:
I don't want to have inconsistency in this data, and keeping things in sync is hard
The development time and complexity of the Messaging microservice must now account for: listening to events and extracting the relevant data from them, keeping the data about other domains/services consistent, and managing the saved data in its own database
And think about the Messaging microservice: must I really store all the data needed to render the templates?
I have read a lot about microservices and about people creating rules for their simple examples. But I have never really seen a good explanation and real-world examples like the one above.
So how can I have the microservices above without data duplication?
In your domain example I would not let the message service know anything about bank or user details. Instead the message service should just receive instructions to send messages to recipients along with the given content. I would use a dedicated scheduled job (maybe implemented as an account notification service) that performs the work of acquiring the user and account data from the corresponding services, compiles the information for the message service and instructs it to actually send the messages. This introduces another "higher level, business purpose entity/service" but allows you to keep a clear separation of concerns.
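A minimal sketch of that idea using Spring's scheduling support; the client interfaces, service names and schedule are assumptions for the example:

```java
// Hypothetical sketch: an "account notification" job pulls data from the user and
// bank account services and then instructs the messaging service to send the
// statements. Client interfaces, names and the schedule are illustrative only,
// and @EnableScheduling is assumed to be configured elsewhere.
import java.util.List;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class DailyStatementJob {

    interface UserClient { List<String> activeUserIds(); String emailOf(String userId); }
    interface AccountClient { String statementFor(String userId); }
    interface MessageClient { void sendEmail(String recipient, String body); }

    private final UserClient users;
    private final AccountClient accounts;
    private final MessageClient messages;

    DailyStatementJob(UserClient users, AccountClient accounts, MessageClient messages) {
        this.users = users;
        this.accounts = accounts;
        this.messages = messages;
    }

    @Scheduled(cron = "0 0 6 * * *") // every day at 06:00 (assumed schedule)
    public void sendDailyStatements() {
        for (String userId : users.activeUserIds()) {
            String statement = accounts.statementFor(userId);
            // The messaging service only receives a recipient and ready-to-send content.
            messages.sendEmail(users.emailOf(userId), statement);
        }
    }
}
```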
In general it will happen frequently that your "basic" domain services are used by another service that represents a specific business purpose and requires their data. Dependency in itself is not a bad thing as long as concerns are separated clearly, interfaces are versioned, changes are communicated, etc.
Don't forget the whole idea of microservices is for allowing teams to have dedicated responsibilities with clear interfacing. It is about organization as much as it is about architecture.

How do you develop a microservice in isolation when it depends on other microservices?

We are evaluating a move to microservices. Each microservice would be its own project, developed in isolation. During planning, we have determined that some of the microservices will communicate with others via REST calls, pub/sub or messaging (i.e. an order service needs product information from the product service).
If a microservice depends on retrieving data from another microservice, how can it be run in isolation during development? For example, what happens when your order service requests product details, but there is nothing to answer that request?
What you probably need is a stub REST service. Create a web app that accepts the expected output through a path that is not part of the public API; when you invoke the public API, it returns what it was just given.
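A minimal sketch of such a stub, with made-up paths and an in-memory store standing in for anything more elaborate:

```java
// Hypothetical sketch: a stub "product service" used only during development.
// A non-public path ("/__stub/product/{id}") primes the response, and the public
// path ("/products/{id}") simply replays whatever was primed. Paths are illustrative.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ProductStubController {

    private final Map<String, String> cannedResponses = new ConcurrentHashMap<>();

    // Not part of the public API: test setup pushes the expected payload here.
    @PostMapping("/__stub/product/{id}")
    public void prime(@PathVariable String id, @RequestBody String payload) {
        cannedResponses.put(id, payload);
    }

    // Public API shape the order service expects: returns the primed payload.
    @GetMapping(value = "/products/{id}", produces = "application/json")
    public String getProduct(@PathVariable String id) {
        return cannedResponses.getOrDefault(id, "{}");
    }
}
```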
If a microservice depends on retrieving data from another microservice, how can it be run in isolation during development?
It should always be temporally isolated from other services, during development and in production as well.
For example, what happens when your order service requests product details, but there is nothing to answer that request?
This is a place where a design flaw reveals itself: the order service should not request product details from another service. Product details should be stored in the message (event) that the order service is subscribed to. The order service should receive this message asynchronously using the publish-subscribe pattern and save it in its own database. As a result, data about the product will be stored in two places.
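A minimal sketch of that approach (topic name and event shape are assumptions; the in-memory map stands in for the order service's own database table):

```java
// Hypothetical sketch: the order service keeps its own copy of product data by
// consuming product events, so it does not call the product service at order time.
// Topic name and event shape are illustrative; the map stands in for the order
// service's own database table, and a JSON deserializer is assumed to be configured.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
class ProductEventHandler {

    record ProductChanged(String productId, String name, long priceCents) {}

    private final Map<String, ProductChanged> localProductCopy = new ConcurrentHashMap<>();

    @KafkaListener(topics = "product-events", groupId = "order-service")
    void onProductChanged(ProductChanged event) {
        // Upsert the local copy that order processing reads from later.
        localProductCopy.put(event.productId(), event);
    }
}
```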
Please consider reading this series of articles about microservices for more details. But in a nutshell: your services should be temporally decoupled, so that when your product service is down, the order service can continue its operations without interruption. This is the key thing to understand about good distributed systems design in general.
