Description of our project
We are following a microservices architecture in our project, with a database per service. We are trying to introduce a blacklist function to our services: if a user is blacklisted from the system, they can't use any of our microservices. We have multiple entry/exit points to our microservices, such as a gateway service (used by the frontend team), websocket message receivers, and multiple Spring schedulers that process user data.
Current solution
We persist the blacklisted users in a db and expose them through an endpoint; we can call this the access service. The support team adds users to the blacklist db by calling the access service's blacklist-create endpoint. So whenever we receive a request from the frontend, the gateway calls the access service to check whether the current user is present in the blacklist db; if the user is blacklisted, we block further access. The same goes for every message received by the schedulers or websocket receivers, i.e. for each call we check whether the user is blacklisted.
Problem statement
We have 2 websocket notification receivers and multiple schedulers that run every 5 minutes, which in turn all want to hit the same blacklist access service. Because of this we are making too many calls to the access service, turning it into a bottleneck.
How do we avoid this case?
There are several approaches to the blocklisting problem.
First, you could have one service holding the blocklist, and for every incoming request to every service you would make an extra call to the blocklist service. Clearly, this is a huge availability and scalability risk.
The second option is push based: the blocklist service notifies all other services about blocklisted users. In that case, every service can make a local decision about whether to process a request.
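A minimal sketch of this push-based option (class and method names are illustrative, and the transport for the notifications, e.g. a message broker topic, is assumed): each service keeps a local in-memory copy of the blocklist and consults it without any network call on the request path.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Local, in-memory view of the blocklist, kept fresh by push
// notifications from the access service (e.g. via a broker topic).
public class BlocklistCache {
    private final Set<String> blockedUserIds = ConcurrentHashMap.newKeySet();

    // Called when a "user blocked" event arrives from the access service.
    public void onUserBlocked(String userId) {
        blockedUserIds.add(userId);
    }

    // Called when a "user unblocked" event arrives.
    public void onUserUnblocked(String userId) {
        blockedUserIds.remove(userId);
    }

    // Local decision: no call to the access service on the hot path.
    public boolean isBlocked(String userId) {
        return blockedUserIds.contains(userId);
    }
}
```

The trade-off is eventual consistency: a just-blocked user may slip through until the notification reaches every service.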
The third option is to bake expiration into user sessions. Every session has three elements: an expiration, an access token, and a refresh token. Until expiration, every service will accept requests carrying a valid access token. Once an access token expires, the client has to get a new one by contacting a token service. That service reads the refresh token and checks whether the user is still active; if so, a new access token is issued.
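The expiry check every service performs locally in this third option can be sketched like this (a simplified illustration; `AccessToken` and its fields are assumptions, and a real system would use signed tokens such as JWTs):

```java
import java.time.Instant;

// Sketch of the local expiry check: the token carries its own
// expiration, so no call to a central blocklist is needed per request.
public class AccessToken {
    final String userId;
    final Instant expiresAt;

    AccessToken(String userId, Instant expiresAt) {
        this.userId = userId;
        this.expiresAt = expiresAt;
    }

    // Valid until expiry. After that, the client must go back to the
    // token service, which re-checks the user's status (including the
    // blocklist) before issuing a fresh token.
    boolean isValid(Instant now) {
        return now.isBefore(expiresAt);
    }
}
```

The token lifetime bounds how long a blocklisted user can keep using the system after being blocked.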
The third option is the widely used one. Most (all?) cloud providers have short-lived credentials for this specific goal: to make sure access can be revoked after some time.
Short-lived credentials vs. a dedicated service is a well-known trade-off; you can read more about a very similar problem here: https://en.wikipedia.org/wiki/Certificate_revocation_list
Related
I got this error when using a streaming subscription with impersonation.
After the connection opens and notifications are received successfully for a few minutes, it just pops up a bunch of these for almost all subscriptions.
How can I avoid this error?
One or more subscriptions in the request reside on another Client Access server. GetStreamingEvents won't proxy in the event of a batch request., The Availability Web Service instance doesn't have sufficient permissions to perform the request
I need to keep the connection stable and avoid this error.
Sounds like you haven't used affinity: https://learn.microsoft.com/en-us/exchange/client-developer/exchange-web-services/how-to-maintain-affinity-between-group-of-subscriptions-and-mailbox-server
Also, if it's a multi-threaded application: ExchangeService isn't thread safe and shouldn't be used across multiple threads.
I have a microservice architecture working with a Spring Zuul Gateway, as in the image below.
My authentication service returns an x-auth-token, generated by the Spring authentication resolver, and my token repository is redis. So users should use this service to authenticate and then use the other services.
All my other services connect to the same redis instance, so when they receive an x-auth-token they can get the user's session details. I normally do authorization with the @PreAuthorize annotation, specifying the roles that can access a controller or method.
Everything was working fine so far. Then I was asked to add rate-limit functionality to this architecture, so that, for example, a single user cannot make more than 1 POST request to a specific API in the books service. Also, if there were two books-service instances, I would want both to be counted as a single service where rate limiting is concerned.
I found tons of documents referring me to a project called spring-cloud-zuul-ratelimit. Looking at the documentation, I realized it supports redis as storage (good for me, because I already have redis) and it also supports rate limits per user.
The problem is that my Zuul gateway knows nothing about the users! It has no access to the redis storage. If I gave it access to redis, that problem might be solved, but another would arise: I would need to authorize the user twice, which takes more time and more redis traffic: once at the gateway, and once at each service (to check the roles and session details).
I'm looking for solutions that are most close to this list of needs:
Does not change my authentication method (I can't just switch to JWT or OAuth)
Does not duplicate authorization or redis queries
Balancing the requests between my services should not affect the rate limit. If each instance of service X is requested once for a single user, then the user has sent two requests.
Hopefully there is good Spring support for the answer.
I would prefer to be able to change the limits dynamically.
The Zuul gateway rate-limiter plugin basically tracks a counter of a user's requests during a time interval, based on a specific key (which could be the user's IP, some ID, the request path, or a custom combination produced by a custom key generator). You can add it to the existing Zuul gateway application.
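As a rough sketch, assuming the books service is registered under the Zuul route id `books` (double-check the exact property names against the spring-cloud-zuul-ratelimit documentation), the policy could look like:

```yaml
zuul:
  ratelimit:
    enabled: true
    repository: REDIS            # reuse the redis you already run
    policy-list:
      books:                     # the Zuul route id of the books service
        - limit: 1               # at most 1 request ...
          refresh-interval: 60   # ... per 60-second window
          type:
            - user               # counted per authenticated user
```

Because the counters live in redis, all instances of the books service behind the gateway share one limit, which covers the load-balancing requirement.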
Let's say the rate-limiter gateway uses "[clientIP]:[userID]:[method]:[path]" as the request-counter key stored in redis, e.g. "10.8.14.58:some#mail.com:POST:/books".
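A framework-free illustration of how such a composite key could be built (the real plugin composes its key through its `RateLimitKeyGenerator` hook, so this is only a sketch):

```java
// Builds the rate-limit counter key described above, e.g.
// "10.8.14.58:some#mail.com:POST:/books". A custom key generator
// registered in the gateway would produce something equivalent.
public class RateLimitKey {
    public static String of(String clientIp, String userId,
                            String method, String path) {
        return String.join(":", clientIp, userId, method, path);
    }
}
```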
Here are some options I can think of:
If the client sends some ID, you can use it directly as the rate-limiter combination key.
If the user only sends a JWT token, you can verify its claims to get the user ID (assuming it's embedded in the token), using the same secret key the authn service uses to generate the JWT, supplied to the Zuul gateway via app properties (OS env variables, Vault, etc.). Or you can just use the token itself as the user ID.
Move the authorization logic into the Spring Zuul + rate-limiter service. It will validate incoming requests to the author & books services and get the user ID from the token, then pass it as another header, e.g. "x-app-user-id", to the upstream services using a Spring Boot filter. This way the upstream services won't do any authn logic; they just read the user ID from the header. Communication between the author & books services can use the same header. This, of course, assumes the upstream servers can't be accessed directly from the outside network.
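A framework-free sketch of that third option, with the redis session lookup replaced by an in-memory map (in a real gateway this logic would live in a ZuulFilter and query redis; all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The gateway validates the session token once, then forwards the
// resolved user id to the upstream service in a trusted header.
public class GatewayAuthFilter {
    // Stand-in for the redis session lookup the gateway would do.
    private final Map<String, String> sessionStore = new HashMap<>();

    public void storeSession(String token, String userId) {
        sessionStore.put(token, userId);
    }

    // Returns the headers to send upstream, or empty if the token is
    // unknown (the request should then be rejected with 401).
    public Optional<Map<String, String>> forwardHeaders(String xAuthToken) {
        String userId = sessionStore.get(xAuthToken);
        if (userId == null) {
            return Optional.empty();
        }
        Map<String, String> headers = new HashMap<>();
        headers.put("x-app-user-id", userId);
        return Optional.of(headers);
    }
}
```

This keeps the redis query to one per request (at the gateway), satisfying the "no duplicate authorization" requirement.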
It might also be a good idea to use a different redis instance as the rate-limit key storage.
As for dynamic config: based on its documentation, you can adjust the rate-limit config via properties. I don't know whether it can be adjusted dynamically at runtime via Spring Cloud Config or another remote-config implementation without restarting the gateway app.
I want to use AWS SQS for communication between my microservices (and later possibly SNS). Each microservice can have multiple instances up.
Currently I'm trying to implement the Request/Response pattern of message queues.
As I understand it, the normal way is to have one request queue, and pass a unique response queue per service instance.
The consuming service will process the message and send the response to the given response queue. Thus, the response will always be returned to the correct instance of the requesting service.
My problem now comes with Cloudfoundry.
How it should work:
Service A needs to request data from Service B.
There is one queue named A-request-B.
Service A starts with 6 instances.
Every instance creates its own queue: B-response-A-instance[x]
Every instance of A sends its response-queue name in the request, so the response is routed to the correct queue.
This is the only way I know to guarantee that the response from B gets to the correct instance of A.
This doesn't work, as Cloudfoundry doesn't allow the "create-queue" call from SQS, even though I can connect to the SQS instance to send and receive messages.
The only way to create a queue is via the command line.
So I would have to create these 6 response-queues manually beforehand.
And if I start a 7th instance of A, it will fail as it doesn't have its own response queue.
I also tried using SQS temporary queues, but they also work by creating queues dynamically, which is not possible in Cloudfoundry.
I'm currently stuck with SQS, so switching to kafka/rabbitmq or something else is not possible.
Is there any other way to pass a response to the matching service instance? Or is there another way to create queues in cloud foundry?
Summary from comments above...
This doesn't work as Cloudfoundry doesn't allow the "create-queue" call from SQS
Cloud Foundry doesn't really care what messaging system you're using, unless you're using a Marketplace service to create it. In that case, Cloud Foundry will work on your behalf to create a service instance. It does this by talking to a service broker, which does the actual creation of the service instance and user credentials.
In your case, Cloud Foundry handles creating the credentials for AWS SQS through the AWS Service Broker. Unfortunately, the credentials the broker gives you don't have permission to create queues; they only allow sending and receiving messages on the specific queue that was created by the broker.
There's not a lot you can do about this, but there are a couple of options:
Don't use the Marketplace service. Instead, just go to AWS directly, create an IAM user, create your SQS resources, and give the IAM user permissions to them.
Then create a user provided service with the credentials and information for the resources you created. You can bind the user provided service to your apps just like a service created by the AWS Service broker. You'd lose the convenience of using the broker, but you won't have to jump through the hoops you listed when scaling up/down your app instances.
You could create a service instance through the broker, then create a service key. The service key is a long-lived set of credentials, so you could then go into AWS, look up the IAM user associated with that service key, and adjust its permissions so that it can create queues.
You would then need to create a user provided service, like the first option, insert the credentials and information for your service key and bind the user provided service to any apps that you'd like to use that service.
Don't delete the service key, or your modified user will be removed and your user provided service will stop working.
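For the first option, the user-provided service could be created roughly like this with the cf CLI (service/app names, region, and credential values are all placeholders):

```shell
# Create the queue and an IAM user in AWS yourself, then wrap the
# credentials in a user-provided service:
cf create-user-provided-service sqs-a-request-b -p '{
  "queue_url": "https://sqs.us-east-1.amazonaws.com/123456789012/A-request-B",
  "access_key_id": "<your-iam-access-key>",
  "secret_access_key": "<your-iam-secret-key>",
  "region": "us-east-1"
}'

# Bind it to the app just like a broker-created service:
cf bind-service service-a sqs-a-request-b
cf restage service-a
```

Because the IAM user you created can hold a `sqs:CreateQueue` permission, each app instance can then create its own response queue at startup.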
Hope that helps!
I am working on a microservice architecture using Spring Boot. We have implemented OAuth2 in an Auth Server.
My question is - If two microservices want to communicate what should be the best way?
As of now, I have discovered below options:
If each microservice verifies the token, then we can pass the same token along. But the problem is that the token can expire somewhere in between.
If we use the client_credentials grant, then we have two issues: first, we need to send the username to the next microservice; second, we need to make two calls, one to get the access token and one for the actual request.
If we do the token verification only in the API gateway (not in the microservices), then the gateway needs to send the username to every microservice, and the microservices' implementations need to change to accept that param/header.
Please suggest which option I should pick, and if there is a better option, please let me know.
Thanks in advance.
Not an expert either, but
If we do the token verification only in the API gateway (not in the microservices), then the gateway needs to send the username to every microservice, and the microservices' implementations need to change to accept that param/header.
could be changed this way:
You make authentication/authorisation the problem of the gateway.
When the gateway authorizes the client, it attaches a JWT to every microservice request it is going to make on behalf of the client (instead of sending the username). The JWT will contain all the information the microservices might need. If a microservice needs to call another microservice, it passes that token further along with the request.
So the idea is: for EVERY request that comes through the gateway, a NEW JWT is attached to the request. Then you don't have the expiry problem, and the tokens are easy to verify.
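A minimal sketch of the gateway minting such a per-request, short-lived JWT (hand-rolled HS256 using only the JDK, purely for illustration; a real gateway should use a maintained JWT library, and the claim names and lifetime here are assumptions):

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.time.Instant;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Mints a fresh HS256-signed JWT for one downstream request.
public class GatewayJwt {
    private static final Base64.Encoder B64 =
            Base64.getUrlEncoder().withoutPadding();

    public static String mint(String userId, Instant now, byte[] secret) {
        try {
            String header = B64.encodeToString(
                    "{\"alg\":\"HS256\",\"typ\":\"JWT\"}"
                            .getBytes(StandardCharsets.UTF_8));
            // Short expiry: the token only needs to outlive this request.
            long exp = now.plusSeconds(60).getEpochSecond();
            String payload = B64.encodeToString(
                    ("{\"sub\":\"" + userId + "\",\"exp\":" + exp + "}")
                            .getBytes(StandardCharsets.UTF_8));
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            String signingInput = header + "." + payload;
            String sig = B64.encodeToString(mac.doFinal(
                    signingInput.getBytes(StandardCharsets.UTF_8)));
            return signingInput + "." + sig;
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Each downstream service only needs the shared secret to verify the signature and check `exp` locally, with no call back to the gateway.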
I am not an expert on OAuth, but I have done a fair bit of work with microservices. When working with microservices, it is often a good idea to separate your services in such a way that:
They each know as little as possible about the concepts/concerns that they delegate to other services
Their dependency graph is acyclic, whether the services are microservices or part of a well-designed monolith
Take an example of Accounts and Orders. You could have the Accounts service know about users, authentication and authorization, sessions, and contact information. If a user wants to view an order, the accounts service could take all requests, ensure that the user is authorized to do so and then request the order directly from the Orders Service.
The downsides to this approach are:
The Accounts Service must pass all orders data through to the user client, which may result in code duplication and complexity
Changes to the Orders Service API may require changes to the Accounts Service to pass through new data
Upsides are:
The Accounts Service can authenticate with the Orders Service directly using a service-level authentication mechanism like an api token
The Orders Service may live on a private network
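The delegation described above can be sketched like this (framework-free; the service token, the ownership check, and the `ordersClient` stand-in for the real HTTP call are all illustrative assumptions):

```java
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// The Accounts service authorizes the user, then fetches the order
// from the Orders service using a service-level API token.
public class AccountsService {
    private static final String SERVICE_TOKEN = "orders-api-token"; // placeholder

    public static Optional<String> getOrderForUser(
            String userId, String orderId,
            Map<String, String> orderOwners,
            Function<Map<String, String>, String> ordersClient) {
        // Authorization: users may only view their own orders.
        if (!userId.equals(orderOwners.get(orderId))) {
            return Optional.empty();
        }
        // Service-to-service call authenticated with the API token.
        return Optional.of(ordersClient.apply(Map.of(
                "Authorization", "Bearer " + SERVICE_TOKEN,
                "orderId", orderId)));
    }
}
```

The client never talks to Orders directly, which is what lets the Orders Service live on a private network.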
Another approach might be to have a third service responsible for identity. The client would make requests directly to the Accounts Service and the Orders Service, each of which would then ask this third service, let's call it an Identity Service, whether the session is valid. If not, it would forward the user to authenticate (sign on) with the Identity Service. Then, on each request from the client, the Accounts and Orders Services would check with the Identity Service that the session is still valid.
Advantages to this approach are:
The Accounts and Orders Services do not need to know about usernames and passwords
Each service simply provides the data it is responsible for directly to the client
Downsides are:
This architecture is a bit more difficult to set up
A third approach is to implement either of the above within a single service. In the case of accounts and orders, I might argue that they are closely enough related that splitting them into separate services may not improve your architecture.
As I said, I am certainly not an expert in OAuth, but I have worked a fair bit with services and microservices.
Currently I'm building an application in a micro service architecture.
The first application is an API that handles user authentication, receives requests to initiate/keep a realtime connection with the user (via Socket.io or SockJS), and stores the socket id in the User object.
The second application is a WORKER that does some work and sometimes has to send realtime data to the user.
The question is: How should the second application (the WORKER) send realtime data to the user?
Should the WORKER send a message to the API, which then forwards it to the user? Or can the WORKER send the message to the user directly?
Thank you
In a perfect-world example, the service responsible for publishing realtime push notifications should be separated from the other services, since a microservice is a set of narrowly related methods and there is no relation between the authentication/"user" service and the realtime push-notification service. Breaking it down further, the authentication is really a separate service of its own; this is just FYI, and there might be a reason you did it this way.
How would the services communicate? There are actually many ways to implement the internal communication between services, for example an MQ solution, which would add more technology to your stack, like Rabbit MQ, Beanstalk, Gearman, etc.
You can also do the communication on top of the HTTP protocol, but you need to consider that HTTP calls add more cost.
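A framework-free sketch of the HTTP option: the WORKER POSTs to an internal endpoint on the API, and the API, which owns the socket connections, forwards the message to the user (the endpoint, port, and payload shape are illustrative; a real API would emit over Socket.io/SockJS instead of just recording the delivery):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ConcurrentLinkedQueue;

public class InternalPush {
    public static final ConcurrentLinkedQueue<String> delivered =
            new ConcurrentLinkedQueue<>();

    // The API side: an internal-only endpoint the worker can call.
    public static HttpServer startApi(int port) {
        try {
            HttpServer api = HttpServer.create(new InetSocketAddress(port), 0);
            api.createContext("/internal/push", exchange -> {
                String body = new String(exchange.getRequestBody().readAllBytes());
                // Here the API would look up the user's socket id and
                // emit the message; we just record the delivery.
                delivered.add(body);
                exchange.sendResponseHeaders(204, -1);
                exchange.close();
            });
            api.start();
            return api;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // The WORKER side: push real-time data through the API.
    public static int pushFromWorker(int port, String message) {
        try {
            HttpRequest req = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:" + port + "/internal/push"))
                    .POST(HttpRequest.BodyPublishers.ofString(message))
                    .build();
            return HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.discarding())
                    .statusCode();
        } catch (IOException | InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Routing through the API keeps the socket state in one place; the cost is the extra HTTP hop mentioned above, which an MQ-based design would replace with a queue.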
The ideal solution is for each service to have two interfaces through which it can be invoked: an HTTP interface and an MQ interface (console).