The other day I read an article on OAuth. In particular, it described the tokens that are exchanged between client and service provider over a series of requests.
The article also mentioned that OAuth is gaining significant popularity as an authorization layer for RESTful APIs. As I understand it, REST is supposed to be kept completely stateless.
The question: Doesn't this repeated token exchange torpedo REST's statelessness principle? IMHO the tokens can be seen as a kind of session ID, can't they?
OAuth tokens are explicitly a session identifier. Interaction is not stateless between requests in the OAuth token negotiation protocol, since the requests must be performed in a specific sequence, and the tokens do require per-client storage on the server, as you need to track things like when they were issued. So yes, OAuth does violate the strict principles of a RESTful architecture.
Unfortunately there's the Real World™ to contend with, where we need to do things like allow applications to authenticate on behalf of individuals without requesting their password, which OAuth does fairly well. It would be impossible to implement a similarly secure authentication scheme without this kind of state. Indeed, one of the changes required by OAuth (1.0a) was to add more state to the token negotiation protocol to mitigate a security risk.
So, does it torpedo REST's stateless principle? Yes. Does that matter? Not unless you live in an ivory tower :-)
Authentication is a state that must be tracked somehow when dealing with web interactions. Ultimately, whether your app is RESTful or not, the server must be able to track each user's "authenticated state", and unfortunately that requires some circumvention of the underlying stateless nature of HTTP and of any additional techniques (like REST) layered on top of it.
Hence, to develop any kind of authenticated app, some notion of state must be shoehorned in somewhere, and if that happens to be OAuth on top of REST, that's how it must be!
Related
I plan to set up a set of microservices behind an API gateway. I am new to microservices architecture, but I plan to add more services over time and keep this application highly extensible. The API gateway should manage the users and their permissions and should delegate the incoming requests to the underlying microservices. But my problem is: how can I create a relationship between the user at the gateway and an entity in a microservice?
Like in the picture above, I need to figure out the best practice for dealing with user relations in the underlying services. I want to implement all the services with Laravel; the gateway should use laravel\passport.
My thought was that the API gateway is responsible for authenticating users and forwarding requests to the services behind the gateway. If the user is authenticated, he has access to the services through the gateway. But how can I provide a service with information about the user? For example, if the user edits an item in service A, how can I store which user edited the item? What would be the approach to establish this relationship?
There are many aspects to consider when selecting an approach, so answering your question will mostly mean giving you pointers that you can research more deeply.
Here are some approaches you should review that will greatly depend on your service:
Authentication/Authorization method for the platform as a whole
How each individual service talks to each other (sync REST calls, messaging, GraphQL, GRPC, ...)
How individual services are secured (each service is public and does its own auth, every service is behind a secured network and only the gateway is public, a service mesh takes care of auth, ...)
The most common auth method in REST-based microservices is OAuth with JWT tokens. I recommend that you look deeper into that.
(Now just digressing a bit to demonstrate how much this varies depending on the use case and architecture)
Taking OAuth and looking at your question, you still have different flows in OAuth that you will use according to the use case. For example, generating tokens for users will be different than for services.
Then you still need to decide which token to use in each service: will the services behind the gateway accept user tokens, or only service-to-service tokens? This has implications to the architecture that you need to evaluate.
When using user tokens you can encode the user ID in the token, and extract it from there. But if you use user tokens everywhere, then it assumes services only talk to each other as part of a user flow, and you are enforcing that through the use of a user token.
If you go with service-to-service tokens (a more common approach, I'd say) you need to pass the user ID some other way (again, this depends on your chosen architecture). Thinking of REST, you can use headers, request parameters, the request path, or the request body (see the sketch below). You need to evaluate the trade-offs for each depending on the business domain of each service, which influences the API design.
If you don't use tokens at all because all your services are inside a secured network, then you still have to use some aspect of your protocol to pass the user ID (headers, parameters, etc.).
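To make the header option concrete, here is a minimal sketch of a gateway handler that verifies the user's JWT and forwards the user ID to a downstream service in a custom header. It is written in Python/Flask purely for brevity rather than in Laravel; the X-User-Id header name, the shared secret, the downstream URL, and the presence of a "sub" claim are all assumptions for the example, not anything mandated by OAuth or Laravel Passport.

```python
# Minimal sketch: the gateway decodes the user's JWT and forwards the user ID
# to a downstream service in a custom header (names below are assumptions).
import jwt        # PyJWT
import requests
from flask import Flask, request, abort

app = Flask(__name__)
GATEWAY_SECRET = "change-me"                 # shared signing secret (assumption)
SERVICE_A_URL = "http://service-a.internal"  # downstream service (assumption)

@app.route("/items/<item_id>", methods=["PUT"])
def edit_item(item_id):
    auth = request.headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        abort(401)
    try:
        claims = jwt.decode(auth[len("Bearer "):], GATEWAY_SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        abort(401)
    # Forward the request, adding the authenticated user's ID as a header
    # so service A can record which user edited the item.
    resp = requests.put(
        f"{SERVICE_A_URL}/items/{item_id}",
        json=request.get_json(silent=True),
        headers={"X-User-Id": str(claims["sub"])},  # assumes a "sub" claim
    )
    return resp.content, resp.status_code
```

The same idea translates directly to a Laravel middleware on the gateway that injects the header before proxying.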
Is it a best practice to have auth as a separate service in a microservice architecture application?
I have seen that in some microservices apps, authentication is built into each microservice.
Yes - you'd usually want to authenticate in a separate service (often this can even be an external service). Besides the obvious reason of avoiding duplication, the more important reason is security.
Getting authentication right can be a challenge (just search for OAuth, OpenID and/or SAML), not to mention registration flows for new users, revoking access, etc.
Your question is not very specific. Hence, generally speaking, the short answer is yes.
One of the principles you should follow when implementing a microservices architecture is to avoid duplication of responsibilities, especially on a functional level.
Authentication is no exception to this. On the contrary, it's a critical function that you typically want to centralize. There are different patterns that can help with ensuring that authentication and authorization are implemented in a consistent way across all services, most commonly using an API gateway.
I am currently reading a lot about microservices, but there are still some parts I don't understand. I made the following drawing:
Each microservice has two access points:
REST: for HTTP access
gRPC: for intra/background communication/exchanges
If I want to log in, I can just send an HTTP request to my Authentication service. But what about when I want to access the Stuff service, which requires you to already be connected?
Let's say that the user wants to display the stuff available in the database STUFF. The Stuff service will first check whether the "token" of the connected user is valid, by exchanging with the Authentication service, and then return either the stuff or a "login required" response.
So the thing I don't understand is: if every service that needs an already connected client has to exchange with Authentication, it will create a huge amount of traffic just to check each user request. So I thought about having one Authentication service per service, but since I should have only one database, wouldn't the database then become the bottleneck?
Also, if I understand correctly, each microservice should be on a separate server, not the same one?
I hope I am clear, don't hesitate to ask for more details !
Thanks in advance :)
Max
Edit 1
Based on #notionquest's answer:
So it should look more like this, right?
Also, based on Peter's comment, each service can implement its own middleware (JWT, as mentioned), so the API Gateway is only a "pass-through". However, I'm not sure this works well for me, since each service then makes a token check for every internal exchange, doesn't it?
For the stuff it's easy, since the token is checked only once. Now, let's say that after the user gets the stuff, he chooses one and wants to buy it. The "Buying" service will then call the Stuff service to verify the price of the item, but... it will have to check the user token too, since the stuff is behind authenticated access. So both the "Buying" service and the "Stuff" service check the token, which adds an extra check.
I thought about some kind of internal trusted access between services, but is it worth it?
Also, maybe you suggested implementing the middleware for each service because they each have REST access, but wouldn't the API Gateway just defeat the point of having that REST access?
There are multiple solutions available for this problem. One of them is the API Gateway pattern.
The first request goes to the API gateway
The API Gateway authenticates & authorizes the request
The authentication result is stored in a cache database such as Redis or Memcached, with an expiry time on it (a minimal sketch follows this answer)
The saved access token is returned to the client
The client can use the saved access token in subsequent calls for some time span (i.e. as long as the token is valid)
Once the token has expired, the API gateway will re-authenticate and share a new token with the client
This solution reduces the need to authenticate each request and improves performance.
The API Gateway is the single entry point for all the services, so you may not need a separate cache for each service.
Refer to the diagram on this page.
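As a rough illustration of the caching step above, here is a minimal sketch, assuming Redis as the cache, an opaque random token, and a 15-minute TTL (all example choices, not part of the pattern itself):

```python
# Sketch of caching an authenticated session at the gateway (assumptions:
# Redis as the cache, a 15-minute TTL, and an opaque access token as the key).
import secrets
import redis

cache = redis.Redis(host="localhost", port=6379)
TOKEN_TTL_SECONDS = 15 * 60

def issue_token(user_id: str) -> str:
    """Called once the gateway has authenticated the user."""
    token = secrets.token_urlsafe(32)
    cache.setex(f"session:{token}", TOKEN_TTL_SECONDS, user_id)
    return token

def check_token(token: str) -> str | None:
    """Called on subsequent requests; returns the user ID, or None if expired."""
    user_id = cache.get(f"session:{token}")
    return user_id.decode() if user_id is not None else None
```

Because Redis expires the key automatically, the gateway only has to re-run the full authentication once the cached entry is gone.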
Apart from #notionquest's answer, there is another approach which does not involve having an API gateway:
You can share a SESSION_SECRET among all your services, so the only task of your Authentication Service is to validate the username and password against the database, sign this information using the SESSION_SECRET, and return a JWT. All other services won't need to interact with the Authentication Service; they simply check whether the JWT is valid (i.e. its signature verifies) using the same SESSION_SECRET.
You then have two options:
Store all the user data you need in the token - this will increase the amount of data in transit from your client to the microservices, which can be prohibitive depending on the size of that information.
Store only the userId and request additional data as needed in each microservice, which, depending on how often you need it and how big the data is, may create the traffic problem you described (see the sketch below).
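A rough sketch of this shared-secret approach with only the userId in the token, assuming HS256-signed JWTs via PyJWT (the claim names and one-hour lifetime are illustrative choices, not requirements):

```python
# Sketch of the shared SESSION_SECRET approach: the auth service signs a JWT
# containing only the user ID; any other service verifies it locally, with no
# call back to the auth service. Claim names and lifetime are assumptions.
import time
import jwt  # PyJWT

SESSION_SECRET = "shared-across-all-services"  # distributed/rotated out of band

def issue_token(user_id: str) -> str:
    """Auth service: run only after username/password were checked against the DB."""
    now = int(time.time())
    return jwt.encode({"sub": user_id, "iat": now, "exp": now + 3600},
                      SESSION_SECRET, algorithm="HS256")

def verify_token(token: str) -> str | None:
    """Any other service: local signature + expiry check, no network round trip."""
    try:
        return jwt.decode(token, SESSION_SECRET, algorithms=["HS256"])["sub"]
    except jwt.InvalidTokenError:
        return None
```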
Note that you will not always be able to use this approach, but depending on your specific scenario and requirements, having this architecture in mind can be useful.
Also keep in mind that rotating the SESSION_SECRET can be tricky (although necessary for security reasons). AWS has just released a service called AWS Secrets Manager, so one idea to keep things simple would be to have your microservices periodically query a service like this for the currently valid SESSION_SECRET instead of having the value hardcoded or in environment variables.
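For example, a minimal sketch of that idea using boto3, assuming a secret named prod/session-secret and a five-minute refresh interval (both invented for the example):

```python
# Sketch: periodically refresh SESSION_SECRET from AWS Secrets Manager instead
# of hardcoding it. The secret name and refresh interval are assumptions.
import time
import boto3

_client = boto3.client("secretsmanager")
_cached_secret = None
_fetched_at = 0.0
REFRESH_SECONDS = 300

def current_session_secret() -> str:
    """Return the cached SESSION_SECRET, re-fetching it every few minutes."""
    global _cached_secret, _fetched_at
    if _cached_secret is None or time.time() - _fetched_at > REFRESH_SECONDS:
        resp = _client.get_secret_value(SecretId="prod/session-secret")
        _cached_secret = resp["SecretString"]
        _fetched_at = time.time()
    return _cached_secret
```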
While prototyping out an API & SDK, I've run into this question with several plausible solutions. I'm looking for help with some of the high level architecture. In short, it is guaranteed that some consuming applications of the API are going to want to configure their own authentication providers.
Options that I've been mulling over:
Keep the resource server and authorization server coupled, but figure out some way to delegate the authentication performed by one of the authentication providers in my auth manager to the client application.
This sounded promising until I realized that, in this particular use case, it's actually necessary that even my providing application not know the user's credentials.
Separate the resource server and make each consuming application responsible for providing an authorization server, and set those endpoints as part of the configuration when registering the consuming app with the resource provider.
This feels like an uncomfortable inversion of what is often desired when using authorization_code grant types. It also would require any "default" authorization providers to be implemented by each consuming application.
Some kind of delegating authorization server that falls back to a default if a client hasn't provided endpoints for their own authorization server.
This would probably be a good solution, but I'm not sure how to do it the "spring-security-oauth2" way or if I'd have to implement a bunch of my own stuff.
Create a default auth server, and optionally allow consuming applications to point to whichever auth server they want.
This seems like a viable approach in that it offers lots of customization. My concern is: how do I enforce some kind of registry with the resource server? The auth server is what approves consuming applications, but I don't want to let just any consuming application implement its own auth server, only some of them. Otherwise untrusted clients could end up approving themselves!?
In case this influences any guidance, my resource provider will need a fully inflated OAuth2Authentication object (which contains user details and client details).
This image mostly explains what I'm talking about, except I want multiple authorization servers and want to leave it to the consuming application to decide which authorization server to point at. How could I check on the resource-server side of things that the authorization server proxying the requests is an approved authorization server?
ADDENDUM:
I took a look at the existing implementation that's being used for this custom authentication case, and I guess we're just reading a token off their session that gets set by their own login service and building their user each time off of that. This sort of customization is a problem in that we're removing customizations from the provider side of things in favor of handling them in the consuming applications. So, I'm looking for solutions where consuming apps can define their own authentication means, to the point of even providing users that the providing application doesn't persist (which leads me to think it may need to be an entire auth server).
That being said, this seems like a potentially unsustainable inverted model (IMHO, the provider should be the maintainer of users and authorization, not the consuming apps). So, I'll probably recommend a more business oriented change.
I believe I have finally come up with a secure and maintainable way of solving this.
Let consuming applications optionally register an authentication callback with the authorization server.
Require incoming authorization requests from that application to the authorization server on behalf of a user to include a token; that token should be stored by the consuming application as a means of referencing whichever user is actively causing the API call.
When an authorization code request is received by the authorization server from an application that has registered one of these callbacks, then POST to that application's registered authentication callback and include the token that was provided by the consuming application in the request.
The consuming application should take the token that was POSTed to its registered authentication callback, look up the corresponding user, and return a response containing the full user object on whose behalf the providing application should operate (or some kind of error code if the token is invalid).
The authorization server should then generate an authorization code and return it to the callback URI submitted with the authorization code request. This means we're back on track according to step 4 in the diagram in the original question. The remaining steps can be carried out as-is.
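To make the callback step concrete, here is a rough sketch of the consuming application's side, written in Python/Flask purely for brevity rather than with spring-security-oauth2: the registered authentication callback receives the token the consuming app previously issued for its active user, looks that user up, and returns either the full user object or an error. The route, payload fields, and in-memory store are all assumptions for illustration.

```python
# Sketch of the consuming application's registered authentication callback.
# The route, payload fields, and in-memory store are illustrative only.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Token -> user mapping kept by the consuming application when it starts an
# API call on behalf of one of its own users.
ACTIVE_USER_TOKENS = {
    "opaque-token-123": {"id": 42, "username": "alice", "email": "alice@example.com"},
}

@app.route("/auth/callback", methods=["POST"])
def authentication_callback():
    token = (request.get_json(silent=True) or {}).get("token")
    user = ACTIVE_USER_TOKENS.get(token)
    if user is None:
        # The authorization server should treat this as a failed authentication.
        return jsonify({"error": "invalid_token"}), 401
    # Full user object on whose behalf the providing application should operate.
    return jsonify({"user": user}), 200
```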
There is a remaining question of how this might be implemented to take advantage of as much of the spring-security-oauth2 framework as possible while still achieving this extension.
I'm currently building a RESTful API for our web service, which will be accessed by 3rd party web and mobile apps. We want to have a certain level of control over API consumers (i.e. those web and mobile apps), so we can throttle API requests and/or block certain malicious clients. For that purpose we want every developer who will be accessing our API to obtain an API key from us and use it to access our API endpoints. For API calls that do not deal with specific user information, that's the only required level of authentication & authorization, which I call "app"-level A&A. However, some API calls deal with information belonging to specific users, so we need a way to allow those users to log in and authorize the app to access their data, which creates a second level (or "user"-level A&A).
It makes a lot of sense to use OAuth2 for the "user"-level A&A and I think I have a pretty good understanding of what I need to do here.
I also implemented an OAuth1-like scheme, where app developers receive an API key & secret pair, supply their API key with every call, and use the secret to sign their requests (again, it's very OAuth1-like and I should probably just use OAuth1 for that).
Now the problem I have is how to marry those two different mechanisms. My current hypothesis is that I continue to use the API key/secret pair to sign all requests so they can access all API endpoints, and for those calls that require access to user-specific information, apps will need to go through the OAuth2 flow, obtain access tokens, and supply them as well.
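For what it's worth, the OAuth1-like signing scheme described above might look roughly like the following (sketched in Python rather than Ruby for brevity; the header names and signature base string are assumptions, not any standard):

```python
# Rough sketch of the app-level scheme described above: the client signs each
# request with its API secret; the server recomputes and compares the signature.
# Header names and the signature base string are assumptions, not a standard.
import hashlib
import hmac

def sign_request(api_key: str, api_secret: str, method: str, path: str, body: str = "") -> dict:
    base_string = f"{method.upper()}&{path}&{body}"
    signature = hmac.new(api_secret.encode(), base_string.encode(), hashlib.sha256).hexdigest()
    return {"X-Api-Key": api_key, "X-Signature": signature}

def verify_request(lookup_secret, headers: dict, method: str, path: str, body: str = "") -> bool:
    """Server side: lookup_secret maps an API key to its stored secret (or None)."""
    api_secret = lookup_secret(headers.get("X-Api-Key"))
    if api_secret is None:
        return False
    expected = sign_request(headers["X-Api-Key"], api_secret, method, path, body)["X-Signature"]
    return hmac.compare_digest(expected, headers.get("X-Signature", ""))
```

Note that, as the answer below points out, any secret shipped inside a mobile app can be extracted, so this only really authenticates well-behaved clients.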
So, my question to the community is: does this sound like a good solution, or are there better ways to architect this?
I'd also appreciate any links to existing solutions that I could use instead of re-inventing the wheel (our service is Ruby/Rails-based).
Your key/secret pair isn't really giving you any confidence in the authorship of mobile apps. The secret will be embedded in the executable, then given to users, and there's really nothing you can do to prevent the user from extracting the key.
In the Stack Exchange API, we just use OAuth 2.0 and accept that all we can do is cutoff abusive users (or IPs, in earlier revisions without OAuth). We do provide keys for tracking purposes, but they're not secret (and grant nothing of value, so there's no incentive to steal them).
In terms of preventing abuse, what we do is throttle based on IP in the absence of an auth token, but switch to a per-user throttle when there is one.
When dealing with purely malicious clients, we unleash the lawyers (malicious in our case is almost always violation of cc-wiki guidelines); technical solutions aren't sophisticated enough in our estimation. Note that the incidence of malicious clients is really really low (single digits in years of operation, with millions of daily API requests).
In short, I'd ditch OAuth 1.0 and switch your throttles to a hybrid of IP-based and user-based.
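A minimal sketch of that hybrid throttle, assuming Redis counters over fixed one-minute windows (the limits and key names are invented for the example):

```python
# Sketch of the hybrid throttle: per-user when an access token is present,
# per-IP otherwise. Fixed one-minute windows via Redis counters; the limits
# and key names are assumptions for illustration.
import time
import redis

cache = redis.Redis()
USER_LIMIT_PER_MIN = 300
IP_LIMIT_PER_MIN = 60

def allow_request(ip: str, user_id: str | None) -> bool:
    window = int(time.time() // 60)
    if user_id is not None:
        key, limit = f"throttle:user:{user_id}:{window}", USER_LIMIT_PER_MIN
    else:
        key, limit = f"throttle:ip:{ip}:{window}", IP_LIMIT_PER_MIN
    count = cache.incr(key)
    if count == 1:
        cache.expire(key, 120)  # keep the counter a bit past the window
    return count <= limit
```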