There are several approaches to managing authentication in microservices, including service-to-service authentication, and I want to understand which approach is better.
Approach 1: When a user logs in, a call is made to the auth-service, which returns an opaque token (not a JWT). Each API call then carries that opaque token; the gateway validates it with the auth-service and generates a new JWT for the downstream services, so the downstream services do not have to call the auth-service on every request to validate the token.
Approach 2: The auth-service generates a JWT from the start.
2.1. The gateway validates the JWT and passes it through to the downstream services, and all services use the same token.
2.2. The gateway validates the JWT and generates a new token each time, and the downstream services do the same.
Each approach has its pros and cons.
Questions:
Approach 1: Does the gateway need to call the auth-service on every request to validate the token?
Approach 2: What should the expiration time for the JWTs be? Who handles expiration, and if a token expires mid-flow, who regenerates it? JWTs must be short-lived, so will that affect client behavior, or should refresh tokens come into the picture here?
So I am confused about which approach is good, or whether there are other approaches. Please advise.
JWTs should not be exposed to the outside world, since they could contain sensitive information in custom claims. This is especially true in a large company, where you can't keep track of what ends up in the different claims.
So if you use opaque tokens (which you should), the gateway should verify the token and exchange it for a JWT that can be used internally inside your network.
All services SHOULD always be set up with zero trust in mind.
All services should, if possible, fetch JWKs to verify incoming JWTs, and these JWKs should be rotated on a regular basis.
This is so that if a malicious actor gains access to the internal network, he can't make requests to any service just because he got past the gateway.
When it comes to expiration times, these should be set by the issuer and kept short, because one of the big problems with tokens is that it's easy to issue one but hard to revoke one.
The flow is usually as follows (a minimal sketch of the gateway exchange step appears after the list):
The client logs in and is issued an AT (access token) and an RT (refresh token).
The client presents the opaque AT to a service in the backend.
The gateway exchanges the opaque token for a JWT.
The gateway passes the JWT to the requested service.
The resource server validates the JWT.
If the request is denied, an HTTP status of 401 is returned to the calling client.
The client receives the 401 and presents its refresh token to the issuer, which issues a new AT.
The request is retried with the new AT.
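To make the exchange step concrete, here is a minimal sketch of the part where the gateway, after it has validated the opaque token with the auth-service, mints a short-lived internal JWT for the downstream services. The sketch assumes an HS256-signed internal token with a shared secret and hand-built JSON claims purely for illustration; in practice you would sign with a private key and publish the public keys via a JWKS endpoint, as described above, and use a proper JWT library.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.util.Base64;

// Hypothetical sketch: the gateway mints a short-lived internal HS256 JWT
// after the opaque token has been validated by the auth-service.
public final class InternalJwtMinter {

    private static final Base64.Encoder B64 = Base64.getUrlEncoder().withoutPadding();

    public static String mint(String subject, byte[] sharedSecret) throws Exception {
        long now = Instant.now().getEpochSecond();
        // Keep the internal token short-lived (e.g. 5 minutes) so revocation is less of a problem.
        String header = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
        String payload = "{\"sub\":\"" + subject + "\",\"iat\":" + now + ",\"exp\":" + (now + 300) + "}";
        String signingInput = encode(header) + "." + encode(payload);

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedSecret, "HmacSHA256"));
        String signature = B64.encodeToString(mac.doFinal(signingInput.getBytes(StandardCharsets.US_ASCII)));
        return signingInput + "." + signature;
    }

    private static String encode(String json) {
        return B64.encodeToString(json.getBytes(StandardCharsets.UTF_8));
    }
}
```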
We are trying to implement an application where the UI is in Angular and the backend is in Spring Boot.
We need to implement OpenID and OAuth2 in our application.
The backend APIs need to be more secure.
I am confused about which OAuth flow to use: the authorization code grant flow or the password grant flow.
Can anyone suggest which one to use in this scenario and why?
Storing tokens in the front end is not recommended. It also spreads token-handling logic across the FE and BE, because the BE has to validate tokens on each request anyway.
Therefore, to avoid handling and refreshing tokens in the frontend and to simplify the overall architecture, you can implement the authorization code flow in Spring Boot. This also reduces the risk of XSS exposure in the FE.
You could implement a dedicated endpoint that initiates the flow and receives the code from the identity server. It then exchanges this code for an id token and stores it, and creates an HttpOnly, SameSite=Strict session cookie for the front end. From that point onwards, all calls from the FE to the API inside your Spring Boot app automatically carry this cookie without any additional code on the FE.
To eliminate token storage on the BE, you could even put the token inside the cookie. However, the token may be quite large and may need to be broken into chunks. This would not affect the FE in any way.
You will need to check token expiry on each API call inside the BE and refresh the token in the backend as well. This keeps the user session seamless. If the token cannot be refreshed due to revocation or refresh token expiry, the API has to return 401 and the FE needs to initiate a re-login.
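As a rough sketch of that setup, assuming spring-boot-starter-oauth2-client and a Spring Security 5.x style configuration (treat it as an outline rather than a drop-in config for your exact versions):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            // Every API call from the Angular app must carry the session cookie.
            .authorizeRequests(authorize -> authorize.anyRequest().authenticated())
            // The authorization code flow is handled entirely server-side; on success,
            // Spring establishes a session and the browser gets an HttpOnly session cookie.
            .oauth2Login();
    }
}
```

The client registration (issuer URI, client id and secret) goes into the usual spring.security.oauth2.client.* properties, and newer Spring Boot versions also let you set the cookie's SameSite attribute via server.servlet.session.cookie.same-site=strict.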
There are two Spring Boot apps:
client
resource-server
There is a dev Okta account that is used as the auth server.
(Those two apps are a standard Spring Boot client -> resource-server setup, almost out of the box with Okta configured for them; there should not be a problem there.)
client - securely sends messages to --> resource-server (passing the access token in the header as proof that it's authorized to call it and get data back)
(It works as expected.)
But I am trying to figure out what's going on between all of them, traffic-wise.
I'm trying to spot the moment when the resource-server checks the token it got from the client, which in turn got it from the auth server.
Here is a sequence diagram of the standard OAuth 2.0 flow and the part that I want to debug (the arrow to the auth server):
And here is the communication between the client and the resource-server:
It seems I cannot confirm that the Resource Server (on the right) does any token validation with the auth-server (Okta)..?
Question: why not? From my understanding, it is supposed to validate it (somehow).
I was expecting to see a call from the resource-server to the auth-server (Okta) with a token validation request (IETF RFC 7662, October 2015) like this:
How to validate an OAuth 2.0 access token for a resource server?
I was expecting, let's say, that for every client call the resource server would check whether the token the client passes is valid. Yet I do not see any calls from the resource server to Okta that use the token in their requests.
This comes down to the difference between JWTs and opaque tokens.
It looks like your application is using JWTs, based on the calls I'm seeing to /keys.
When using JWT authentication the resource server will query the jwks_url (in this case /keys) on startup to retrieve a set of public keys that it can use to validate the JWT-encoded bearer tokens.
Then, when the resource server receives a bearer token in a request from the client it will validate its signature against a public key obtained from the jwks_url endpoint.
This means the resource server doesn't have to query the authorization server on every request.
You can read more about this process in the OAuth 2.0 Resource Server JWT section of the Spring Security reference documentation.
The question that you linked to refers to opaque tokens.
In this setup, the resource server must call the authorization server introspection endpoint to validate the token every time.
You can read more about this process in the OAuth 2.0 Resource Server Opaque Token section of the Spring Security reference documentation.
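For reference, here is a minimal sketch of what the two modes look like in a Spring Security resource server configuration; the Okta URLs are placeholders for your dev tenant, and the exact DSL details may differ slightly between versions:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class ResourceServerConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests(authorize -> authorize.anyRequest().authenticated())
            // JWT mode: public keys are fetched from the jwks_url (Okta's /keys endpoint) and cached,
            // so signatures are checked locally and no call to Okta is made per request.
            .oauth2ResourceServer(oauth2 -> oauth2
                .jwt(jwt -> jwt.jwkSetUri("https://{yourOktaDomain}/oauth2/default/v1/keys")));

        // Opaque-token mode (what the linked question describes): every request triggers
        // an introspection call to the authorization server instead.
        // http.oauth2ResourceServer(oauth2 -> oauth2
        //     .opaqueToken(opaque -> opaque
        //         .introspectionUri("https://{yourOktaDomain}/oauth2/default/v1/introspect")
        //         .introspectionClientCredentials("client-id", "client-secret")));
    }
}
```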
In a context with the following services:
API Gateway/OIDC client: connects to an external OpenID Connect provider (to get access, refresh and id tokens) and acts as a proxy, forwarding requests to the other services with the access token (authorization code flow)
Several resource servers: incoming requests are handled by the API Gateway and include the access token (for validation, using the keys exposed by the OIDC provider)
I am using the Spring Security 5.2 OAuth2 client/resource server libraries.
What would be the recommended, secure way to make all the resource server services aware of the user information (included in the API token)?
I am evaluating several options:
1. Include the id_token in the request sent to the services. Each service can then validate the token (in a filter).
2. Make the API Gateway act as a token issuer and issue a new, enhanced token based on the original one. The resource servers will then have to validate the received token with a new key exposed by the API Gateway/token issuer. With this solution, a custom AuthenticationManager has to be implemented.
I think option 2 is the more secure and future-proof one, but are there any downsides I should consider? There may also be other alternatives.
You should be able to achieve your goals without issuing a second level of token or sending id tokens to APIs. A common gateway solution is as follows (a small sketch of the user-info forwarding step appears after the list):
The OpenID Connect provider issues tokens to the client and does all the deep stuff, such as auditing of issued tokens and providing UIs for managing them
The client sends the access token to the resource server via the API Gateway
The API Gateway validates the access token, which can involve an introspection call to the OIDC provider
The API Gateway can send the access token to the user info endpoint of the OIDC provider to get user info, then forward this to the resource servers - perhaps via headers
The API Gateway can be configured to cache claims (token + user info details) for subsequent calls with the same access token
Resource servers sometimes run in a locked-down Virtual Private Cloud and don't need to revalidate the access token (if you are sure this is safe)
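Here is a small sketch of the user-info step from the gateway's point of view, assuming Java 11's HttpClient; the endpoint URL, the class name and the naive per-token cache are illustrative only, not part of any particular gateway product:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical gateway helper: fetches user info for an already validated access token and
// caches the raw JSON per token so repeated calls with the same token don't hit the provider again.
public final class UserInfoForwarder {

    private static final String USERINFO_URI = "https://provider.example.com/oauth2/v1/userinfo";
    private final HttpClient http = HttpClient.newHttpClient();
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String userInfoFor(String accessToken) throws Exception {
        String cached = cache.get(accessToken);
        if (cached != null) {
            return cached;
        }
        HttpRequest request = HttpRequest.newBuilder(URI.create(USERINFO_URI))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IllegalStateException("userinfo call failed: " + response.statusCode());
        }
        // The gateway would typically parse this JSON and forward selected claims to the
        // resource servers as headers (e.g. X-User-Id, X-User-Email). A real cache should
        // also evict entries once the access token expires.
        cache.put(accessToken, response.body());
        return response.body();
    }
}
```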
AWS API Gateway works like this when calling Lambda functions. I actually like the pattern in terms of extensibility, and it can also be used in standalone APIs.
My write-up may give you some ideas, along with some sample authorizer code and a class that does the OAuth work.
I'm writing an API back-end that I want to secure with OpenID Connect (OIDC). I've been reading the documentation, but I'm still a bit confused about what process applies to each and every API request. The OpenID Connect code flow appears to be:
Which I'm fine with as a one-time process. My back-end API sees an authorization code in the HTTP headers and sends a request to the authorization server to get the id token. Assuming this validates OK, the requested data is returned in the API response.
But assuming the same user then makes lots of requests to this API, what happens on subsequent requests? Is there some sort of session created in this mechanism? Do I keep receiving the same authorization code? Do I have to keep sending these back-channel requests to the authorization server?
Or should I even return the JWT id token as a cookie? That way I get the self-contained id token coming back in future requests, with no need for a server-side session or further round trips.
I've been reading the documentation but I'm still a bit confused what process applies to each and every API request
It is not the API that should follow the OpenID Connect protocol; it's the client that should do it.
My back-end API sees an authorization code in the HTTP headers, and sends a request to the authorization server to get the id token. Assuming this validates OK, the data requested is returned in the API response.
The authorization code must be used by the client application, not by the API endpoint. Also, the authorization code must never be exposed to other entities.
You should use the id token sent with OpenID Connect to authenticate the end user from your client application. To access the API, you should use access tokens.
What to do at the API endpoint?
I think this is where you are struggling. Your client application should send a valid access token to get access to the API endpoint. At the API endpoint, you can use the OAuth 2.0 introspection endpoint to validate the token.
RFC7662 - OAuth 2.0 Token Introspection
This specification defines a protocol that allows authorized protected resources to query the authorization server to determine the set of metadata for a given token that was presented to them by an OAuth 2.0 client.
Note that OpenID Connect is built on top of OAuth 2.0. This means you can use anything defined in OAuth 2.0, including the introspection endpoint. Use this endpoint to verify the validity of the access token.
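As an illustration, here is a minimal sketch of such an introspection call from the API, assuming Java 11's HttpClient and a provider that protects the endpoint with HTTP Basic client credentials (the URL and credentials are placeholders):

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical sketch of an RFC 7662 token introspection request made by the API.
public final class TokenIntrospector {

    public static String introspect(String accessToken) throws Exception {
        String basicAuth = Base64.getEncoder()
                .encodeToString("api-client-id:api-client-secret".getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://issuer.example.com/oauth2/introspect"))
                .header("Authorization", "Basic " + basicAuth)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "token=" + URLEncoder.encode(accessToken, StandardCharsets.UTF_8)))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON response contains at least an "active" field; an inactive token must be rejected.
        return response.body();
    }
}
```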
What if you want end-user details?
OpenID Connect defines a user info endpoint:
User info endpoint
The UserInfo Endpoint is an OAuth 2.0 Protected Resource that returns Claims about the authenticated End-User. To obtain the requested Claims about the End-User, the Client makes a request to the UserInfo Endpoint using an Access Token obtained through OpenID Connect Authentication. These Claims are normally represented by a JSON object that contains a collection of name and value pairs for the Claims.
Here too, you use an access token to get user information from this endpoint. The response tells you which end user the token was issued to.
Depending on your specific API requirements, you can do a token introspection or obtain user information from the user info endpoint. Once that is done, you may go ahead and establish an authenticated session. You might use both endpoints if you need all the available information.
Alternatively (instead of sessions), your API can maintain an access token cache. This removes the need to validate the token on each and every API call. But be aware that tokens have an expiration time; you must take token expiration into account if you choose this solution.
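For example, here is a minimal sketch of such a cache, keyed by token and evicted once the token's expiry (taken from the introspection response) has passed; the class and method names are illustrative:

```java
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical validation cache: remembers that a token was accepted until its expiry,
// so the API does not have to call the introspection endpoint on every request.
public final class TokenValidationCache {

    private final Map<String, Instant> acceptedUntil = new ConcurrentHashMap<>();

    /** Record a successfully introspected token together with its expiry time. */
    public void remember(String token, Instant expiresAt) {
        acceptedUntil.put(token, expiresAt);
    }

    /** Returns true only if the token was previously accepted and has not yet expired. */
    public boolean isStillValid(String token) {
        Instant expiresAt = acceptedUntil.get(token);
        if (expiresAt == null) {
            return false;                 // unknown token: caller must introspect it
        }
        if (Instant.now().isAfter(expiresAt)) {
            acceptedUntil.remove(token);  // expired: drop it and force re-validation
            return false;
        }
        return true;
    }
}
```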
P.S. - Client vs resource server
In OpenID Connect and OAuth 2.0 terms, a client could be a simple web page, a desktop application, or even a server-hosted application.
client
An application making protected resource requests on behalf of the resource owner and with its authorization. The term "client" does not imply any particular implementation characteristics (e.g., whether the application executes on a server, a desktop, or other devices).
Obtaining tokens and using them is the duty of the client application.
On the other hand, the resource server contains protected resources:
resource server
The server hosting the protected resources, capable of accepting and responding to protected resource requests using access tokens.
The resource server exchanges its resources for access tokens. If we map the same scenario onto basic authentication, access tokens replace the username/password sent in the authentication headers.
Typically you'd secure a (pure) API with OAuth 2.0, not OpenID Connect. The Client accessing your API should obtain an OAuth 2.0 access token and in order to do that it may choose to use OpenID Connect to obtain that token. That is all independent of the API, which will only see the access token. The API (or Resource Server in OAuth 2.0 terminology) is not depicted in your diagram.
I'm trying to implement a token-based authentication approach:
Every successful login creates a new token.
If the user selects "keep me logged in" or is using a mobile device, the token is persisted in a Redis database without an expiration date. Otherwise, the token expires in 20 minutes.
Once the user is authenticated, the token from each subsequent request is checked against my Redis database.
I'm wondering how I can identify devices. In the case of mobile devices, I can use a device identifier. But how can I identify a browser?
Example: The user logs in using Chrome and selects "keep me logged in". A token is generated and persisted together with the browser name in Redis. If the user logs in from Firefox, I save that token and "Firefox" in the database. The token is created on successful authentication and stored in Redis. Is it fine to persist only the token and the browser where the token is being used? Or do I need to persist the IP as well?
Additional question: how can I prevent attackers from stealing the token from a cookie?
How token-based authentication works
In a few words, an authentication scheme based on tokens follows these steps (a minimal sketch of the server side appears after the list):
The client sends its credentials (username and password) to the server.
The server authenticates the credentials and generates a token.
The server stores the previously generated token in some storage along with the user identifier and an expiration date.
The server sends the generated token to the client.
In every request, the client sends the token to the server.
On each request, the server extracts the token from the incoming request. With the token, the server looks up the user details to perform authentication and authorization.
If the token is valid, the server accepts the request.
If the token is invalid, the server refuses the request.
The server can provide an endpoint to refresh tokens.
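Here is a minimal sketch of those server-side steps, with an in-memory map standing in for the storage (a real implementation would persist to a database such as Redis and ideally store a hash of the token rather than the token itself):

```java
import java.security.SecureRandom;
import java.time.Duration;
import java.time.Instant;
import java.util.Base64;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical token store illustrating issue / validate / expire, with a plain map standing in for Redis.
public final class OpaqueTokenStore {

    private record Session(String userId, Instant expiresAt) {}

    private final SecureRandom random = new SecureRandom();
    private final Map<String, Session> sessions = new ConcurrentHashMap<>();

    /** Steps 2-4: after the credentials check succeeds, generate and store a random token. */
    public String issue(String userId, Duration timeToLive) {
        byte[] bytes = new byte[32];
        random.nextBytes(bytes);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        sessions.put(token, new Session(userId, Instant.now().plus(timeToLive)));
        return token;
    }

    /** Steps 6-8: on every request, look the token up and reject it if unknown or expired. */
    public Optional<String> authenticate(String token) {
        Session session = sessions.get(token);
        if (session == null || Instant.now().isAfter(session.expiresAt())) {
            sessions.remove(token);
            return Optional.empty();
        }
        return Optional.of(session.userId());
    }
}
```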
How to send credentials to the server
In a REST application, each request from the client to the server must contain all the information necessary for the server to understand it. That way you do not depend on any session context stored on the server and you do not break the stateless constraint of the REST architecture defined by Roy T. Fielding in his dissertation:
5.1.3 Stateless
[...] each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client. [...]
When accessing protected resources that require authentication, each request must contain all the data necessary to be properly authenticated/authorized. This means authentication is performed for each request.
Have a look at this quote from RFC 7235 regarding considerations for new authentication schemes:
5.1.2. Considerations for New Authentication Schemes
There are certain aspects of the HTTP Authentication Framework that put constraints on how new authentication schemes can work:
HTTP authentication is presumed to be stateless: all of the information necessary to authenticate a request MUST be provided in the request, rather than be dependent on the server remembering prior requests. [...]
And authentication data (credentials) should go in the standard HTTP Authorization header. From RFC 7235:
4.2. Authorization
The Authorization header field allows a user agent to authenticate itself with an origin server -- usually, but not necessarily, after receiving a 401 (Unauthorized) response. Its value consists of credentials containing the authentication information of the user agent for the realm of the resource being requested.
Authorization = credentials
[...]
Please note that the name of this HTTP header is unfortunate, because it carries authentication data instead of authorization data. Anyway, this is the standard header for sending credentials.
When performing token-based authentication, tokens are your credentials. In this approach, your hard credentials (username and password) are exchanged for a token that is then sent in each request.
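Concretely, that means every call to a protected resource carries the token in the Authorization header, usually with the Bearer scheme; a tiny sketch (the URL is a placeholder):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical client call: the previously issued token travels in the Authorization header.
public final class ApiCall {

    public static String fetchOrders(String token) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://api.example.com/orders"))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();   // a 401 here means the token was rejected
    }
}
```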
What a token looks like
An authentication token is a piece of data generated by the server which identifies a user. Basically, tokens can be opaque (revealing no details other than the value itself, like a random string) or self-contained (like a JSON Web Token):
Random string: A token can be issued by generating a random string and persisting it to a database with an expiration date and a user identifier associated with it.
JSON Web Token (JWT): Defined by RFC 7519, it's a standard method for representing claims securely between two parties. A JWT is a self-contained token that lets you store a user identifier, an expiration date and whatever else you want (but don't store passwords) in a payload, which is JSON encoded as Base64. The payload can be read by the client, and the integrity of the token can be easily checked by verifying its signature on the server. You won't need to persist JWTs if you don't need to track them. However, by persisting the tokens, you gain the possibility of invalidating and revoking them. To keep track of JWTs, instead of persisting the whole token you could persist the token identifier (the jti claim) and some metadata (the user you issued the token for, the expiration date, etc.) if you need to. To find some great resources for working with JWT, have a look at http://jwt.io.
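As an illustration of that signature check, here is a minimal sketch of how a server could verify an HS256-signed JWT and read its payload using only the JDK; a real application would normally use a JWT library and also validate the exp and other registered claims:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Hypothetical sketch: verify the HMAC-SHA256 signature of a JWT and decode its payload.
public final class JwtCheck {

    public static boolean hasValidSignature(String jwt, byte[] secret) throws Exception {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) {
            return false;                                   // expected header.payload.signature
        }
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] expected = mac.doFinal((parts[0] + "." + parts[1]).getBytes(StandardCharsets.US_ASCII));
        byte[] provided = Base64.getUrlDecoder().decode(parts[2]);
        return MessageDigest.isEqual(expected, provided);   // constant-time comparison
    }

    /** The payload is just Base64url-encoded JSON; anyone holding the token can read it. */
    public static String payload(String jwt) {
        String encodedPayload = jwt.split("\\.")[1];
        return new String(Base64.getUrlDecoder().decode(encodedPayload), StandardCharsets.UTF_8);
    }
}
```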
Tip: Always consider removing old tokens in order to prevent your database from growing indefinitely.
How to accept a token
You should never accept expired tokens or tokens which were not issued by your application. If you are using JWT, you must check the token signature.
Please note, once you issue a token and give it to your client, you have no control over what the client will do with the token. No control. Seriously.
It's a common practice to check the User-Agent header field to tell which browser is being used to access your API. However, it's worth mentioning that HTTP headers can be easily spoofed and you should never trust your client. Browsers don't have a unique identifier, but you can get a decent level of fingerprinting if you want.
I don't know your security requirements, but you can always try the following on your server to enhance the security of your API (a small sketch follows the list):
Check which browser the user was using when the token was issued. If the browser is different in subsequent requests, just refuse the token.
Get the client remote address (that is, the client IP address) when the token was issued and use a third-party API to look up the client's location. If a subsequent request comes from an address in another country, for example, refuse the token. To look up a location by IP address, you can try free APIs such as MaxMind GeoLite2 or IPInfoDB. Keep in mind that hitting a third-party API for every request your API receives is not a good idea and can seriously hurt performance, but you can minimize the impact with a cache that stores the client remote address and its location. There are a few cache engines available nowadays, to mention a few: Guava, Infinispan, Ehcache and Spring.
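Here is a minimal sketch of the first check: remember the User-Agent (and optionally the IP) seen when the token was issued and compare them on later requests. The storage and names are illustrative, and remember that these values can be spoofed:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical check: remember the User-Agent (and optionally the IP) seen at issuance
// and refuse the token if a later request presents different values.
public final class TokenBindingCheck {

    private record Issuance(String userAgent, String remoteAddress) {}

    private final Map<String, Issuance> issuedWith = new ConcurrentHashMap<>();

    public void recordIssuance(String token, String userAgent, String remoteAddress) {
        issuedWith.put(token, new Issuance(userAgent, remoteAddress));
    }

    /** Returns false when the request does not look like the one the token was issued to. */
    public boolean matchesIssuance(String token, String userAgent, String remoteAddress) {
        Issuance original = issuedWith.get(token);
        if (original == null) {
            return false;                                  // unknown token
        }
        // Comparing the IP as well is stricter, but it breaks users behind changing NAT or mobile networks.
        return original.userAgent().equals(userAgent)
                && original.remoteAddress().equals(remoteAddress);
    }
}
```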
When sending sensitive data over the wire, your best friend is HTTPS, which protects your application against man-in-the-middle attacks.
By the way, have I mentioned you should never trust your client?
Once the server receives the request from the client, it contains the User-Agent header. This attribute will help us identify the client.
Please refer to this link: How do I detect what browser is used to access my site?