I have a system which gives access to a set of resources via access tokens. When clients need to access a particular resource, they ask for a token (one resource, one token). I need to make the tokens one-off (or at least time-limited), to ensure that even if a token is leaked, it soon becomes inactive.
What is the proper way to achieve that in a CQRS-based system? Querying the resource should not change the system state. In other words, we can't invalidate the token in the query handler. Can we?
These are different concerns. What I would do:
The edge checks the authorisation (using the provided token) and
calls the query handler, passing along the query/token info
The query handler returns the result
The edge publishes an event "TokenHasBeenUsed"
The edge returns the query result
Token provider consumes the event and invalidates the token.
You can also build in scheduling, invalidating the token after a while if it has not been used. Plus, you can have a usage counter or similar, and none of it needs to be blended into the edge or the query handler.
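For illustration, here is a minimal Java sketch of that edge flow. The functional interfaces and the "TokenHasBeenUsed" name are placeholders for whatever token store, query handler, and message bus you actually use:

import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;

// Illustrative only: the edge validates the token, delegates the read, and
// publishes the "token has been used" fact; the query handler stays side-effect free.
public class ResourceQueryEdge<Q, R> {

    private final Predicate<String> tokenIsValid;      // authorisation check
    private final Function<Q, R> queryHandler;         // pure read, no state change
    private final Consumer<String> publishTokenUsed;   // publishes "TokenHasBeenUsed"

    public ResourceQueryEdge(Predicate<String> tokenIsValid,
                             Function<Q, R> queryHandler,
                             Consumer<String> publishTokenUsed) {
        this.tokenIsValid = tokenIsValid;
        this.queryHandler = queryHandler;
        this.publishTokenUsed = publishTokenUsed;
    }

    public R handle(String token, Q query) {
        if (!tokenIsValid.test(token)) {
            throw new SecurityException("Invalid or expired token");
        }
        R result = queryHandler.apply(query);   // read-only query
        publishTokenUsed.accept(token);         // the edge, not the query handler, triggers invalidation
        return result;
    }
}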
Should I pay for every read from NEAR protocol?
TL;DR: No, you should not.
In NEAR protocol there are two ways to interact with smart contracts:
Submit a transaction with a FunctionCall action, which will get the specified method executed on the chunk producing nodes and the result will be provable through the blockchain (in terms of near-api-js these are "change methods")
Call query(call_function) JSON RPC method, which will get the specified method executed on the RPC node itself in a read-only environment, and the call will never be recorded/proved through the blockchain (in terms of near-api-js these are "view methods")
You can change the state or chain operations (e.g. cross-contract calls, token transfers, or access key addition/deletion) only through the first approach, since the blockchain expects the user to cover the execution costs: the user must sign their transaction and will be charged for the execution.
Sometimes you don't need to change the state; you only want to read a value stored on the chain, and paying for that is suboptimal (though if you need to prove that the operation has been made, it might still be desirable). In this case, you would prefer the second approach. Calling a method through JSON RPC is free of charge and provides a limited context during the contract execution, but that is enough in some scenarios (e.g. when you want to check the staking pool fee, who the owner of the contract is, etc).
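For illustration, such a view call can be made with a plain JSON RPC request. The sketch below uses the JDK's built-in HttpClient against the public mainnet RPC endpoint and assumes a staking pool contract exposing get_reward_fee_fraction, so treat the account id and method name as placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Free, read-only view call via the NEAR JSON RPC "query" method.
public class NearViewCall {
    public static void main(String[] args) throws Exception {
        String argsBase64 = Base64.getEncoder().encodeToString("{}".getBytes()); // empty JSON args
        String body = """
            {
              "jsonrpc": "2.0",
              "id": "dontcare",
              "method": "query",
              "params": {
                "request_type": "call_function",
                "finality": "final",
                "account_id": "example-pool.poolv1.near",
                "method_name": "get_reward_fee_fraction",
                "args_base64": "%s"
              }
            }
            """.formatted(argsBase64);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://rpc.mainnet.near.org"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The contract's return value comes back JSON-encoded in result.result; no gas is charged.
        System.out.println(response.body());
    }
}

No signature and no account are required for this call, which is exactly why it cannot change any state.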
Up until now I have been handling authorization in the CommandHandlers.
For example, I have an aggregate "Team" containing a list of managers (the AggregateIdentifier of a User). All command handlers in the Team aggregate then verify that the user executing the command is a manager of the team.
The userId is injected as metadata in a CommandHandlerInterceptor based on the SecurityContext.
My main concern is that, when I use sagas, it becomes additional overhead to maintain the user context across the commands issued against different aggregates. Aside from that, the manager association can expire while the saga is running, causing subsequent commands to fail and leading to an incomplete state which also needs to be handled with some rollback functionality.
Is it better to do the authorization in my controller layer to avoid the additional overhead or should I see it more as good practice to let my CommandHandlers decide whether the command is valid for the aggregate?
Authorization to perform certain operations/commands is something which I'd argue isn't domain-specific logic. Instead, it is more a form of cross-cutting concern which you need throughout your application. Thus, placing it in the @CommandHandler annotated method is not ideal, in my view. However, placing it close by makes a lot of sense.
You have pointed out that you are already using a CommandHandlerInterceptor to populate the Spring SecurityContext, thus I am assuming you are using a CommandDispatchInterceptor to populate the command's MetaData with that information when you send a command out. This is a great use of the interceptor logic indeed, so I'd keep that in place. This, however, sets the information; it doesn't validate it.
To that end, you could build your own Handler Enhancer, which validates security metadata on a command. You could even build a dedicated annotation you'd add next to the @CommandHandler annotation, which describes the required roles. That way, the method still portrays which roles you need for the given command, but the actual validation is done in the Handler Enhancer for you.
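As a rough sketch of that idea, assuming Axon Framework 4's HandlerEnhancerDefinition API (the @RequiresRole annotation and the "roles" metadata key are invented for illustration):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Map;
import java.util.Set;

import org.axonframework.messaging.Message;
import org.axonframework.messaging.annotation.HandlerEnhancerDefinition;
import org.axonframework.messaging.annotation.MessageHandlingMember;
import org.axonframework.messaging.annotation.WrappedMessageHandlingMember;

// Hypothetical annotation placed next to @CommandHandler to declare the required role.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface RequiresRole {
    String role();
}

// Handler Enhancer that wraps handlers carrying @RequiresRole and checks the
// "roles" metadata entry, assumed to be set by a CommandDispatchInterceptor.
public class RequiresRoleHandlerEnhancer implements HandlerEnhancerDefinition {

    @Override
    public <T> MessageHandlingMember<T> wrapHandler(MessageHandlingMember<T> original) {
        return original.annotationAttributes(RequiresRole.class)
                .map(attributes -> (MessageHandlingMember<T>) new RoleCheckingMember<>(original, attributes))
                .orElse(original);
    }

    private static class RoleCheckingMember<T> extends WrappedMessageHandlingMember<T> {

        private final String requiredRole;

        RoleCheckingMember(MessageHandlingMember<T> delegate, Map<String, Object> attributes) {
            super(delegate);
            this.requiredRole = (String) attributes.get("role");
        }

        @Override
        public Object handle(Message<?> message, T target) throws Exception {
            @SuppressWarnings("unchecked")
            Set<String> roles = (Set<String>) message.getMetaData().getOrDefault("roles", Set.of());
            if (!roles.contains(requiredRole)) {
                throw new SecurityException("Missing required role: " + requiredRole);
            }
            return super.handle(message, target);   // proceed to the actual @CommandHandler
        }
    }
}

You would then register the enhancer with Axon (e.g. via a META-INF/services file or your configuration), and any @CommandHandler method carrying @RequiresRole gets the check applied before the handler runs.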
Now, let's circle back to your question:
Is it better to do the authorization in my controller layer to avoid the additional overhead or should I see it more as good practice to let my CommandHandlers decide whether the command is valid for the aggregate?
I think it's fine to do it in the aggregate, potentially making it cleaner through the use of a Handler Enhancer. When it comes to your concern about the Saga, I think you should see that separately. The Saga handles events: facts that something has happened. Ignoring such a fact because the person who initiated the operations leading to it doesn't have the rights doesn't change the point that it has still happened. Added to that, you are indeed given no guarantees about the timing of the Saga at all. Maybe your Saga deals with historical events, in which case the user's rights at that moment are completely out of scope.
If possible within your system, I would regard any command the Saga wants to publish as being sent by a "system user". The Saga is not something your users (who have specific roles) will directly influence; it is all indirect. The Saga is internal to your system, hence it is the system describing the intent to perform an operation.
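For example (illustrative only; the metadata key is whatever your dispatch interceptor and Handler Enhancer agree on), a saga could attach a system identity when dispatching a command:

import org.axonframework.commandhandling.GenericCommandMessage;
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.messaging.MetaData;

// Sketch: dispatch a follow-up command from a saga as the "system user" instead of
// propagating the original end user's identity. Names are illustrative.
public final class SystemCommands {

    private SystemCommands() {
    }

    public static void sendAsSystem(CommandGateway commandGateway, Object command) {
        commandGateway.send(
                GenericCommandMessage.asCommandMessage(command)
                        .withMetaData(MetaData.with("userId", "system")));
    }
}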
That's my two cents on the situation, hope this helps you out, @Vincent!
We are using the ADAL Mac library to authenticate. When using this library we get a 300 error (AD_ERROR_CACHE_MULTIPLE_USERS) with the description:
The token cache store for this resource contains more than one user. Please set the 'userId' parameter to the one that will be used.
When does this happen? How should one handle this scenario?
Background
ADAL has a token cache for all access/refresh tokens on the device. The cache keys on things like the user, resource being requested, etc.
The app can get into a state in which there are multiple tokens in the cache for the same request. While these tokens may represent different information, the information provided in the token lookup request was ambiguous in some way. A simple example:
Cache
hash(userA,B,C) -> token pair 1
hash(userB,B,C) -> token pair 2
hash(userA,F,G) -> token pair 3
Lookup (AcquireTokenSilent)
So now we do an AcquireTokenSilent request (a cache lookup). This request doesn't have to specify every key the cache pivots on. For example,
AcquireTokenSilent(B, C)
There's ambiguity in this request, it could map to token pair 1 or 2.
Handling this Error
So there are two workarounds at this point:
Provide more information in the same request.
You can do a new AcquireTokenSilent request providing some more information that allows ADAL to definitively pick a cache entry. In this case, ADAL needs a userId, meaning your app would need to store or look up this value and pass it in the request. In our example,
AcquireTokenSilent(userA, B, C)
Ignore the cache and start from scratch.
If you cannot retrieve the userId and have no way to recover, your app can perform an interactive authentication request and ask the end user to enter their credentials. If you do have a valid token cached, this is an adverse experience, as your users will need to sign in more often than necessary. This would just be a standard AcquireToken request. From our example (there's no userId to pass in the request):
AcquireToken(B, C)
I have a question about Spring MVC controller scope and REST services. I have a couple of REST services which return a token in the response so I can later recreate the state of the application, but I don't want users to use the same token twice, so I've decided to save a unique identifier inside the token and also in the HttpServletRequest, so I can check it when I get the requests (a new identifier is generated on every request).
So, my questions are: 1) Is there any other way to be sure that a user will not use the same token more than once? (I also considered saving that identifier in the DB, but I would have a lot of queries to insert, delete, verify, etc.) 2) Is it OK for the controller that receives the requests to be a singleton, or should it be prototype-scoped? (Considering that the identifier is taken from the session and I don't want to mix it between different sessions.)
A few words on tokens that are valid only once
It's not possible to achieve this without keeping track of the tokens somewhere. This security scheme requires some trade-offs; deal with it.
Give the user a token and keep track of it on the server side, just like a whitelist (see the sketch after this list):
When a token is issued, add it to the white list.
When a request comes to the server with a token, check the white list and:
If the token is valid, accept the request and remove the token from the white list.
If the token is invalid, refuse the request by returning a proper status code such as 403.
Also, consider assigning an expiration date to the token and refuse any request that comes to the server with an expired token.
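A minimal in-memory sketch of such a whitelist with expiration (illustrative only; with more than one server instance you would need a shared store such as a database or distributed cache):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// One-time token whitelist: a token is accepted at most once and only before it expires.
public class TokenWhiteList {

    private static final class Entry {
        final long expiresAtMillis;

        Entry(long expiresAtMillis) {
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final ConcurrentMap<String, Entry> tokens = new ConcurrentHashMap<>();

    // Called when a token is issued.
    public void add(String token, long timeToLiveMillis) {
        tokens.put(token, new Entry(System.currentTimeMillis() + timeToLiveMillis));
    }

    // Called when a request comes in: true only for a known, unexpired token,
    // and the token is removed so it can never be used again.
    public boolean consume(String token) {
        Entry entry = tokens.remove(token);
        return entry != null && entry.expiresAtMillis > System.currentTimeMillis();
    }
}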
Regarding your performance concerns: Bear in mind that premature optimization is the root of all evil. You shouldn't optimize until you have a performance problem and you have proven that the performance problem comes from the way you store your tokens. You could start storing the tokens in the database and then consider a cache in memory, for example. But always be careful when fixing a problem that you currently don't have.
Working with JWT
If you go for JWT, there are a few Java libraries to issue and validate JWT tokens such as:
jjwt
java-jwt
jose4j
The jti claim should be used to store the token identifier on the token. When validating the token, ensure that it's valid by checking the value of the jti claim against the token identifiers you have on server side.
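As a sketch of that jti-based check (assuming the jjwt 0.11.x API; the in-memory set is just a stand-in for whatever server-side store you use):

import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import javax.crypto.SecretKey;

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import io.jsonwebtoken.security.Keys;

// Issue a JWT carrying a jti claim and accept it only once, by checking and
// removing the jti from a server-side whitelist.
public class OneTimeJwtService {

    private final SecretKey key = Keys.secretKeyFor(SignatureAlgorithm.HS256);
    private final Set<String> issuedTokenIds = ConcurrentHashMap.newKeySet();

    public String issue(String subject) {
        String jti = UUID.randomUUID().toString();
        issuedTokenIds.add(jti);                 // whitelist the token identifier
        return Jwts.builder()
                .setId(jti)                      // the jti claim
                .setSubject(subject)
                .signWith(key)
                .compact();
    }

    public boolean consume(String token) {
        Claims claims = Jwts.parserBuilder()     // also verifies the signature
                .setSigningKey(key)
                .build()
                .parseClaimsJws(token)
                .getBody();
        return issuedTokenIds.remove(claims.getId());   // valid only on first use
    }
}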
For the token identifier you could use UUID. In Java, it's as simple as:
String uuid = UUID.randomUUID().toString();
Since HttpSession#getId() is unique, you can use it to create a unique token:
// pseudo code
String token = httpSession.getId() + "-" + System.currentTimeMillis();
You can also create your own counter.
Here are my two techniques to prevent it:
Disable the submit button:
We can disable the submit button right before our function makes the HTTP request and enable it again once the HTTP response is received. This technique is effective for processes that take a long time to finish (more than 5 seconds), where the user might otherwise click again out of impatience to get the result. Additionally, we may show a loading box for a better experience.
Issue request token/id:
This technique is actually more complicated and difficult to implement, but a good framework (such as Spring Boot) makes it easier. Before we get to the code implementation, let's talk about the mechanism first:
When the form page is loaded, issue a new requestId
Put the issued requestId in an HTTP header before calling the backend service
The backend service checks whether the requestId has already been registered
If the requestId is already registered, we can mark the request as a violation
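A minimal Spring Boot sketch of that mechanism (the endpoints and the X-Request-Id header name are made up for illustration):

import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RequestIdController {

    private final Set<String> usedRequestIds = ConcurrentHashMap.newKeySet();

    // Step 1: the form page asks for a fresh requestId when it loads.
    @GetMapping("/request-id")
    public String issueRequestId() {
        return UUID.randomUUID().toString();
    }

    // Steps 2-4: the client sends the requestId in a header; a repeated id is rejected.
    @PostMapping("/submit")
    public ResponseEntity<String> submit(@RequestHeader("X-Request-Id") String requestId) {
        if (!usedRequestIds.add(requestId)) {
            return ResponseEntity.status(HttpStatus.CONFLICT).body("Duplicate request");
        }
        return ResponseEntity.ok("Accepted");
    }
}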
In our application, a new JWT token is returned in a cookie each time the user sends a request, even though the previous one still has plenty of lifetime left. If the user makes multiple requests in a short period of time, there exist multiple valid tokens, each with almost its full lifetime remaining. The browser is of course using the latest one, but someone may still use the previous ones to impersonate the user.
Is there a way to invalidate the previous token when dispatching a new one, or is the only choice to dispatch a new token only when there is not much lifetime left on the last one?
The only way to 'invalidate' tokens is by keeping track of them statefully.
This usually means doing something like keeping a key/value cache of tokens that are valid (and/or invalid) and checking incoming request tokens against these lists on each request.
The downside to doing things this way is that you lose a lot of the 'stateless' benefits of JWTs (since you are still checking a centralized store for token validity), but the benefit is that you can be more 'secure' by immediately revoking tokens you no longer want service-able.
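As an illustration of such stateful tracking for the "only the latest token is valid" case (the names are made up, and the tokens are assumed to carry a jti claim and a user id):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Per-user "latest token" registry: issuing a new token id for a user implicitly
// revokes the previous one, since only the most recent id is accepted.
public class LatestTokenRegistry {

    private final ConcurrentMap<String, String> latestTokenIdByUser = new ConcurrentHashMap<>();

    // Called whenever a new JWT (identified by its jti claim) is handed out for a user.
    public void register(String userId, String tokenId) {
        latestTokenIdByUser.put(userId, tokenId);   // overwrites, and thereby revokes, the previous id
    }

    // Called on each request: only the most recently issued token id passes.
    public boolean isCurrent(String userId, String tokenId) {
        return tokenId.equals(latestTokenIdByUser.get(userId));
    }
}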
One workaround is to have your access tokens be extremely short lived (5 minutes or so), to minimize any abuse.