Creating a local Token cache using the Geneva Framework - federated-identity

I haven't seen many Geneva-related questions here yet; I have posted this question in the Geneva Forum as well...
I'm working on a scenario where we have a WinForms app with a wide install base, which will be issuing frequent calls to various services hosted centrally by us throughout its operation.
The services are all using the Geneva Framework and all clients are expected to call our STS first to be issued with a token to allow access to the services.
Out of the box, using the ws2007FederationHttpBinding, the app can be configured to retrieve a token from the STS before each service call, but obviously this is not the most efficient approach, as we're almost doubling the effort of every service call.
Alternatively, I have implemented the code required to retrieve the token "manually" from the app and then pass the same pre-retrieved token when calling operations on the services (based on the WSTrustClient sample and help on the forum); that works well, so we do have a solution, but I believe it's not very elegant as it requires building the WCF channel in code, moving away from the wonderful WCF configuration.
I much prefer the ws2007FederationHttpBinding approach, whereby the client simply calls the service like any other WCF service, without knowing anything about Geneva, and the binding takes care of the token exchange.
Then someone (Jon Simpson) gave me [what I think is] a great idea - add a service, hosted in the app itself, to cache locally retrieved tokens.
The local cache service would implement the same contract as the STS; when receiving a request it would check whether a cached token exists, and if so would return it; otherwise it would call the 'real' STS, retrieve a new token, cache it and return it.
The client app could then still use ws2007FederationHttpBinding, but instead of having the STS as the issuer it would have the local cache;
This way I think we can achieve the best of both worlds - caching of tokens without service-specific custom code; our cache should be able to handle tokens for all RPs.
I have created a very simple prototype to see if it works, and - somewhat unsurprisingly, unfortunately - I am slightly stuck -
My local service (currently a console app) gets the request and - the first time around - calls the STS to retrieve the token, caches it and successfully returns it to the client, which subsequently uses it to call the RP. All works well.
The second time around, however, my local cache service tries to use the same token again, but the client side fails with a MessageSecurityException -
"Security processor was unable to find a security header in the message. This might be because the message is an unsecured fault or because there is a binding mismatch between the communicating parties. This can occur if the service is configured for security and the client is not using security."
Is there something preventing the same token from being used more than once? I doubt it, because when I reused the token as per the WSTrustClient sample it worked well. What am I missing? Is my idea possible? Is it a good one?
Here are the (very basic, at this stage) main code bits of the local cache -
static LocalTokenCache.STS.Trust13IssueResponse cachedResponse = null;
public LocalTokenCache.STS.Trust13IssueResponse Trust13Issue(LocalTokenCache.STS.Trust13IssueRequest request)
{
    if (TokenCache.cachedResponse == null)
    {
        Console.WriteLine("cached token not found, calling STS");
        //create proxy for the real STS
        STS.WSTrust13SyncClient sts = new LocalTokenCache.STS.WSTrust13SyncClient();
        //set credentials for the STS
        sts.ClientCredentials.UserName.UserName = "Yossi";
        sts.ClientCredentials.UserName.Password = "p#ssw0rd";
        //call Issue on the real STS
        STS.RequestSecurityTokenResponseCollectionType stsResponse = sts.Trust13Issue(request.RequestSecurityToken);
        //create the result object - a container type for the response returned, and what we need to return
        TokenCache.cachedResponse = new LocalTokenCache.STS.Trust13IssueResponse();
        //assign the STS response to the return value...
        TokenCache.cachedResponse.RequestSecurityTokenResponseCollection = stsResponse;
    }
    else
    {
        //a cached token exists - reuse it
    }
    //...and return
    return TokenCache.cachedResponse;
}

This is almost embarrassing, but thanks to Dominick Baier on the forum I now realise I've missed a huge point (I knew it didn't make sense! honestly! :-) ) -
A token gets retrieved once per service proxy, assuming it hasn't expired, so all I needed to do was reuse the same proxy - which I planned to do anyway but, rather stupidly, didn't do in my prototype.
In addition, I found a very interesting sample among the MSDN WCF samples - Durable Issued Token Provider - which, if I understand it correctly, uses a custom endpoint behaviour on the client side to implement token caching; very elegant.
I will still look at this approach as we have several services and so we could achieve even more efficiency by re-using the same token between their proxies.
So - two solutions, pretty much in front of my eyes; hope my stupidity helps someone at some point!

I've provided a complete sample for caching the token here: http://blogs.technet.com/b/meamcs/archive/2011/11/20/caching-sts-security-token-with-an-active-web-client.aspx

Related

Spring Security + JWT: How to enrich Authentication/Principal after successful login?

I’ve got a question which seems popular, but I couldn’t find the answer. Well there’s a lot of information about it but I’m not sure what the best way is. So here’s the scenario.
We have a Single Page Application (SPA) and a RESTful Web Service (API). We use an external authentication/authorization service provider via OAuth2/JWT. But I need to persist the user ID (provided by the external authentication provider) in the database on the server side after successful login. I also need to enrich the Authentication/Principal object in the security context after successful login (for example by adding the email address).
There's a lot on the web about this scenario. But we already have an SDK for authentication/authorization and it works perfectly (no custom code, etc.). I just need to add something to the authentication object. What is the correct way to do it? Thanks.
For the record, this is what we did:
As I said, there's already an SDK doing all the heavy lifting of the authentication mechanics. We just need to enrich the authentication object after successful authentication. So we wrapped the AuthenticationProvider (implemented in the SDK) in our own implementation (inspired by PreAuthenticatedAuthenticationProvider) and, after successful authentication, we enriched the result using our UserDetails implementation (inspired by PreAuthenticatedGrantedAuthoritiesUserDetailsService). The rest was straightforward.
PS: please let me know if you don't like the idea.
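For illustration, here is a minimal sketch of the wrapping approach described above - the SDK's provider is injected as the delegate, while UserService and EnrichedUser are hypothetical names standing in for whatever persists and represents the enriched user:
import org.springframework.security.authentication.AuthenticationProvider;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.web.authentication.preauth.PreAuthenticatedAuthenticationToken;

public class EnrichingAuthenticationProvider implements AuthenticationProvider {

    private final AuthenticationProvider delegate; // the provider supplied by the SDK
    private final UserService userService;         // hypothetical service that persists/loads the user

    public EnrichingAuthenticationProvider(AuthenticationProvider delegate, UserService userService) {
        this.delegate = delegate;
        this.userService = userService;
    }

    @Override
    public Authentication authenticate(Authentication authentication) throws AuthenticationException {
        // Let the SDK's provider do the heavy lifting of the actual authentication.
        Authentication result = delegate.authenticate(authentication);
        if (result == null || !result.isAuthenticated()) {
            return result;
        }
        // Persist the external user id and build an enriched principal (e.g. adding the email).
        EnrichedUser principal = userService.loadOrCreate(result.getName());
        PreAuthenticatedAuthenticationToken enriched = new PreAuthenticatedAuthenticationToken(
                principal, result.getCredentials(), result.getAuthorities());
        enriched.setDetails(result.getDetails());
        return enriched;
    }

    @Override
    public boolean supports(Class<?> authentication) {
        return delegate.supports(authentication);
    }
}
The wrapper would then be registered with Spring Security in place of the SDK's provider, so everything else in the chain stays untouched.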

sails.js session variables getting lost when API is accessed from android

I have created a REST API for an android application. There are certain req.session variables that I set at certain points and use them in the policies for further steps. Everything works fine when I access the API from a REST client like POSTMAN.
However, when it is accessed from a native android app, the req.session values that I set in one step are lost in the next step.
Any idea why this might be happening, and what might be the workaround?
Sessions do not work with requests sent from an untrusted client (in this case the Android device).
You should consider using the OAuth strategy to accomplish your goal. It's a bit complicated.
Or simply generate an accessToken for each successful login and return it to the client. For further requests, the client must attach this accessToken to the request (usually in a header).
This is a good SO question for the same issue: How to implement a secure REST API with node.js

Correct way to use a Google Apps Marketplace service account to connect to Gmail IMAP and other services

One of the features of our Marketplace app makes use of accessing the user's Gmail account via IMAP. We are using the google-api-java-client and google-oauth-java-client libraries and code similar to this example in the java-gmail-imap project as follows:
GoogleCredential credential = new GoogleCredential.Builder()
    .setTransport(HTTP_TRANSPORT)
    .setJsonFactory(JSON_FACTORY)
    .setServiceAccountId(SERVICE_ACCOUNT_ID)
    .setServiceAccountScopes(Arrays.asList(GMAIL_SCOPE))
    .setServiceAccountPrivateKey(PRIVATE_KEY)
    .setServiceAccountUser(emailAddress)
    .build();
credential.refreshToken();
We are then using code based on the examples at https://code.google.com/p/google-mail-oauth2-tools to make the IMAP connection e.g.
IMAPStore imapStore = OAuth2Authenticator.connectToImap("imap.googlemail.com",
993, emailAddress, credential.getAccessToken(), false);
The majority of the time this appears to work correctly, however we are seeing that for a small but significant number of requests the call to Google made by refreshToken() fails with an HTTP 500 error and an HTML response where the JSON would normally be returned e.g.
<p class="large"><b>500.</b> <ins>That's an error.</ins></p>
<p class="large">The server could not process your request.
<ins>That's all we know.</ins></p>
We were advised by a developer advocate at Google that refresh tokens are not supported for service accounts and that we should be using an approach like the one in this example.
However, it seems that without the call to refreshToken() the accessToken is not populated on the credential object, which then results in a NullPointerException when we call OAuth2Authenticator.connectToImap.
From the source for GoogleCredential it did seem that executeRefreshToken() is overridden to handle service accounts, i.e. instead of performing a refresh it simply requests a new token, and this bit of code in Credential then handles populating the access token:
TokenResponse tokenResponse = executeRefreshToken();
if (tokenResponse != null) {
    setFromTokenResponse(tokenResponse); ....
We were unsure whether we need to enclose our call to refreshToken() in a retry loop to work around the intermittent 500 errors or whether we need to make other changes to our code to follow the recommended approach for this scenario.
Can anyone advise?
I use the java-gmail-imap example code in production (but it is only used to display an inbox in our University portal; there isn't much interaction that would require me to reuse the same refresh token, for instance).
Depending on your usage, I wonder if in your case some kind of throttling is coming into play (I've read in places that Gmail can occasionally throttle access).
Elsewhere I've seen Google APIs talk about making retries using an exponential backoff algorithm.
You have to be a little careful when comparing the usage of OAuth 2.0 with the other Google Service APIs and Gmail. Gmail is special in that it uses XOAUTH2. That said, I've seen other Google APIs that appear to need the refreshToken call. The documentation is a bit unclear and says things like "Refresh the access token, if necessary" (as you say, it doesn't seem to work without this step, but I haven't done any experimentation with re-using refresh tokens via credential.setRefreshToken(String refreshToken)).
I'd be interested to hear how you get on.
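If retries do turn out to be necessary, a rough sketch of wrapping refreshToken() in an exponential-backoff loop might look like the following (the attempt count and delays are arbitrary assumptions, not anything recommended by Google):
import java.io.IOException;
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;

class CredentialRefresher {
    // Hedged sketch: retry credential.refreshToken() with exponential backoff plus a little jitter.
    static void refreshWithBackoff(GoogleCredential credential) throws IOException, InterruptedException {
        IOException last = null;
        long delayMillis = 1000;
        for (int attempt = 0; attempt < 5; attempt++) {
            try {
                if (credential.refreshToken()) {
                    return; // the access token is now populated on the credential
                }
            } catch (IOException e) {
                last = e; // e.g. the intermittent 500 surfaced as an exception
            }
            Thread.sleep(delayMillis + (long) (Math.random() * 250)); // back off before the next attempt
            delayMillis *= 2;
        }
        throw last != null ? last : new IOException("refreshToken() did not yield an access token");
    }
}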

Protect Web API from unauthorized applications

I am working on a web page that uses a lot of AJAX to communicate with the server. The server, in turn, has an extensive REST/JSON API exposing the different operations called by the web client.
This web site is used by both anonymous and authenticated users. As you might expect, the web service calls issued by authenticated users require authentication, and are thus protected from unauthorized users or applications.
However, the web site has a lot of features that require no authentication, and some of these make use of anonymous web services. The only thing I am using to prevent outsiders from calling these web services is a CSRF token. I know, the CSRF token is not very useful in this regard... with some time in hand, you can figure out how to consume the web services even if they use a CSRF token.
Of course, you can use a CAPTCHA to prevent applications or bots from autonomously using your web service. However, any human will be able to use it.
Sharing a secret key between client and server, on the other hand, would be useless, because any outsider could read it from the web page source code.
I would like to make these web services as difficult as possible for any third-party application to invoke. What would you do besides using the CSRF token? It sounds a little stupid, but hey, maybe it is stupid and I'm just wasting my time.
Note: given this application uses a browser and not an "executable" as the client, this question is irrelevant to the discussion. I cannot use a secret between server and client (not to my knowledge, at least)
I would take a few steps.
Force https on the site. Automatically redirect any incoming http requests to https ones (the RequireHttps attribute is handy for this)
Each page needs to (securely, hence the https) send a one-time use token to the client, to be used for the page. The script running on the client can hold this in the page memory. Any request coming back sends a hashed & salted response, along with the nonce salt. The server can repeat the steps with the saved token + salt and hash to confirm the request. (much like explunit's answer above)
(It's worth noting that the secure request from a client isn't being authenticated from a user account, merely a token sent with the full page.)
The definition of one-time could be either per session or per page load, depending on your security vs convenience preference. Tokens should be long and expire fairly quickly to frustrate attackers.
The SSL + Hash(token + nonce) should be enough for your needs.
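For illustration, a minimal sketch of the server-side check described above, assuming the page token is kept per session and the client sends its hash Base64-encoded (all names here are made up):
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

class PageTokenCheck {
    // Recompute the hash from the token saved for this page/session plus the nonce salt the
    // client sent, and compare it with the hash the client sent alongside the request.
    static boolean isValid(String savedPageToken, String nonceSalt, String clientHashBase64) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        byte[] expected = sha.digest((savedPageToken + nonceSalt).getBytes(StandardCharsets.UTF_8));
        return MessageDigest.isEqual(expected, Base64.getDecoder().decode(clientHashBase64));
    }
}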
This is interesting. Below is a crazy suggestion; remember, your question is equally crazy.
Your website, once opened through a browser, should open a long-polling connection (Comet programming). This creates a unique session between the browser and the server. When your JS makes the AJAX call, send some token (a unique token every time) to the server through the long-polling channel, and have the AJAX request send the same token. At the server, take the AJAX token and check whether you have a matching token in the long-polling session; if yes, fulfil the request. Any coder can break this, but it won't be easy. Chances are the freeloaders won't even see this second piece of Comet code, and you can implement the Comet code in such a way that it is not easy to detect or understand. When they call your service without it, send a 'Service Unavailable' message; they will be confused. Also make the Comet code use HTTPS.
You can also check how long that long-polling connection has been open. If the session was just opened and you get an AJAX call right away, you can assume it is a third-party call. It depends on your website flow: if your AJAX call normally happens one second after page load, you can check for that pattern on the server side.
Anyone coding against your public API will run into one or two hidden checks that they won't even know about, and even if they do, they may be discouraged by all the extra coding they have to do.
You might have an easier problem than the one described in the linked question since you don't need to distribute a binary to the users. Even if your app is open source, the HMAC/signature key (in the "Request Signatures" part of that answer) can be controlled by an environment/configuration setting.
To summarize:
The secret key is not actually sent between client and server. Rather, it's used to sign the requests
Be sure that the requests include some unique/random element (your CSRF key probably suffices) so that two requests for the same API data are not identical.
Sign the request with the secret key and append the signature to the request. You linked to a PHP question, but it's not clear what language you're using. In .Net I would use an HMAC class such as HMACSHA256.
On the API server-side use the same HMAC object to verify that the request was signed with the same secret key.
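The answer mentions .NET's HMACSHA256; purely for illustration, here is the same signing idea sketched in Java with javax.crypto (the canonical-string format and parameter names are assumptions):
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class RequestSigner {
    // Sketch of the signing step; the key itself is never sent with the request,
    // only the resulting signature is appended (e.g. as a header).
    static String sign(String secretKey, String method, String path, String csrfToken) throws Exception {
        String canonical = method + "\n" + path + "\n" + csrfToken; // includes the unique/random element
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] signature = mac.doFinal(canonical.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(signature);
    }
}
The API server rebuilds the same canonical string from the incoming request, recomputes the HMAC with its own copy of the key, and accepts the request only if the two values match.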
Maybe you could use counters to keep track of conversations. Only the server and clients will be able to predict the next iteration in a conversation. This way, I think, you can prevent third-party applications from impersonating someone (just an idea, though).
At the beginning, they start talking at some iteration (i=0, for example).
Every time the client requests something, the counter is incremented by some number in both the server side and the client (i=i+some_number).
And, after a few minutes of no communication, they both know they have to reset the counter (i=0).
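A purely illustrative sketch of that counter idea (the step size and idle timeout below are arbitrary):
import java.time.Duration;
import java.time.Instant;

class ConversationCounter {
    private static final long STEP = 7;                           // increment known only to both sides
    private static final Duration IDLE_RESET = Duration.ofMinutes(3);

    private long i = 0;
    private Instant lastSeen = Instant.now();

    synchronized boolean validateAndAdvance(long clientValue) {
        if (Duration.between(lastSeen, Instant.now()).compareTo(IDLE_RESET) > 0) {
            i = 0;                                                 // both sides reset after inactivity
        }
        lastSeen = Instant.now();
        if (clientValue != i + STEP) {
            return false;                                          // unexpected value: likely a third party
        }
        i += STEP;                                                 // stay in sync with the client
        return true;
    }
}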
This is just an idea based on the concept of RSA, plus placing fraud detection on your system. The risk from authorized users is minimal, although they can attempt to make anonymous calls to your web service too.
For unauthorised users: for each web-service call, generate a token, say using RSA, which changes after some time (configurable, say 30 minutes). This way prediction of the code is minimised; I have not heard of an RSA collision so far. Send this token back to the user for his browser session. For further security, we might want to attach a session id to the RSA token; since session ids are unique, new anonymous calls would require a new session id.
Calls can be tracked using an auditing mechanism, and each web service can have a different RSA setup. How the fraud-detection algorithm would work is a challenge in itself.
For authorized users:
Every user should be tracked by his IP address using the header block. The same RSA token principle can be applied.
The solution is very vague but worth considering.

Can you help me understand this? "Common REST Mistakes: Sessions are irrelevant"

Disclaimer: I'm new to the REST school of thought, and I'm trying to wrap my mind around it.
So, I'm reading this page, Common REST Mistakes, and I've found I'm completely baffled by the section on sessions being irrelevant. This is what the page says:
There should be no need for a client to "login" or "start a connection." HTTP authentication is done automatically on every message. Client applications are consumers of resources, not services. Therefore there is nothing to log in to! Let's say that you are booking a flight on a REST web service. You don't create a new "session" connection to the service. Rather you ask the "itinerary creator object" to create you a new itinerary. You can start filling in the blanks but then get some totally different component elsewhere on the web to fill in some other blanks. There is no session so there is no problem of migrating session state between clients. There is also no issue of "session affinity" in the server (though there are still load balancing issues to continue).
Okay, I get that HTTP authentication is done automatically on every message - but how? Is the username/password sent with every request? Doesn't that just increase attack surface area? I feel like I'm missing part of the puzzle.
Would it be bad to have a REST service, say, /session, that accepts a GET request, where you'd pass in a username/password as part of the request, and returns a session token if the authentication was successful, that could be then passed along with subsequent requests? Does that make sense from a REST point of view, or is that missing the point?
To be RESTful, each HTTP request should carry enough information by itself for its recipient to process it to be in complete harmony with the stateless nature of HTTP.
Okay, I get that HTTP authentication is done automatically on every message - but how?
Yes, the username and password is sent with every request. The common methods to do so are basic access authentication and digest access authentication. And yes, an eavesdropper can capture the user's credentials. One would thus encrypt all data sent and received using Transport Layer Security (TLS).
Would it be bad to have a REST service, say, /session, that accepts a GET request, where you'd pass in a username/password as part of the request, and returns a session token if the authentication was successful, that could be then passed along with subsequent requests? Does that make sense from a REST point of view, or is that missing the point?
This would not be RESTful since it carries state; it is, however, quite common, since it's a convenience for users - a user does not have to log in each time.
What you describe in a "session token" is commonly referred to as a login cookie. For instance, if you try to login to your Yahoo! account there's a checkbox that says "keep me logged in for 2 weeks". This is essentially saying (in your words) "keep my session token alive for 2 weeks if I login successfully." Web browsers will send such login cookies (and possibly others) with each HTTP request you ask it to make for you.
It is not uncommon for a REST service to require authentication for every HTTP request. For example, Amazon S3 requires that every request have a signature that is derived from the user credentials, the exact request to perform, and the current time. This signature is easy to calculate on the client side, can be quickly verified by the server, and is of limited use to an attacker who intercepts it (since it is based on the current time).
Many people don't understand REST principles very clearly. Using a session token doesn't always mean you're stateful: the reason to send a username/password with each request is only authentication, and the same goes for sending a token (generated by the login process) just to decide whether the client has permission to request data. You only violate REST conventions when you use either the username/password or the session token to decide what data to show!
Instead, you should use them only for authentication (whether to return the data or not).
In your case I say yes, this is RESTy, but try to avoid using native PHP sessions in your REST API and instead generate your own hashed tokens that expire after a determined period of time!
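One possible shape of such self-managed expiring tokens, sketched in Java purely for illustration (the storage, token format and 30-minute lifetime are assumptions; the same idea applies in PHP):
import java.security.SecureRandom;
import java.time.Instant;
import java.util.Map;
import java.util.Base64;
import java.util.concurrent.ConcurrentHashMap;

class TokenIssuer {
    private static final SecureRandom RANDOM = new SecureRandom();
    private final Map<String, Instant> tokens = new ConcurrentHashMap<>(); // token -> expiry

    String issue() {
        byte[] raw = new byte[32];
        RANDOM.nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        tokens.put(token, Instant.now().plusSeconds(30 * 60)); // e.g. 30-minute lifetime
        return token; // returned to the client after a successful login
    }

    boolean isValid(String token) {
        Instant expiry = tokens.get(token);
        return expiry != null && expiry.isAfter(Instant.now()); // reject unknown or expired tokens
    }
}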
No, it doesn't miss the point. Google's ClientLogin works in exactly this way, with the notable exception that the client is instructed to go to the "/session" using an HTTP 401 response. But this doesn't create a session, it only creates a way for clients to (temporarily) authenticate themselves without passing the credentials in the clear, and for the server to control the validity of these temporary credentials as it sees fit.
Okay, I get that HTTP authentication is done automatically on every message - but how?
"Authorization:" HTTP header send by client. Either basic (plain text) or digest.
Would it be bad to have a REST service, say, /session, that accepts a GET request, where you'd pass in a username/password as part of the request, and returns a session token if the authentication was successful, that could be then passed along with subsequent requests? Does that make sense from a REST point of view, or is that missing the point?
The whole idea of a session is to make stateful applications using a stateless protocol (HTTP) and a dumb client (the web browser), by maintaining the state on the server's side. One of the REST principles is "Every resource is uniquely addressable using a universal syntax for use in hypermedia links". Session variables are something that cannot be accessed via a URI. A truly RESTful application would maintain state on the client's side, sending all the necessary variables over HTTP, preferably in the URI.
Example: search with pagination. You'd have a URL of the form
http://server/search/urlencoded-search-terms/page_num
It has a lot in common with bookmarkable URLs.
I think your suggestion is OK if you want to control the client session lifetime. RESTful architecture encourages you to develop stateless applications; as #2pence wrote, "each HTTP request should carry enough information by itself for its recipient to process it to be in complete harmony with the stateless nature of HTTP".
However, that is not always the case; sometimes the application needs to know when a client logs in or logs out and to maintain resources such as locks or licenses based on this information. See my follow-up question for an example of such a case.
