I know that this topic has been discussed a lot, but I believe I've found a new variation of it: I have a Rails 4 application, upgraded from Rails 3, which has rails_ujs and the csrf_meta_tags set up correctly.
Once the root_url is loaded in the browser, a piece of JavaScript fires a GET and a PUT request, each to its respective API controller in the application. By the time those two API calls are fired, the session SHOULD already contain the _csrf_token. And this is true, most of the time. Keep reading.
The problem is that SOMETIMES, not always, we see InvalidAuthenticityToken exceptions for the PUT request (yes, I'm using protect_from_forgery :with => :exception on our API base controllers).
Analyzing the dump from exception_notification, I can see that the CSRF token is set correctly in the request header, but the most intriguing thing is that the session contains only the session_id. Everything else is gone, including the _csrf_token.
Remember: this is happening intermittently! So I believe that it must be some kind of race condition.
This app is hosted on Heroku and runs on Unicorn. I'm unable to reproduce the problem in my local environment. I've also read a lot of Rails code on GitHub trying to understand the flows in which Rails resets the session, but I could not find the answer, since all CSRF protection is set up correctly and the problem happens only intermittently.
It's also worth mentioning that we have not set config.secret_key_base yet. But since this problem happens only intermittently, I don't think that is the root cause.
Also, I believe it's worth mentioning that we have two controller hierarchies:
(1) all 'normal' application requests go through controllers that inherit from ApplicationController
(2) all API requests go through controllers that inherit from Api::BaseController, which inherits directly from ActionController::Base
I believe this controller scheme is fairly common...
The API endpoint for the GET request is rendering a json response. The API endpoint for the PUT request is returning head :ok.
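For concreteness, the setup looks roughly like this (the controller and action names below are illustrative, not our real ones):

class ApplicationController < ActionController::Base
  protect_from_forgery :with => :exception
end

class Api::BaseController < ActionController::Base
  protect_from_forgery :with => :exception
end

class Api::ThingsController < Api::BaseController
  # the GET endpoint
  def show
    render :json => { :id => params[:id] }
  end

  # the PUT endpoint
  def update
    head :ok
  end
end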
Well, I would love it if some Ruby on Rails expert could help with this.
Are you using the standard cookie-based session store? The cookie-based session store, last I looked, most definitely was subject to race conditions, especially around AJAX requests -- and those race conditions are kind of an inherent part of the design of a cookie-based session store, with no real way to fix them.
This post from 2011 describes a cookie store race condition that also involves the authenticity token, and may be similar to yours. Their solution was turning off the CSRF protection, for certain actions anyway, which doesn't sound like a great solution to me.
This post from 2014 outlines race conditions with the cookie session store, and suggests ActiveRecord or another server-side store as a solution. (As I write this, that URL is 404'ing, but it is available in Google's cache.)
As you can see from the example, the session cookie is updated on every request, regardless of whether the session was modified or not. Whichever response gets back to the client last, that's the cookie that will be used in the next call. For example, in our previous example, if get_current_result's response was slower than get_quiz's, then our cookie would have the correct data and the next call to update_response would have worked fine! So sometimes it will work and sometimes not, all depending on the internet gods. This type of race condition is no fun to deal with.
The implication of this is that using cookie storage for sessions when you are making multiple AJAX calls is just not safe. All information saved in the session might be wrong or nonexistent the next time you check. So what's the solution?
...
A better solution would be to use a server-side session store like ActiveRecord or memcache. Doing so prevents the session data from being reliant on client-side cookies. Session data no longer has to be passed between the client and the server, which means no more potential race conditions when two AJAX calls are made simultaneously!
I can't say for sure whether you are running into that problem, but it would be worth a try to switch to the ActiveRecord session store, and see if your problem goes away.
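For reference, the switch itself is small in Rails 4, where the ActiveRecord store lives in the activerecord-session_store gem (a minimal sketch; the session key name is just an example):

# Gemfile
gem 'activerecord-session_store'

# config/initializers/session_store.rb
Rails.application.config.session_store :active_record_store, :key => '_your_app_session'

# then create the sessions table:
#   rails generate active_record:session_migration
#   rake db:migrate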
Even the ActiveRecord-based session store has at times in the past been subject to race conditions. I am not sure whether the current implementation is, but those races are at least conceivably solvable, whereas the race conditions in the cookie-based store are fundamental.
Actually, the ActiveRecord store is probably subject to race conditions analogous to the cookie store's -- IF you have more than one app process running (or a multi-threaded app server), so that concurrent request handling is still possible. Still, such a race should be even rarer than the one with the cookie store, and it is theoretically solvable, though perhaps with some domain-specific logic, unlike the race condition with the cookie store, which is pretty much unsolvable if you are doing any async AJAX.
I'm writing this as an answer for better formatting, though maybe it should be a comment.
I ran into a similar problem. The root cause was that my app called current_user before protect_from_forgery executed. This is the current_user implementation in Devise:
def current_#{mapping}
  @current_#{mapping} ||= warden.authenticate(scope: :#{mapping})
end
And Devise has a setting, config.clean_up_csrf_token_on_authentication = true.
So the problem is that the CSRF token gets reset as soon as current_user is invoked, because it calls warden.authenticate. Then, when protect_from_forgery runs, a CSRF error is raised: your session is reset or an exception is thrown.
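If this is what's biting you, there are two ways out, sketched below (the filter name is hypothetical, and callback ordering semantics differ between Rails versions, so verify where verify_authenticity_token lands in your own chain): turn the Devise option off, or make sure nothing calls current_user before protect_from_forgery has run.

# option 1: config/initializers/devise.rb
Devise.setup do |config|
  # trade-off: the CSRF token is no longer rotated on sign-in
  config.clean_up_csrf_token_on_authentication = false
end

# option 2: declare protect_from_forgery before any filter that touches current_user
class Api::BaseController < ActionController::Base
  protect_from_forgery :with => :exception
  before_action :load_current_user   # hypothetical filter that calls current_user
end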
Hope this can help.
Related
I'm developing an ASP.NET MVC application and I'm planning to protect each non-GET request (POST, PUT, DELETE, etc.) with an AntiForgeryToken.
I've implemented an extension of the classical AntiForgery verification based on the [__RequestVerificationToken] sent in the header. This is because most of my calls are async ($.ajax()) and it turns out to be easier for me to send the hidden field's value that way.
Does it make sense to put one single @Html.AntiForgeryToken() in _Layout.cshtml (the template for all pages) and always refer to that one only?
I've tried to understand what would be different between this option and putting it in each form (which I don't use much, since my requests are pretty much all async), but I haven't managed to.
Can anyone clear this up for me?
Thanks
Lorenzo
You can put it in your _Layout.cshtml and generate a single token when the page is rendered, that's fine.
While there is a very slight security benefit to using a different token for every request, if your token has enough entropy (and the standard token generated by @Html.AntiForgeryToken() does), then it is practically infeasible for an attacker to guess the token even over the lifetime of a user session. So one token per user session is still considered secure in most cases.
Actually, trying to use a new token for each request leads to all kinds of bugs in a JavaScript-heavy application: the browser needs a non-negligible amount of time to actually change things like a cookie value or to send a request, so frequent AJAX requests will lead to race conditions and hard-to-debug failures around token mismatches.
ASP.NET MVC still focuses on traditional form-based applications in this regard, and while you can use it to prevent CSRF in modern JavaScript-heavy apps with some tweaks (like a custom attribute to verify a token sent in request headers), you do have to write some custom code to do that. Hopefully Microsoft will add built-in support in future versions.
UPDATE
After implementing the solution with @Html.AntiForgeryToken() directly in the template page (_Layout.cshtml), I found a possible problem tied to the use of custom Claims. The problem happens during re-validation of the UserIdentity. As a reference, I'll leave the link to another post in which I've been dealing with that; I added the workaround there for those who choose the same implementation.
Custom Claims lost on Identity re validation
Hope it helps!
I'm using CodeIgniter sessions for logging in users. For reasons that have always been mysterious to me, sometimes a user's session gets destroyed and they have to log in again.
Because CodeIgniter sessions are cookie-based, I assume I need to be looking at the browser to try to understand why the cookie got destroyed.
First of all, is that true? And if so, might someone suggest a method (PHP, JS, browser dev tools?) to log the errors that lead to each session getting destroyed?
I would try checking the session expiration setting in ./application/config/config.php and make sure it isn't something ridiculously low.
$config['sess_expiration'] = 7200;
There are many other potential causes for this behavior, all of which depend on your environment. For instance:
If your code runs on multiple servers behind a load balancer not configured for "sticky sessions", then you will (potentially) hit a new server for every request, causing your session to be recreated.
If your website utilizes multiple domains, your cookie will not be valid for all of them, only the one that created it.
But without knowing anything about your code or environment, I would recommend using Firebug or the Chrome developer tools to check your cookie from the browser while watching what is being requested and returned at the network layer.
Sorry for the noobish question; this is the first time I've tried to implement a REST interface (in PHP). Anyway, given the stateless nature of the HTTP protocol, what's the best practice to ensure that:
GET /user/{id}/friends
is always and only executed by the currently authenticated user? Are sessions usually used as the method to restrict REST access?
You can use HTTP sessions, which are essentially server-side state referenced by a cookie. They're usually OK, but there have been a lot of reports of session hijacking lately. So my answer, if you're really concerned about this, is to use HMAC. It's tricky to set up, but once it is, you can be sure that the message really did come from an authenticated user.
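To make that less abstract, here is the core of the idea as a rough sketch (in Ruby for brevity; PHP's hash_hmac works the same way, and all header and parameter names here are invented):

require 'openssl'

# client and server share a per-user secret, issued once at login
def signature_for(secret, http_method, path, timestamp)
  data = [http_method, path, timestamp].join("\n")
  OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha256'), secret, data)
end

# the client sends e.g. X-User-Id, X-Timestamp and X-Signature headers;
# the server looks up the user's secret and recomputes the signature
def authentic?(secret, given_signature, http_method, path, timestamp)
  return false if (Time.now.to_i - timestamp.to_i).abs > 300   # reject stale requests (replay window)
  expected = signature_for(secret, http_method, path, timestamp)
  expected == given_signature   # prefer a constant-time comparison in production
end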
I'm writing a web app that will be making requests via AJAX and would like to lock down those calls. After a little research, I am considering using some form of random token (string) to be passed back along with each request (a GUID?). Here are the important parts of my algorithm:
Assign a token to a JavaScript variable (generated server-side).
Also, store that token in a DB and give it a valid time period (e.g. 10 minutes).
When a call comes in, if the token has still not been used and is within its valid time window, allow it.
Return requested information if valid, otherwise, log the request and ignore it.
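For what it's worth, the server-side half of that algorithm is only a few lines. A rough sketch in Ruby/ActiveRecord terms (the AjaxToken model and its columns are invented for illustration):

require 'securerandom'

# steps 1 + 2: generate a random token and store it with an expiry
def issue_token
  token = SecureRandom.hex(32)   # 64 hex chars; effectively unguessable
  AjaxToken.create!(:value => token, :expires_at => Time.now + 600, :used => false)
  token
end

# steps 3 + 4: allow the call only if the token is unused and still valid, then burn it
def consume_token(value)
  record = AjaxToken.where(:value => value, :used => false)
                    .where('expires_at > ?', Time.now).first
  return false unless record
  record.update!(:used => true)   # single use: prevents replay
  true
end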
With an eye toward security, does this make sense? For the token, would a GUID work, or should it be something else? Is there a good way to encrypt variables in the request?
EDIT:
I understand that these AJAX requests wouldn't be truly "secure", but I would like to add basic security in the sense that I would like to prevent others from using the service I intend to write. This random token would be a basic, front-line defense against abusive calls. The data that would be requested (and even submitted to generate such data) is HIGHLY unlikely to be repeated.
Maybe I'm wrong in using a GUID... how about a randomly generated string (token)?
If you are doing this to trust code that you sent to the client browser, then change direction. You really don't want to trust user input, and that includes calls from JS that you sent to the browser. The logic on the server should be built so that nothing wrong can be done through it. That said, ASP.NET uses a signed field; you might want to go that way if absolutely necessary.
Expanding a bit:
ASP.NET tamper-proofs the ViewState, which is sent as an HTML hidden field (depending on the configuration). I am sure there are better links for reference, but it is at least mentioned in this one: http://msdn.microsoft.com/en-us/library/ms998288.aspx
validation. This specifies the hashing algorithm used to generate HMACs to make ViewState and forms authentication tickets tamper proof. This attribute is also used to specify the encryption algorithm used for ViewState encryption. This attribute supports the following options: SHA1 -- SHA1 is used to tamper proof ViewState and, if configured, the forms authentication ticket. When SHA1 is selected for the validation attribute, the algorithm used is HMACSHA1.
A link to the .NET class for that algorithm: http://msdn.microsoft.com/en-us/library/system.security.cryptography.hmacsha1.hmacsha1.aspx.
Update 2:
For tamper-proofing you want to sign the data (not encrypt it). Note that when using cryptography in general, you should really avoid using a custom implementation or algorithm. Regarding the steps, I would stick to:
Assign a token to a JavaScript variable (generated server-side). Include info that identifies the request and the exact date & time when it was issued. The signature will prove that the server-side application issued the data.
Identify double submits if appropriate.
That said, the reason ASP.NET validates the ViewState by default is that devs rely on the info coming in there as being handled only by the application, when they shouldn't. The same probably applies to your scenario: don't rely on this mechanism. If you want to evaluate whether someone can do something, use authentication + authorization. If you want to know the AJAX call is sending only valid options, validate them. Don't expose an API at a finer granularity than the one at which you can appropriately authorize the actions. This mechanism is just an extra measure, in case something slipped, not a real protection.
P.S. With the HMACSHA1 above, you would instantiate it with a fixed key.
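To make the fixed-key point concrete, here is the same sign-and-verify scheme sketched in Ruby rather than .NET (OpenSSL::HMAC standing in for HMACSHA1; the payload layout is invented):

require 'openssl'

KEY = 'secret-loaded-from-server-config'   # fixed key, never sent to the client

def issue(request_id)
  payload = "#{request_id}|#{Time.now.to_i}"   # identifies the request and when it was issued
  payload + '|' + OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'), KEY, payload)
end

def valid?(token, max_age = 600)
  request_id, issued_at, signature = token.split('|')
  payload = "#{request_id}|#{issued_at}"
  return false unless OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'), KEY, payload) == signature   # tamper check
  Time.now.to_i - issued_at.to_i <= max_age   # freshness check
end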
It really depends on what you're trying to accomplish by "security". If you mean preventing unauthorized use of the HTTP endpoints, there is very little you can do about it, since the user will have full access to the HTML and JavaScript used to make the calls.
If you mean preventing someone from sniffing the data in the AJAX requests then I would just use SSL.
A GUID used in the way that you're suggesting is really just reinventing a session id cookie.
"Securing" is kind of a vague term. What exactly are you trying to accomplish? Using a GUID is a perfectly fine way to prevent duplicate submissions of the same request, but that is all.
If the information being passed between the client and server is truly sensitive, you should do it over HTTPS. That's really the only answer as far as securing the actual communication is concerned.
Edit: To answer your question regarding whether a GUID is the "right" way - there is no right way to do what you're suggesting. Using any token, whether it's a GUID or something of your own creation, will not have any effect other than preventing duplicate submissions of the same request. That's it.
I am currently working on the authentication of an AJAX-based site, and was wondering if anybody had any recommendations on best practices for this sort of thing.
My original approach was a cookie based system. Essentially I set a cookie with an auth code, and every data access changed the cookie. As well, whenever there was a failed authentication, all sessions by that user were de-authenticated, to keep hijackers out. To hijack a session, somebody would have to leave themselves logged in, and a hacker would need to have the very last cookie update sent to spoof a session.
Unfortunately, due to the nature of AJAX, when making multiple requests quickly, they might come back out of order, setting the cookie wrong and breaking the session, so I need to reimplement.
My ideas were:
A decidedly less secure session-based method
Using SSL over the whole site (seems like overkill)
Using an iframe that is SSL-authenticated to do secure transactions (I just sorta assume this is possible, with a little bit of jQuery hacking)
The issue is not the data being transferred, the only concern is that somebody might get control over an account that is not theirs.
Personally, I have not found using SSL for the entire site (or most of it) to be overkill. Maybe a while ago when speeds and feeds were slower. Now I wouldn't hesitate to put any part of a site under SSL.
If you've decided that using SSL for the entire site is acceptable, you might consider just using the old "Basic Authentication", where the server returns a 401 response that causes the browser to prompt for username/password. If your application can live with this type of login, it works great for AJAX and all other accesses to your site, because the browser handles re-submitting requests with the appropriate credentials (and it is safe if you use SSL, but only if you use SSL -- don't use Basic auth over plain http!).
SSL is a must, to prevent transparent proxy connections that could be shared by several users. Then I'd simply check the incoming IP address against the one that authenticated.
Re-authenticate:
as soon as the ip address changes
after a timeout of more than n seconds without any request
individually on any important transaction
A common solution is to hash the user's session id and pass that in with every request to ensure the request is coming from a valid user (see this slideshow). This is reasonably secure from a CSRF perspective, but if someone were sniffing the traffic it could be intercepted. Depending on your needs, SSL is always going to be the most secure method.
What if you put a "generated" timestamp on each of the responses from the server, and the AJAX application always used the cookie with the latest timestamp?
Your best bet is using an SSL connection over a previously authenticated connection with something like Apache and/or Tomcat. Form-based authentication in either one, with a required SSL connection, gives you a secure connection. The webapp can then provide security and identity for the session, and the client-side AJAX doesn't need to be concerned with security.
You might try reading the book Ajax Security, by Billy Hoffman and Bryan Sullivan. I found it changed my way of thinking about security. There are very specific suggestions for each phase of Ajax.