So, in the case of applications where security is of great importance, how would you implement the challenge-question idea? That is, you would:
Detect if the computer IP has changed and hence ask for the challenge question.
Detect if the cookie is missing.
Detect if the computer name is different.
Some combinations of the methods above?
I am currently working on a forex platform in ASP.NET/C# and thinking about how to implement this feature for best results. I think the best (and only) way will be to check for a cookie change: if I rely on the IP, the IP might be assigned dynamically by the client's ISP, and if I rely on the computer name, that isn't great either, since the computer might be used by more people than the user in question. Of course, if I rely on the cookie, the browser might also be used by more than one person, but that's why this is an additional security measure and not the actual username/password authentication.
Other than that, getting the computer name (if that's even possible?) plus detecting a cookie change seems to be the best method. I am tagging this as C#/Java since the two are very common these days when it comes to authentication and security.
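For reference, here is a minimal sketch of the per-device cookie check I have in mind, in ASP.NET Core/C#. The cookie name and the IDeviceTokenStore interface are illustrative placeholders only, not a prescribed design:

```csharp
using System;
using System.Security.Cryptography;
using Microsoft.AspNetCore.Http;

// Sketch: issue a random per-device token in a long-lived cookie at login,
// store it (ideally hashed) against the user, and ask the challenge question
// whenever the cookie is absent or unknown.
public static class DeviceCookie
{
    private const string CookieName = "DeviceTokenCookie"; // hypothetical name

    public static bool IsKnownDevice(HttpRequest request, IDeviceTokenStore store, string userId)
    {
        return request.Cookies.TryGetValue(CookieName, out var token)
            && store.IsValidForUser(userId, token);
    }

    public static void RegisterDevice(HttpResponse response, IDeviceTokenStore store, string userId)
    {
        var token = Convert.ToBase64String(RandomNumberGenerator.GetBytes(32));
        store.SaveForUser(userId, token); // persist server-side, e.g. as a hash

        response.Cookies.Append(CookieName, token, new CookieOptions
        {
            HttpOnly = true,
            Secure = true,
            Expires = DateTimeOffset.UtcNow.AddYears(1)
        });
    }
}

// Placeholder abstraction for wherever the tokens are stored.
public interface IDeviceTokenStore
{
    bool IsValidForUser(string userId, string token);
    void SaveForUser(string userId, string token);
}
```

At sign-in, if IsKnownDevice returns false, the challenge question (and possibly an alert email) would come before RegisterDevice is called.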
Thanks!
One thing Facebook did that I thought was good: you can enable an option to have them put a cookie in your browser, unique for each computer you use. Then if someone without that cookie logs in to your account, they send you an email letting you know. I think they also geolocate the source IP of the unknown computer and put it in the email, so if you live in the US, you wouldn't expect a login from Russia. Not everyone accepts cookies, but for those who do, this optional feature is great, and financial firms should do it too.
My bank (and many others) relies on some form of constant two-factor auth. It could be as simple as your best friend's name; or, as with my online broker, high-value accounts over a certain balance threshold get a time-based password token. You must log in first with your password, and then with the token number.
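For context, a time-based token like that is typically a TOTP code (RFC 6238). A minimal C# sketch of verifying one, assuming a shared secret has already been provisioned to the user's token or authenticator app, might look roughly like this:

```csharp
using System;
using System.Security.Cryptography;

static class Totp
{
    // Computes a 6-digit TOTP code (RFC 6238) for the given secret and time.
    public static string Compute(byte[] secret, DateTimeOffset time, int stepSeconds = 30)
    {
        long counter = time.ToUnixTimeSeconds() / stepSeconds;
        byte[] counterBytes = BitConverter.GetBytes(counter);
        if (BitConverter.IsLittleEndian) Array.Reverse(counterBytes); // big-endian per RFC 4226

        using var hmac = new HMACSHA1(secret);
        byte[] hash = hmac.ComputeHash(counterBytes);

        int offset = hash[^1] & 0x0F;
        int binary = ((hash[offset] & 0x7F) << 24)
                   | (hash[offset + 1] << 16)
                   | (hash[offset + 2] << 8)
                   | hash[offset + 3];

        return (binary % 1_000_000).ToString("D6");
    }

    // Accept the current step and the previous one to tolerate a little clock drift.
    public static bool Verify(byte[] secret, string code, DateTimeOffset now)
    {
        return code == Compute(secret, now) || code == Compute(secret, now.AddSeconds(-30));
    }
}
```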
Most financial sites also use a picture hosted on their site that you choose to have displayed at your password login. This helps reduce the risk of phishing losses.
I am developing an ASP.NET Core web application with user management functionality. My question is about the email address change algorithm. Almost every web app I have seen has the following flow:
User is authorized
User requests an email address change
User receives a message at the new mailbox with a confirmation link
User clicks the link and the email address is updated
But I think this algorithm might be a bit insecure, and that is what I want to discuss here.
How about this flow:
User is authorized
User requests an email address change
User receives a message at the old mailbox with a confirmation link
User receives a message at the new mailbox with a second confirmation link
User clicks the link and the email address is updated
With this additional step in the middle of the algorithm, things may be much better from a security perspective, but is it too complex? Which algorithm do you think I should implement, and which would you prefer if you were in my shoes?
The second option might sound great, and it's not too much of a headache to implement either, but I would stick with the first approach for a few reasons:
Common workflow pattern
Since the backend can be written in many languages by various developers, following a common pattern keeps things standard when we need some kind of migration, or when a new developer takes over maintenance. If the project doesn't require an ultra-secure authentication flow, the simplicity of the first approach is enough.
User convenience perspective
Think about when a user actually wants to change their email address. I registered my Facebook account long ago using a Yahoo mailbox that is no longer active, and I need to switch to a Gmail one. What's the point of sending a confirmation back to the old address? It's cumbersome, and in that case I can do nothing except get some help from the staff.
I totally agree with the second approach from a security angle, but it isn't suitable for most cases; only implement it if the project has such a requirement. Even then, I would suggest not doing it that way: build something like a sub-admin role instead and grant permission to someone responsible, the way Google's enterprise email designates admin accounts to step in if anything goes wrong with a user account. A product with that kind of security requirement isn't going to serve a massive user base anyway.
The intention of the whole flow
The user is authorized first, right? That means we have already identified who the user is and what she is allowed to do. Imagine booking a hotel room and then asking to change to another one for some reason. What's the point of proving I booked my own room, when we both already know that's a fact? Kind of weird, right?
In conclusion, I think we shouldn't mess with a pattern that has become common and widely acknowledged, unless we have special requirements, the project has something unique to satisfy, and we, as developers, consider the change reasonable.
The main problem with this approach is: what happens if the user no longer has access to their original email account? Perhaps it was a work/school/uni account that they no longer have, or perhaps they've just forgotten their password or otherwise lost access to it.
With your second approach, they are not going to be able to update to the new account, because they'll never receive the first confirmation link.
How about the following approach instead:
User requests an email change.
Require the user to re-authenticate with their current password (just like when they change their password).
Send a confirmation link to their new email.
Send a notification to their old email, with the details of the change, and instructions of what to do if they didn't initiate the change.
User clicks the link to update or contacts your support to say their account has been compromised.
This way you still provide them with an alert that someone is trying to change their email (and potentially a means to stop it), but a user who has lost access to their old account will still be able to update their email.
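To make that concrete, here is a rough ASP.NET Core sketch of the flow; IAccountService, IEmailSender and the token helper are placeholder abstractions, not a specific Identity API:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Authorize]
[ApiController]
public class EmailChangeController : ControllerBase
{
    private readonly IAccountService _accounts;
    private readonly IEmailSender _email;

    public EmailChangeController(IAccountService accounts, IEmailSender email)
    {
        _accounts = accounts;
        _email = email;
    }

    [HttpPost("account/change-email")]
    public async Task<IActionResult> RequestChange(string currentPassword, string newEmail)
    {
        var user = await _accounts.GetCurrentUserAsync(User);

        // 1. Re-authenticate with the current password.
        if (!await _accounts.CheckPasswordAsync(user, currentPassword))
            return Unauthorized();

        // 2. The confirmation link goes to the NEW address; the change only
        //    happens when that link is clicked.
        var token = await _accounts.CreateEmailChangeTokenAsync(user, newEmail);
        await _email.SendAsync(newEmail, "Confirm your new email address",
            $"https://example.com/account/confirm-email?token={token}");

        // 3. A notification (not a confirmation) goes to the OLD address, with
        //    instructions to contact support if the change wasn't requested.
        await _email.SendAsync(user.Email, "Your email address is being changed",
            $"A request was made to change your email to {newEmail}. " +
            "If this wasn't you, contact support immediately.");

        return Accepted();
    }
}

// Placeholder abstractions for the sketch above.
public interface IAccountService
{
    Task<AppUser> GetCurrentUserAsync(System.Security.Claims.ClaimsPrincipal principal);
    Task<bool> CheckPasswordAsync(AppUser user, string password);
    Task<string> CreateEmailChangeTokenAsync(AppUser user, string newEmail);
}

public interface IEmailSender
{
    Task SendAsync(string to, string subject, string body);
}

public record AppUser(string Id, string Email);
```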
As part of an application my users can create documents with embedded images/files/text etc. Viewing and editing this content requires the user to log in. At the moment the images and files though are delivered as permanent links so if those links are shared any non-authenticated user can access them forever.
I would like to make these files secure. My initial thought was to use the login token and user's id to check if they have access and only deliver the files if they do. But then I started working on it and it seems the most practical solution would involve generating a link that will expire at some point in the future. This doesn't remove the exposure to unauthenticated access but maybe reduces it enough.
The questions that come to mind are:
Is there a common approach or a few options on how this should be implemented?
I've seen URLs returned with expiration periods
Google Docs seems to do something more sophisticated for its embedded images, but I can't tell what
Others?
Basic design points?
Pros/Cons of each?
Yes, it reduces the access to a fixed time window, but it still theoretically allows unauthenticated access, so a security professional will claim it has no authentication. This kind of timed expiry link is usually used to safeguard one-time unauthenticated access, such as a password reset (along with an expiring token independent of the time).
What is your goal? From whom are you trying to protect the data? Is it users who already have access to the files, whose access you want to limit with an expiry time? From the question, you need to secure access to the files/documents, which contain text and embedded images, from everyone. You are right about the timed expiry design: it will not guarantee authentication or integrity of the document, and if it is served over non-secure HTTP it will not even protect the document's integrity from a potential adversary.
You can use cookies (secure cookies) over HTTPS. As long as the user has a non-expired cookie, allow access to the files/documents. The cookie approach needs distributed cookie management if you host the solution on multiple boxes behind a reverse proxy. Cross-site scripting is a threat, but most major web application providers still use cookie-based solutions. Note that cookies break the REST nature of the web application.
Another approach (similar to cookies) is to generate authenticated tokens tied to the user/document that grant access for N attempts within a time period set when the token is generated. This method has to be used over HTTPS to avoid unwanted listeners.
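As a concrete illustration of that token idea, here is a rough C# sketch of an HMAC-signed, expiring link; the route shape, parameter names and secret handling are assumptions for illustration only:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class SignedLinks
{
    // Builds a link that is valid only for a limited lifetime and only for this file/user pair.
    public static string Create(string fileId, string userId, TimeSpan lifetime, byte[] secret)
    {
        long expires = DateTimeOffset.UtcNow.Add(lifetime).ToUnixTimeSeconds();
        string payload = $"{fileId}|{userId}|{expires}";

        using var hmac = new HMACSHA256(secret);
        string sig = Convert.ToHexString(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));

        return $"/files/{fileId}?user={userId}&expires={expires}&sig={sig}";
    }

    // Called by the file-serving endpoint before streaming the file.
    public static bool Validate(string fileId, string userId, long expires, string sig, byte[] secret)
    {
        if (DateTimeOffset.UtcNow.ToUnixTimeSeconds() > expires) return false; // link has expired

        string payload = $"{fileId}|{userId}|{expires}";
        using var hmac = new HMACSHA256(secret);
        string expected = Convert.ToHexString(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));

        // Constant-time comparison to avoid leaking the signature via timing.
        return CryptographicOperations.FixedTimeEquals(
            Encoding.UTF8.GetBytes(expected), Encoding.UTF8.GetBytes(sig));
    }
}
```

Validation needs no server-side state beyond the secret, which is what makes this cheaper than an always-changing link, but anyone who obtains the URL can still use it until it expires.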
An always-changing link is very costly to manage and does not scale over time, because it is too much state to track, and an application crash makes it even more costly. Redirecting to authentication is a safe bet for you, provided you already have cookie management in place or only have one application instance to take care of.
Or you can use HTTP digest authentication, provided your framework supports it, so you do not have to worry about cookie hell. Note that you may need to write some client-side JavaScript depending on your use case.
I have been using reCAPTCHA in my apps' registration forms, and I have seen a lot of examples of CAPTCHAs in signup forms. My question: what if I implement a custom CAPTCHA where, when a user registers, I send a confirmation email containing an auto-generated code/passphrase/Turing test converted into an image with some distortion effects applied? Since we are sending a confirmation email anyway, why not use it for the Turing test and get rid of the CAPTCHA in the form?
I understand that the advantages/disadvantages can be:
1) If the user has entered an incorrect email, they won't get access to the Turing test, but that is the whole point of a confirmation email.
2) The distorted image may not be readable and/or refreshable, but since we are just distorting something auto-generated by code, we can make it a little more readable than the scanned images that CAPTCHAs use.
I can only think of the above two situations. Please point out anything else that you think should be taken into consideration.
Having a CAPTCHA that covers the registration process is important to protect you from bots whose sole purpose is to generate as many users as possible with the intent of using those users to post/add content on your site with links back to a site that they are trying to improve SEO on. This is only one way in which malicious users can utilize multiple accounts on a site for their own purposes.
The registration email protects your users as much as it protects you, by providing a means of resetting lost passwords, proving ownership, etc.
Both parts should be included when validating users. I also recommend counting new user registrations per IP. Typically, locking after the second user created is fairly safe, as long as you provide a link that states why they have been blocked and a means of creating additional accounts from that IP.
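A rough sketch of that per-IP counting, assuming a single application instance (in production the counts would live in a shared store such as a database or cache):

```csharp
using System;
using System.Collections.Concurrent;

// Illustrative in-memory throttle: block further sign-ups from an IP after two
// accounts within a rolling 24-hour window.
public class RegistrationThrottle
{
    private readonly ConcurrentDictionary<string, (int Count, DateTime WindowStart)> _byIp = new();

    public bool AllowRegistration(string ip)
    {
        var now = DateTime.UtcNow;
        var entry = _byIp.AddOrUpdate(
            ip,
            _ => (1, now),
            (_, e) => now - e.WindowStart > TimeSpan.FromHours(24)
                ? (1, now)                       // window expired, start a new one
                : (e.Count + 1, e.WindowStart)); // same window, count this attempt

        return entry.Count <= 2;
    }
}
```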
None of these procedures is failsafe but together they provide a medium level of anti-spam protection. Of course, these days people defer user maintenance to social media sites like Google and Facebook.
This is a bit of a loaded question, but what precautions can be taken to make AJAX requests more secure? An example would be a reddit-style voting system where users either up-vote or down-vote an article or comment.
I need to make sure bots or users can't make more than a certain number of requests during a time period, and that voting URLs can't be guessed (to thwart bots).
I did have a look at similar questions, but the ones I checked did not answer the concerns I have above.
If there anything else that I should be aware of, then please mention it.
Use session and IP logging techniques.
For example, limit how many votes can be cast from a particular IP in one day (or another period of time).
You can validate the IP and session on the server side.
You can also output your JS from a server-side language to insert a secure random token, just as we do to avoid form spoofing.
Ajax security is not different from synchronous form submit security.
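A minimal sketch combining those two ideas, a per-session/IP vote quota and a random token that the page's JS must echo back with each vote (the same idea as an anti-forgery token); all names and limits here are illustrative:

```csharp
using System;
using System.Collections.Concurrent;
using System.Security.Cryptography;

public class VoteGuard
{
    private readonly ConcurrentDictionary<string, (int Count, DateTime WindowStart)> _votes = new();
    private readonly ConcurrentDictionary<string, string> _sessionTokens = new();

    private const int MaxVotesPerHour = 60; // arbitrary example limit

    // Generate a token at page render time and emit it into the page's JS.
    public string IssueToken(string sessionId)
    {
        var token = Convert.ToBase64String(RandomNumberGenerator.GetBytes(16));
        _sessionTokens[sessionId] = token;
        return token;
    }

    // Called by the vote endpoint for every AJAX request.
    public bool AllowVote(string sessionId, string token, string ip)
    {
        // Reject requests that don't carry the token we wrote into the page.
        if (!_sessionTokens.TryGetValue(sessionId, out var expected) || expected != token)
            return false;

        var key = sessionId + "|" + ip;
        var now = DateTime.UtcNow;
        var entry = _votes.AddOrUpdate(
            key,
            _ => (1, now),
            (_, e) => now - e.WindowStart > TimeSpan.FromHours(1)
                ? (1, now)
                : (e.Count + 1, e.WindowStart));

        return entry.Count <= MaxVotesPerHour;
    }
}
```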
Are your users going to be logged into your website? (I'm the one geek in the world who doesn't read reddit, so I don't know how that works.)
If they're logged in, you should be able to read their credentials in your web service, and track how many votes they've made. And if they're not logged in, then just reject their vote altogether.
I see the iframe/P3P trick is the most popular one around, but I personally don't like it because JavaScript + hidden fields + frames really make it look like a hack job. I've also come across a master-slave approach that uses a web service to communicate (http://www.15seconds.com/issue/971108.htm), and it seems better because it's transparent to the user and robust across different browsers.
Are there any better approaches, and what are the pros and cons of each?
My approach designates one domain as the 'central' domain and any others as 'satellite' domains.
When someone clicks a 'sign in' link (or presents a persistent login cookie), the sign in form ultimately sends its data to a URL that is on the central domain, along with a hidden form element saying which domain it came from (just for convenience, so the user is redirected back afterwards).
This page at the central domain then proceeds to set a session cookie (if the login went well) and redirect back to whatever domain the user logged in from, with a specially generated token in the URL which is unique for that session.
The page at the satellite URL then checks that token to see if it does correspond to a token that was generated for a session, and if so, it redirects to itself without the token, and sets a local cookie. Now that satellite domain has a session cookie as well. This redirect clears the token from the URL, so that it is unlikely that the user or any crawler will record the URL containing that token (although if they did, it shouldn't matter, the token can be a single-use token).
Now, the user has a session cookie at both the central domain and the satellite domain. But what if they visit another satellite? Well, normally, they would appear to the satellite as unauthenticated.
However, throughout my application, whenever a user is in a valid session, all links to pages on the other satellite domains have a ?s or &s appended to them. I reserve this 's' query string to mean "check with the central server because we reckon this user has a session". That is, no token or session id is shown on any HTML page, only the letter 's' which cannot identify someone.
A URL receiving such an 's' query tag will, if there is no valid session yet, do a redirect to the central domain saying "can you tell me who this is?" by putting something in the query string.
When the user arrives at the central server, if they are authenticated there, the central server will simply receive their session cookie. It will then send the user back to the satellite with another single-use token, which the satellite will treat just as it would after logging in (see above). That is, the satellite will now set up a session cookie on that domain, and redirect to itself to remove the token from the query string.
My solution works without script, or iframe support. It does require '?s' to be added to any cross-domain URLs where the user may not yet have a cookie at that URL. I did think of a way of getting around this: when the user first logs in, set up a chain of redirects around every single domain, setting a session cookie at each one. The only reason I haven't implemented this is that it would be complicated in that you would need to be able to have a set order that these redirects would happen in and when to stop, and would prevent you from expanding beyond 15 domains or so (too many more and you become dangerously close to the 'redirect limit' of many browsers and proxies).
Follow-up note: this was written 11 years ago, when the web was very different - for example, XMLHttpRequest was not regarded as something you could depend on, much less across domains.
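To make the satellite-side handoff concrete, here is a rough sketch in today's ASP.NET Core minimal-API style (obviously not what existed when this was written); the token store, cookie name, and query parameters are illustrative assumptions:

```csharp
using System;
using System.Collections.Concurrent;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var app = WebApplication.Create(args);

// Hypothetical store of single-use tokens; in reality this would be shared with
// the central domain (or replaced by a back-channel call to it).
var singleUseTokens = new ConcurrentDictionary<string, string>(); // token -> session id

app.MapGet("/page", (HttpContext ctx) =>
{
    // 1. A single-use token arriving from the central domain after login.
    if (ctx.Request.Query.TryGetValue("token", out var token) &&
        singleUseTokens.TryRemove(token.ToString(), out var sessionId))
    {
        ctx.Response.Cookies.Append("satellite_session", sessionId,
            new CookieOptions { HttpOnly = true, Secure = true });
        // Redirect to self without the token so it never lingers in the URL.
        return Results.Redirect("/page");
    }

    // 2. "?s" means "the central domain probably knows this user" - bounce there.
    if (ctx.Request.Query.ContainsKey("s") &&
        !ctx.Request.Cookies.ContainsKey("satellite_session"))
    {
        return Results.Redirect("https://central.example.com/whois?return=" +
            Uri.EscapeDataString("https://satellite.example.com/page"));
    }

    // 3. Otherwise serve the page normally, authenticated or not.
    return Results.Text("page content");
});

app.Run();
```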
That's a good solution if you have full control of the backend on all the domains. In my situation I only have client-side (JavaScript/HTML) control on one and full control on another, so I need to use the iframe/P3P method, which sucks :(.
OK, I seem to have found a solution: you can create a script tag whose src loads from the domain you want to set/get cookies on. Only Safari so far seems unable to SET cookies, but IE6 and FF work fine; still, if you only want to GET cookies, this is a very good approach.
The example in that article seems suspicious to me because you basically redirect to a URL which, in turn, passes variables back to your domain in a query string.
In the example, that would mean that a malicious user could simply navigate to http://slave.com/return.asp?Return=blah&UID=123 and be logged in on slave.com as user 123.
Am I missing something, or is it well known that this technique is insecure and shouldn't be used for, well, the things that example suggests (passing user IDs around, presumably to make one's identity portable)?
#thomasrutter
You could avoid having to manage all outbound links on satellites (via appending "s" to the query string) by making an AJAX call to check the 'central' domain for auth status on page load. You could avoid redundant calls (on subsequent page loads) by making only one per session.
It would be arguably better to make the auth check request server-side prior to page load so that (a) you have more efficient access to session, and (b) you will know upon page render whether or not the user is logged in (and display content accordingly).
We use cookie chaining, but it's not a good solution since it breaks when one of the domains doesn't work for the user (due to filtering / firewalls etc.). The newer techniques (including yours) only break when the "master" server that hands out the cookies / manages logins breaks.
Note that your return.asp can be abused to redirect to any site (see this for example).
You should also validate active session information against domains b, c, d, ...; this way you can only log in if the user has already logged in at domain a.
What you do is, on the domain receiving the variables, check the referrer address as well, so you can confirm the link came from your own domain and not from someone simply typing it into the address bar. This approach works well.