I was reading about AJAX on IBM's website. Here is what it said:
If the application fails to communicate, it might leave users unsure
about what is actually happening. If they click a Form Submit button
and nothing happens, they might assume the Web site is broken. If the
application fails to communicate that an error occurred, users
generally assume that their action succeeded. This assumption can lead
to extreme frustration if the reality is that the action did not
succeed, especially if a user has just spent a long time working on
the content of the form. If the application informs users when there
is an error or timeout, at least the user has an opportunity to copy
and paste the data and save it locally, thus avoiding one of the worst
possible user experiences.
Now, this problem can also occur with plain JavaScript or HTML. Why does the author single out Ajax as something that can ruin your site?
It's dangerous because when you use AJAX to process form submissions, you change the usual user experience. When a user clicks the submit button, you are in charge of informing them that something is actually happening (placing a loading GIF, for example).
If the request fails, it's also your responsibility to inform the user that it failed, and perhaps offer a solution and more information. If you don't, the user will be clueless about what happened: they won't know whether their form submission actually did something, whether the information they sent was saved, and so on.
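As a concrete illustration, here is a minimal sketch of that responsibility using the fetch API (the /submit endpoint and the element IDs are made up for the example):

```javascript
const form = document.getElementById('signup-form');
const spinner = document.getElementById('loading-spinner');
const status = document.getElementById('status-message');

form.addEventListener('submit', async (event) => {
  event.preventDefault();
  spinner.hidden = false;                 // tell the user something is happening
  status.textContent = '';
  try {
    const response = await fetch('/submit', {   // hypothetical endpoint
      method: 'POST',
      body: new FormData(form),
    });
    if (!response.ok) throw new Error('Server answered ' + response.status);
    status.textContent = 'Saved successfully.';
  } catch (err) {
    // Tell the user it failed, and leave their input intact so they can copy it.
    status.textContent = 'Submission failed: ' + err.message +
      '. Your data is still in the form.';
  } finally {
    spinner.hidden = true;
  }
});
```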
AJAX is "dangerous" because it relies entirely on the developer to make it work correctly.
If the network connection is lost, for example, the AJAX request will fail, and many developers forget to use a timer to check for this kind of thing, so the user is left alone wondering what actually happened. If the request did make it to the server but the answer didn't come back, the action might have been performed (e.g. registering a new user), but the user won't know.
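A timeout guard is easy to add with AbortController; this continues the sketch above, inside the same submit handler (same hypothetical /submit endpoint):

```javascript
// Abort the request if the server does not answer within 10 seconds.
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 10000);

try {
  const response = await fetch('/submit', {
    method: 'POST',
    body: new FormData(form),
    signal: controller.signal,
  });
  // ... handle the response as before ...
} catch (err) {
  if (err.name === 'AbortError') {
    // The request may still have reached the server, so report the outcome
    // as unknown rather than as a definite failure.
    status.textContent = 'No answer from the server; your submission may ' +
      'or may not have been saved.';
  }
} finally {
  clearTimeout(timer);
}
```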
Because you can be reasonably sure that your local JavaScript code will be executed as you intended. Ajax, however, can be affected by network congestion and other external problems, so you cannot be as sure. Unless proper precautions are taken (like checking for a timeout), certain functions might never be called at all, leaving the user confused.
Static JavaScript code should have the same outcome every time it is executed (this really depends on the code, but we're talking about general, simple scripts).
AJAX, on the other hand, is always subject to external factors affecting its execution (connectivity problems, timeouts, server load, and so on).
I have seen many AJAX scripts that don't handle timeouts or failed connection/read attempts, leaving the "loading bar" (if there is one) hanging forever.
I have a website where people can interact with different objects to view specific content. I would like to know which objects get the most interactions by real people. For example there are thumbnails of images and I would like to know when a user clicks on a thumbnail to view an image.
To do this, I thought I would create a PostgreSQL table with a thumbnail_id and an IP address, where every single view is stored (to ensure every combination of thumbnail and IP is only counted once, so people can't just spam-click it).
So every time a click happens, a POST request to a /views endpoint with the thumbnail id attached is made in the background.
The problem is, some people may be incentivized to create bots to auto-click certain images from many different IPs.
So I was wondering whether I could use reCAPTCHA v3 to distinguish real users from bots, by including a token with every view request.
But is this too much for my backend to handle (since it would have to talk to Google's servers every time anybody views an image, which might be every few seconds for each user, and I would be billed while the server waits for a response)? Or would it be too expensive, since I have to pay Google on every request? Or is there some other obvious problem with this?
I'm asking because I have only ever found reCAPTCHA used for single-form validation and never for traffic measurement, even though that seems like a pretty obvious use case.
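For what it's worth, the flow described above would look roughly like this. This is only a sketch: the /views endpoint, SITE_KEY, RECAPTCHA_SECRET, and recordView() are assumptions made for the example; the grecaptcha calls and the siteverify URL come from Google's documented v3 API.

```javascript
// Client side: attach a reCAPTCHA v3 token to every view ping.
grecaptcha.ready(() => {
  grecaptcha.execute(SITE_KEY, { action: 'view' }).then((token) => {
    fetch('/views', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ thumbnail_id: thumbnailId, token }),
    });
  });
});
```

```javascript
// Server side (Node 18+/Express): verify the token before counting the view.
const express = require('express');
const app = express();
app.use(express.json());
const RECAPTCHA_SECRET = process.env.RECAPTCHA_SECRET;

app.post('/views', async (req, res) => {
  const { thumbnail_id, token } = req.body;
  const verify = await fetch('https://www.google.com/recaptcha/api/siteverify', {
    method: 'POST',
    body: new URLSearchParams({ secret: RECAPTCHA_SECRET, response: token }),
  });
  const result = await verify.json();
  // v3 returns a score from 0.0 (likely bot) to 1.0 (likely human).
  if (result.success && result.score >= 0.5) {
    await recordView(thumbnail_id, req.ip);   // hypothetical DB helper
  }
  res.sendStatus(204);
});
```

Note that this does mean one outbound call to Google per view, which is exactly the load and cost concern raised above.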
I want to implement a tool (a website that can edit a user's own websites) that receives uploads from the browser and stores them in a website specified in the request. However, I want to protect the user from other sites creating requests to my endpoint and doing dirty things with the user's data.
The industry standard for this is to include a randomized token in every rendering of the page, submit it together with the input data, and check the validity of the token on the server side before processing the submitted request.
Is there an automated mechanism for this in the Boomla framework, or is something like this planned?
Implemented, no. Planned, yes.
Currently (v0.9.1), I believe Boomla does check the Referer header, but it stops there. Until then, maybe you could implement a cryptographic solution yourself?
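The generic pattern is small enough to hand-roll; here is a minimal sketch in Node (not Boomla-specific; the session field and function names are made up):

```javascript
const crypto = require('crypto');

// When rendering the page: generate a token and remember it in the session.
function issueCsrfToken(session) {
  const token = crypto.randomBytes(32).toString('hex');
  session.csrfToken = token;
  return token;   // embed this in a hidden form field, e.g. csrf_token
}

// When handling the submission: reject unless the tokens match.
function checkCsrfToken(session, submittedToken) {
  if (!session.csrfToken || typeof submittedToken !== 'string') return false;
  const a = Buffer.from(session.csrfToken);
  const b = Buffer.from(submittedToken);
  // timingSafeEqual throws on length mismatch, so compare lengths first.
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
```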
How pressing is the issue for you?
Consider that currently, side effects are not possible (e.g. sending data), thus data leaks are not possible, and it won't cause data loss, since we have built-in version control. (We are going to expose a casual version-control mechanism that works automatically, without committing, so you'll be backed up even without committing.) Thus, in effect, your users are safe.
Please disagree if you think otherwise.
I recently implemented a small snippet of JavaScript in my Master Page that makes an AJAX request every 30 seconds to keep the session alive. I know there are several questions about keep-alives, but I haven't really been able to find answers to these specific questions.
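For reference, such a snippet amounts to something like this (the endpoint name is an assumption; any URL that touches the session will do):

```javascript
// Ping the server every 30 seconds so the ASP.NET session stays alive.
setInterval(() => {
  fetch('/KeepAlive.ashx', { credentials: 'same-origin' })
    .catch(() => { /* a missed ping is harmless; the next one retries */ });
}, 30 * 1000);
```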
My questions are:
Is it safe to do this? As in, will this have any adverse effects if there are many concurrent users/connections?
Can I implement an extended timeout using this method or will I have to use cookies?
I don't know much about cookies, but are they relatively acceptable to use now? Or will there be users who don't allow them, and if so, will they still be able to use my site?
Thanks everybody!
Yes, it's safe. As far as load goes, that's up to your hardware and how you write it, but it has no worse an effect than users refreshing the page (arguably less, considering the overhead of an AJAX call versus a standard page load).
You can adjust the timeout in the web.config if that's what you're asking...
That's a personal call on your part. Cookies have their purpose, and I find them acceptable as long as it's your domain, but do realize some people disable them, so it comes down to having a fallback.
Some things to keep in mind though:
Banks use the same methodology to keep your session going while you're checking your finances, but usually offer a popup just before to ask if you'd like to continue.
Keeping a user forcefully logged in for longer than a normal duration can be a security risk (picture someone logging in at a library or school computer and leaving their desk; should that session continue on into the next day, or longer?).
About the cookies: they are very acceptable to use now.
Almost all sites save cookies on users' machines; they have to.
There are users who don't allow them; the programmer can only get around that by having the browser's security settings changed, which raises a fundamental problem of its own, so plan for a fallback.
You can check in your browser whether a site is saving cookies.
I have found very little on this topic. I'm trying to work out a way to synchronize pages across the web without constantly reloading them to get new information, since the rate at which that would be necessary would make the page outrageously slow.
The flow I'm thinking is this:
User A alters info displayed on Page A.
Page A sends info to server.
Page B checks server for new info every 10ms or 100ms.
Page B loads Page A's new info.
I can see AJAX being fast enough to retrieve info from the server, but I have found no way to send data to a server without refreshing every 10ms, which, even using an iframe to avoid reloading the whole page, seems far too slow to me. Correct me if I'm wrong.
So my question is: is there any way I'm unaware of to do what I'm attempting? I have seen methods involving a Java server applet, but that's a bit above my head at the moment. If that's the only way, I'll learn it, but I'd love to avoid it if possible.
There are two possible interpretations of what you wrote. The first, which seems to be what you've actually said, is that you want to know how to send data with an Ajax request; the second is that you want to know how to push unsolicited data from the server to the client.
For the first: Ajax can easily add data to a request it makes. Just add query-string parameters, or make a POST request and use XHR's send method.
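For example (a bare XHR sketch; the endpoint is made up):

```javascript
// Send data to the server with a POST request via XMLHttpRequest.
const xhr = new XMLHttpRequest();
xhr.open('POST', '/update');   // hypothetical endpoint
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.onload = () => console.log('Server replied:', xhr.responseText);
xhr.send('field=value&other=42');   // the data goes in send()
```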
For the second: use Comet, i.e. keep a long-lived connection open and send data only when there is something to send.
One possible way to implement what you want is to use Comet. Facebook, for example, uses it to interact with its servers.
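A minimal long-polling loop looks like this (the /updates endpoint and applyUpdate() are assumptions; the server must hold each request open until it has news):

```javascript
// Comet via long polling: the server holds each request open until there is
// new data or a timeout, and the client immediately re-polls afterwards.
async function poll() {
  for (;;) {
    try {
      const response = await fetch('/updates');
      if (response.ok) {
        applyUpdate(await response.json());
      } else {
        await new Promise((r) => setTimeout(r, 1000));   // back off on errors
      }
    } catch (err) {
      await new Promise((r) => setTimeout(r, 1000));     // network hiccup
    }
  }
}
poll();
```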
If you can retrieve info quickly using AJAX, then you can also send info quickly with AJAX: GET requests still tell the server something. For example, look up RESTful web services.
You could use Prototype's Ajax.Updater.
I don't quite understand why client-side validation is a potential security risk, or more of a security risk than server-side validation. Can someone give me some scenarios?
Ideally you'd do both client-side and server-side validation, never just one or the other. Looking at these three scenarios, doing both is the only secure, user-friendly way:
Client Side Only: As mentioned, it doesn't take much to get around these validations if somebody wants to send malformed data to your server (such as a SQL injection). NoScript won't run the JavaScript validation code, and some browsers let the user actively change all loaded JavaScript and HTML, so a user could unhook the validation JavaScript from the controls.
Server Side Only: This is more secure than client-only by a long shot, but it cuts back on user-friendliness. Users have to send the form to the server, have it validated there, and receive an error page back saying a particular field was invalid. What's annoying is that if any of those fields were password fields, their values are not repopulated by default. For example, let's say the user didn't input a phone number correctly on an account-creation form. When the server spits back the page saying the phone number is wrong, the user corrects the phone number and hits submit again, only to receive another error page about not having entered a password (and its confirmation in the second text box), even though that wasn't the initial problem.
Client and Server Side: You get the security of server-side validation, something the user will be hard-pressed to interfere with, and the user-friendliness of input validation without having to submit the page (whether you validate through purely local JavaScript or AJAX).
If you absolutely had to pick one, server side would be the way to go. But you shouldn't ever have to pick one or the other.
Using various tools, such as Fiddler, NoScript, Web Developer, etc., I could disable the client-side JavaScript validation and modify the data being sent to your server. Depending on the type of data and what the server does with it, one could mount a SQL injection attack, attempt to compromise the server's security, or simply store bogus data.
A lightweight example: say you have client-side validation to ensure that a zip code is 5 digits or 5+4 digits. If I disable the client-side script, I can leave my 24-digit value in place. If your server doesn't check the value further, and the database can store all 24 digits, then I have saved the bogus data.
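The corresponding server-side check is tiny, which is why there is little excuse to skip it. A sketch (Node/Express; the route and field name are assumptions):

```javascript
// Server-side re-validation: never trust that the client-side check ran.
const ZIP_PATTERN = /^\d{5}(-\d{4})?$/;   // 5 digits, or 5+4 digits

app.post('/account', (req, res) => {
  const zip = String(req.body.zip || '');
  if (!ZIP_PATTERN.test(zip)) {
    return res.status(400).send('Invalid zip code');
  }
  // ... safe to store ...
});
```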
If you validate only on the client side, someone may disable JavaScript (or change the JS code, with Firebug, for example). Then all validations made in JS would be useless, and the user could insert invalid data into your system.
I assume you're talking about a web scenario?
If you're doing client-side validation with JavaScript, what happens if the user has JavaScript disabled? Then they can submit data to the server that has not been validated.
If they were sneaky, they could even post data directly to your server (bypassing your page completely).
If you do server side validation, in addition to or instead of client side validation, then you have an additional opportunity to defend against these scenarios.
Actually, there is a huge security advantage to client-side validation (in combination with server-side validation). If you validate carefully on the client, then all the traffic coming into the server should be clean, except for the attackers'. That makes it possible to do much better server-side attack detection. In the big scheme of things, that's probably the most important thing you can do to protect your applications. See the OWASP ESAPI IntrusionDetector or the OWASP AppSensor for more on this.
Oh, and obviously, if the attack starts and finishes in the client, like DOM-based XSS, then you're going to have to validate and encode on the client side.