A bot is spamming a form on my website, causing "A potentially dangerous request..." errors to be generated every few seconds. These errors occur when certain characters, like '<', are posted back in certain elements, like form fields. This means they need to be handled client side, or else a "yellow screen" error is produced before any server-side validation has a chance to run. When the website produces an error, I log it in a database and send an email, so I've been getting overrun by the errors generated by this bot.
What I Tried So Far
Google reCAPTCHA, modified to work client side.
A hidden honeypot textarea which, if modified, uses JavaScript to remove the OnClick event of the Submit button. This is checked in the onchange event of the textarea and in the OnClientClick event of the Submit button.
A honeypot timer on the Submit button which, if the button is clicked within 5 seconds of the page loading, uses JavaScript to remove its OnClick event. (A sketch of both checks follows this list.)
I replaced the textarea fields with ASP.NET TextBox fields (rendered as <input type="text" />).
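For reference, a minimal sketch of the two honeypot checks described above (element IDs such as honeypotTrap and btnSubmit are hypothetical, and the timings illustrative):

var pageLoadedAt = new Date().getTime();

function disarmSubmit() {
    // Remove the client-side click handler so the postback never fires.
    document.getElementById('btnSubmit').onclick = function () { return false; };
}

// Anything that fills in the hidden honeypot textarea is assumed to be a bot.
document.getElementById('honeypotTrap').onchange = disarmSubmit;

// Wired to the Submit button's OnClientClick: reject clicks arriving
// within 5 seconds of the page loading.
function checkSubmitTimer() {
    if (new Date().getTime() - pageLoadedAt < 5000) {
        disarmSubmit();
        return false;
    }
    return true;
}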
These methods all seemed to work temporarily, but now the bot is injecting fake form fields. Here is an excerpt from our ELMAH error report:
<item name="ctl00$...txtMyTextBox"><value string="..." />
<item name="__VIEWSTATE"><value string="..." />
<item name="method="><value string="..." />
You can see it's injecting "method=", which is not part of the form. This somehow thwarts the rest of the client-side validation and allows the server-side OnClick event of the Submit button to run, so the "dangerous" characters are included in the postback and produce the error.
The "A potentially dangerous request..." error is well documented and I'm not looking to troubleshoot that, per se. I already know that it can be disabled, but I don't want to do that. I'd like to focus on the root problem of preventing the bot from spamming the form, specifically, how to handle situations when it injects fake fields into the form.
Lastly, I should note that the ELMAH error handling we use is run before Application_Error(...) in the Global.asax file, so handling it there probably won't be an option.
Related
I would like to protect a form against bots without third-party solutions such as CAPTCHA/reCAPTCHA, and after searching the web I have a few ideas.
One of them is to have a non-existent or empty page in the form's action attribute, and, when the user enters a required field, fill in the correct page with JavaScript.
Are bots able to fire/trigger the onfocus event of an HTML element?
I intend to use this solution together with at least one more trap, like a test question, honeypots, etc.
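As a rough sketch of the empty-action idea (the form ID, field ID, and URLs here are all hypothetical):

var form = document.getElementById('contactForm');
form.action = '/nonexistent-page'; // bots that post blindly hit a dead end

// Focusing a required field reveals the real target.
document.getElementById('txtEmail').onfocus = function () {
    form.action = '/real-handler';
};

Note that bots which simply POST the form data straight to the server never execute JavaScript at all, so they cannot fire onfocus; a bot driving a real or headless browser, however, can.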
I have a Telerik RadTextBox in one of my .ascx files in a Sitefinity 5.4 website. When a form containing the RadTextBox is submitted and the server throws an error, and the user then goes back and tries to resubmit the form, a validation message appears even though the input from the initial submission is still showing. It looks like the input from the first submission is being treated as a watermark.
Any idea why this is happening?
Is the user going back using the browser's Back button? If so, the validation messages will still exist. Try to eliminate the errors with client-side validation first, then server-side validation. If the errors still exist, you should clear the validation messages in the control's load event (if it is not a postback, of course).
I'm confused by this behavior:
I have an out-of-the-box MVC3 app. I haven't really done any customization beyond what the scaffolding template gives me.
In web.config, ClientValidationEnabled and UnobtrusiveJavaScriptEnabled are both true.
I have a class with one field using the Required annotation, one using StringLength, and one using RegularExpression. When I'm editing an object, the textboxes for the properties marked with StringLength and Regex report problems instantly in the UI, but the textbox for the Required property doesn't.
If I hit SAVE, then ModelState.IsValid in the controller sees the problem with the missing Required value, and I get the UI error message next to the text box.
If I view the source of the page, I can see that the markup for the required property does have the data-val-required and other related attributes generated by the unobtrusive validation.
Is this expected behavior? If it is, what's the reason? If it's not, what might I be doing wrong?
Thanks! :)
As long as the page is not posting back to the server, this should be the correct behavior. The required client-validation will fire only if:
You don't enter data and try to post to the server.
You enter data in the text box and then remove it.
Otherwise the user would be inundated with error messages.
I've gotten to the bottom of this behavior, just by banging on the keyboard some more. It's as expected. In the Create view, the behavior is as #Beavis describes. In the edit view, unobtrusive validation prevents the required property from being validated on tabbing BEFORE the first attempt at hitting SAVE. SAVE then does a UI validation (no postback occurs) and shows the error message next to the property. Once I've hit save that first time, that property responds to tabbing. So now if I make it valid, the message disappears on tab. If I erase the contents of the text box, the message reappears on tab.
Thanks for everyone's help.
I am a beginner using AJAX and I always thought it was completely asynchronous. But I discovered that a call can be interrupted by a page reload or a page change (like clicking on a hyperlink). I was under the impression that once an AJAX call is started, it is carried out no matter what the browser does afterwards. Is that wrong?
Now to the specific problem I am having: think of an online test where users answer questions by typing into textboxes. When a textbox loses focus, an AJAX call is triggered which persists the value of the textbox to a DB. That works well when changing between textboxes. However, I also have a submit button which triggers a post action to another page. When I enter something into a textbox and click the submit button right afterwards, the call is not carried out. Moreover, when I type into a textbox, click somewhere else (also triggering the call) and swiftly click the submit button, the call is also not made. Is that expected behaviour?
The reason I am using AJAX in the first place is to persist the values so that when something unforeseeable happens, like a browser crash, the already-typed text is saved.
Is my way of thinking wrong? How would you go about solving this problem?
Thank you for your time!
AJAX is asynchronous.
When you send an AJAX request, the JavaScript engine sends it off and sets up a handler for the response.
However, if you send an AJAX request to the server and then navigate away from the page before the response is received, nothing will happen. Why? Because with each page load the entire JavaScript environment is torn down and reinitialized; it has no idea what happened on the previous page.
For your problem I would intercept the form submit action and do whatever you need to do with the data, and then submit the form.
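For example, a minimal sketch of that interception in plain JavaScript (the form/field IDs and the save URL are hypothetical):

document.getElementById('testForm').onsubmit = function () {
    var form = this;
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/SaveAnswer', true);
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            // Save finished (or failed); now do the real post. A programmatic
            // submit() does not re-fire this onsubmit handler.
            form.submit();
        }
    };
    xhr.send('answer=' + encodeURIComponent(document.getElementById('txtAnswer').value));
    return false; // block the immediate submit while the save is in flight
};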
Edit: In response to your comment. You are correct. If the AJAX request is sent, and you're not depending on its return value, then it should not matter.
I'd suggest debugging your problem with Firebug to see if the AJAX call is really being sent properly, and to confirm your server is properly processing it.
Unless you do something special with persistent local storage, all JavaScript state and in-flight AJAX calls are blown away when a new page is loaded over the current one, and also when a form is submitted.
To save things intra-page, save the data as soon as possible. E.g., save on key-up, or periodically with a timer, not just on losing focus (a sketch follows this answer).
Re submitting the page: change the on-click behavior to first store the data, then go to the new page.
All of the effects that you are seeing are normal.
Also, be sure to test on both slow (IE 6 or 7) and fast (Chrome) browsers.
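A minimal sketch of the save-early approach described above (the field ID and save URL are hypothetical):

function saveAnswer() {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/SaveAnswer', true);
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.send('answer=' + encodeURIComponent(document.getElementById('txtAnswer').value));
}

var saveTimer = null;
document.getElementById('txtAnswer').onkeyup = function () {
    // Debounce: save once typing pauses for half a second.
    clearTimeout(saveTimer);
    saveTimer = setTimeout(saveAnswer, 500);
};

// Backstop: also save every 10 seconds regardless of activity.
setInterval(saveAnswer, 10000);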
I have a web form with more than 10 fields that are submitted to the database. So before submitting the values I am doing JavaScript validation.
Currently I am using JavaScript validation and show an alert box if an error occurs in data entry.
Is it a good practice to show alert box when JavaScript validation fails or should I use asp.net validation controls to display the error messages?
I'd avoid using an alert box. It's annoying, requires an extra click, and, since it's modal, stops the entire browser. Instead, highlight the erroneous fields/values and print a message at the top, explaining that the highlighted fields need to be corrected before the user can continue. You can use ASP.NET validation or jQuery form validation; both work equally well.
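For example, a rough sketch of that highlight-and-message pattern (the form, field, and summary-div IDs are hypothetical):

function showErrors(badFields) {
    var summary = document.getElementById('errorSummary');
    summary.innerHTML = 'Please correct the highlighted fields before continuing.';
    summary.style.display = 'block';
    for (var i = 0; i < badFields.length; i++) {
        badFields[i].style.border = '2px solid red'; // highlight the field
    }
}

document.getElementById('myForm').onsubmit = function () {
    var name = document.getElementById('txtName');
    if (name.value === '') {      // example: a required field left empty
        showErrors([name]);
        return false;             // stop the submit until it's fixed
    }
    return true;
};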
ASP.NET validation controls are recommended, because alerts can be intrusive. They can be set to show messages without posting back, which is ideal.
Just make sure you make it obvious when the form has failed.
From a usability perspective it is much better to use .NET validation. Depending on your alert box implementation, you may bombard the user with many alerts, which is very bad.
Also, don't rely on client-side validation only. Be sure to validate on the server side as well. And this is where .NET validation may be handy again.
A good practice is when your code supports a fallback method. For example, if JavaScript is disabled, the user should still be able to view the error messages, if any.
Moreover, alert boxes don't look as good as your own styled divs, which you can display after some server-side validation and then hide or fade out with JavaScript.
I think it's a personal/design preference.
Sometimes it's just more obvious when there's a JavaScript alert indicating something needs to be done. Sometimes a small red asterisk gets lost on the page.
In my own opinion, alert boxes are just too annoying to see on websites these days, since they're all you see on Windows... But if you prefer a better-looking site, use something else. If you want to make sure the message gets across, use an alert.