I have a Telerik RadTextBox in one of my .ascx files in a Sitefinity 5.4 website. When a form containing the RadTextBox is submitted and the server throws an error, and the user then goes back and tries to resubmit the form, a validation message appears even though the input from the initial submission is still showing. It looks like the input from the first submission is being treated as a watermark.
Any idea why this is happening?
Is the user going back by using the browser's back button? If so, the validation messages will still exist. Try to eliminate the errors with client-side validation first, then server-side validation. If the errors still exist, you should clear the validation messages in the load event of the control (if it is not a postback, of course).
A bot is spamming a form on my website, causing "A potentially dangerous request..." errors to be generated every few seconds. These errors occur when certain characters, like '<', are posted back in certain elements, like form fields. This means they need to be handled client-side, or else a "yellow screen" error will be produced before any server-side validation has the opportunity to run. When the website produces an error, I log it in a database and send an email, so I've been getting overrun by the errors generated by this bot.
What I Tried So Far
Google CAPTCHA, modified to work client-side.
A hidden honeypot textarea which, if modified, uses JavaScript to remove the OnClick event of the Submit button. This is checked in the onchange event of the textarea and in the OnClientClick event of the Submit button (see the sketch after this list).
A honeypot timer on the Submit button which, if the button is clicked within 5 seconds of the page loading, uses JavaScript to remove the OnClick event of the Submit button.
I replaced the textarea fields with ASP.NET TextBox fields (rendered as <input type="text" />).
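For reference, a rough sketch of the honeypot and timer checks described above (the element ids are hypothetical, and instead of removing the OnClick handler it returns false from OnClientClick to cancel the postback, which has the same effect):

    // Hypothetical ids; wired up with OnClientClick="return allowSubmit();" on the Submit button.
    var pageLoadedAt = new Date().getTime();

    function isProbablyBot() {
        var honeypot = document.getElementById("hpField");           // hidden textarea a human never touches
        var tooFast = (new Date().getTime() - pageLoadedAt) < 5000;  // clicked within 5 seconds of page load
        return (honeypot && honeypot.value !== "") || tooFast;
    }

    function allowSubmit() {
        return !isProbablyBot();   // returning false cancels the client click and the server OnClick
    }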
These methods all seemed to work temporarily, but now the bot is injecting fake form fields. Here is an excerpt from our ELMAH error reporting:
<item name="ctl00$...txtMyTextBox"><value string="..." />
<item name="__VIEWSTATE"><value string="..." />
<item name="method="><value string="..." />
You can see it's injecting "method=", which is not part of the form. It is somehow thwarting the rest of the client-side validation and allowing the server-side OnClick event of the Submit button to run, so the "dangerous" characters are included in the postback and produce an error.
The "A potentially dangerous request..." error is well documented and I'm not looking to troubleshoot that, per se. I already know that it can be disabled, but I don't want to do that. I'd like to focus on the root problem of preventing the bot from spamming the form, specifically, how to handle situations when it injects fake fields into the form.
Lastly, I should note that the ELMAH error handling we use is run before Application_Error(...) in the Global.asax file, so handling it there probably won't be an option.
I'm confused by this behavior:
I have an out-of-the-box MVC3 app. I haven't really done any customization beyond what the scaffolding template gives me.
In web.config, ClientValidationEnabled and UnobtrusiveJavaScriptEnabled are both true.
I have a class with one field using the Required annotation, one using StringLength and one using RegularExpression. When I'm editing an object, the textboxes for the properties marked with StringLength & Regex report problems instantly in the UI, but the textbox for the Required doesn't.
If I hit SAVE, then ModelState.IsValid in the controller sees the problem with the missing Required value and I get the UI error message next to the text box.
If I view the source of the page, I can see that the markup for the required property does have the data-val-required and other related attributes generated by the unobtrusive validation.
Is this expected behavior? If it is, what's the reason? If it's not, what might I be doing wrong?
Thanks! :)
As long as the page is not posting back to the server, this should be the correct behavior. The required client-validation will fire only if:
You don't enter data and try to post to the server.
You enter data in the text box and then remove it.
Otherwise the user would be inundated with error messages.
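If you did want the Required rule to fire eagerly on blur, even before the first submit attempt, one option is to override the validator's onfocusout handler. This is only a sketch: it assumes jquery.validate and jquery.validate.unobtrusive are loaded, and the form id is hypothetical.

    // Sketch only: force every field, including Required, to validate as soon as it loses focus.
    $(function () {
        var validator = $("#editForm").data("validator");   // instance created by unobtrusive validation
        if (validator) {
            validator.settings.onfocusout = function (element) {
                $(element).valid();                          // validate this field on blur
            };
        }
    });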
I've gotten to the bottom of this behavior just by banging on the keyboard some more. It's as expected. In the Create view, the behavior is as @Beavis describes. In the Edit view, unobtrusive validation does not validate the required property on tabbing BEFORE the first attempt at hitting SAVE. SAVE then does a UI validation (no postback occurs) and shows the error message next to the property. Once I've hit SAVE that first time, the property responds to tabbing: if I make it valid, the message disappears on tab; if I erase the contents of the text box, the message reappears on tab.
Thanks for everyone's help.
The project I'm working on is using the Java EditLive! rich text editor. I've been trying to make the EditLive form post via ajax, but am having some problems using IE8. Here are the steps we're taking:
Load the main page
The user clicks a link and the EditLive applet is loaded and attached to the page via ajax
The user finishes editing their document and clicks the submit button
The form posts via ajax (we're using jQuery.post())
The EditLive section is reloaded and the EditLive content is correct.
The form immediately posts again
The EditLive content is back to being blank.
Unfortunately (for debugging purposes), this is not happening in Firefox - there is only a single form post and the values are saved correctly.
From what I can tell debugging this in IE8, it looks like the submit event is getting called twice with 2 different forms. My thought is that the applet isn't getting destroyed correctly, though I've tried everything in my power to destroy it.
So I was wondering if anyone has any experience successfully submitting EditLive data via AJAX? Or maybe this is just a limitation of the product?
Any help would be greatly appreciated!
I know this is an old issue, but you likely want to look at the autoSubmit property of EditLive:
http://docs.ephox.com/display/EditLive7/AutoSubmit+Property
http://docs.ephox.com/display/EditLive/AutoSubmit+Property
I suspect that the AJAX-style submit process is somehow conflicting with EditLive and its standard behavior. I would try turning off autoSubmit and grabbing the content yourself in your jQuery posting process.
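Something along these lines is what I have in mind (a sketch only: the form id, save URL, and retrieveEditLiveContent wrapper are hypothetical, and the actual call for reading the editor content should come from the EditLive JavaScript API documentation linked above):

    // Sketch: post the EditLive content yourself with autoSubmit turned off.
    // retrieveEditLiveContent is a hypothetical stand-in for whatever call the
    // EditLive JavaScript API exposes for reading the editor's current content.
    function retrieveEditLiveContent(callback) {
        callback("<p>editor content goes here</p>");   // placeholder only
    }

    $("#editForm").submit(function (e) {               // hypothetical form id
        e.preventDefault();                            // block the normal submit (and any duplicate)
        retrieveEditLiveContent(function (html) {
            $.post("/documents/save", { body: html }, function () {
                // refresh the EditLive section here, as in your existing flow
            });
        });
    });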
I have a web form with more than 10 fields that are submitted to the database. So before submitting the values I am doing JavaScript validation.
Currently I am using JavaScript validation, which shows an alert box if an error occurs in data entry.
Is it a good practice to show alert box when JavaScript validation fails or should I use asp.net validation controls to display the error messages?
I'd avoid using an alert box. It's annoying, requires an extra click, and since it's modal, it stops the entire browser. Instead, highlight the erroneous fields/values and print a message at the top explaining that the highlighted fields need to be corrected before the user can continue. You can use ASP.NET validation or jQuery form validation; both work equally well.
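A minimal jQuery sketch of that approach (the form id, the data-required marker attribute, the field-error class, and the validation-summary div are all hypothetical; a real form would plug in its own rules):

    // Mark invalid fields and show one summary message at the top instead of an alert box.
    function showValidationErrors($invalidFields) {
        $(".field-error").removeClass("field-error");   // clear previous highlights
        $invalidFields.addClass("field-error");         // e.g. a red border defined in CSS
        $("#validation-summary")
            .text("Please correct the highlighted fields before continuing.")
            .show();
    }

    // Example: treat empty required fields as invalid on submit.
    $("#myForm").submit(function (e) {
        var $invalid = $(this).find("[data-required]").filter(function () {
            return $.trim($(this).val()) === "";
        });
        if ($invalid.length) {
            e.preventDefault();              // stop the submit
            showValidationErrors($invalid);
        }
    });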
ASP.NET validation controls are the better choice, because alerts can be intrusive. They can be set to show messages without posting back, which is ideal.
Just make sure you make it obvious when the form has failed.
From a usability perspective it is much better to use .NET validation. Depending on your alert box implementation, you may bombard the user with many alerts, which is very bad.
Also, don't rely on client-side validation only. Be sure to validate on the server side as well, and this is where .NET validation comes in handy again.
A good practice is for your code to support a fallback method. For example, if JavaScript is disabled, the user should still be able to view the error messages, if any.
Moreover, alert boxes don't look as good as styling your own divs: you can display them after some server-side validation and then use JavaScript to hide or fade out the error message container div.
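For example, a small sketch assuming the server renders its messages into a div with the hypothetical id errorSummary:

    // Fade the server-rendered error container out a few seconds after the page loads.
    $(function () {
        var $errors = $("#errorSummary");
        if ($.trim($errors.text()).length > 0) {
            setTimeout(function () {
                $errors.fadeOut(400);
            }, 5000);
        }
    });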
I think it's a personal/design preference.
Sometimes it's just more obvious when there is a JavaScript alert indicating something needs to be done. Sometimes a small red asterisk gets lost on the page.
In my opinion, alert boxes are just too annoying to see on websites these days, since they're already all you see on Windows... But if you prefer a better-looking site, use something else. If you want to make sure the message gets across, use an alert.
I want to create an error handler for an MVC project, but the error message should be displayed in a popup. I tried overriding the OnException method of the controller, but I don't know how to get back to the initial page and show the popup. I don't want to be redirected to an error page. Any advice?
The only way to do this is to do an AJAX form submission. If the save fails, show a popup. If the save succeeds, redirect the user to a new page. That's the only way to keep the user on the page. (Well, the only non-clunky way.) A rough sketch is at the end of this answer.
As mentioned in the top-level comment by zvolkov, this is very desktop-y. There shouldn't be a 100% correlation to how a desktop program works vs. how a web program works. While there are parallels, it's OK to deviate.
The hardcore standards folks will tell you not to rely 100% on an AJAX submit, since it isn't 100% browser compatible and can break on some smartphones, depending on what you're doing. Your form should be able to gracefully fall back to a standard HTTP POST action.
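A rough jQuery sketch of the approach described above (the form id, success URL, and error dialog div are hypothetical). Because the handler only runs when JavaScript is available, the form still degrades to a normal HTTP POST without it:

    // Intercept the form submit, post via AJAX, show a popup on failure, redirect on success.
    $("#editForm").submit(function (e) {
        e.preventDefault();
        $.ajax({
            url: $(this).attr("action"),
            type: "POST",
            data: $(this).serialize(),
            success: function () {
                window.location.href = "/home/saved";                    // hypothetical success page
            },
            error: function (xhr) {
                // "popup" here is just a dialog div; any modal/dialog plugin would do
                $("#errorDialog").text("The save failed: " + xhr.status).show();
            }
        });
    });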